Twitter’s automatic cropping feature was supposed to crop photos around the people in view.
Turns out that it overwhelmingly favors white women over everyone else.
When researchers fed a picture of a Black man and a white woman into the system, the algorithm chose to display the white woman 64 percent of the time and the Black man only 36 percent of the time, the largest gap for any demographic groups included in the analysis. For images of a white woman and a white man, the algorithm displayed the woman 62 percent of the time. For images of a white woman and a Black woman, the algorithm displayed the white woman 57 percent of the time.
Source: Twitter’s Photo Crop Algorithm Favors White Faces and Women | WIRED
AI-based systems do this a lot with photos. Meanwhile, billionaire-owned social media platforms apply the same AI-based techniques to text to find “misinformation.” Those methods are undoubtedly biased as well, but we pretend otherwise.
AI-based systems rely primarily on pattern matching via machine learning. The network is trained by feeding it large amounts of data (photos or text, for example) and letting it learn to recognize patterns on its own. The trouble is that the “clues” driving a pattern match may not be the right ones. In one well-known example, an image classifier learned to identify animals, sort of: the training photos of wolves mostly had snow in the background, so the model learned that snow means wolf, and other animals photographed against snow were classified as wolves. In other words, the salient characteristics it latched onto were incorrect, but that is how basic machine learning works.
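To make the wolf/snow failure concrete, here is a minimal sketch (not Twitter’s system, and entirely synthetic data) of how a classifier latches onto a spurious cue. Each “photo” is reduced to two made-up features: whether the background is snow, and a noisy genuine animal feature. Because wolves in the training set are almost always photographed on snow, a simple logistic-regression model learns to weight the snow feature far more heavily than the real one, and then misclassifies a non-wolf on snow.

```python
import math
import random

random.seed(0)

# Toy training data: x1 = snow background (spurious cue),
# x2 = a genuine but noisy animal feature (e.g. muzzle shape).
# Wolves are almost always photographed on snow in this dataset.
def make_example(is_wolf):
    snow = 1.0 if random.random() < (0.95 if is_wolf else 0.05) else 0.0
    animal = random.gauss(1.0 if is_wolf else 0.0, 1.5)  # weak real signal
    return [snow, animal], 1 if is_wolf else 0

train = [make_example(i % 2 == 0) for i in range(1000)]

# Plain logistic regression trained by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

for _ in range(200):
    for x, y in train:
        g = predict(x) - y          # gradient of the log loss
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# The weight on the snow feature dwarfs the weight on the real
# animal feature: the model "matched" the background, not the animal.
print("weights:", w)

# A non-wolf photographed on snow (animal feature = 0) is
# confidently classified as a wolf anyway.
not_a_wolf_on_snow = [1.0, 0.0]
print("P(wolf | non-wolf on snow) =", predict(not_a_wolf_on_snow))
```

The same dynamic shows up whenever the training data contains an accidental correlation: the model has no notion of which features are causally relevant, only which ones predict the label in the data it was shown.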