The latest craze making the rounds on social media is a facial recognition program called ImageNet Roulette. Users upload a selfie, and an artificial intelligence classifies the photo with whatever tag it deems appropriate for the person in the image. Some think the program is fun; others think it is downright racist. According to numerous user reports, it tends to classify anyone of any skin tone who uploads a poorly lit image with tags like "Black, Black person, blackamoor, Negro, Negroid."
Some of the tags it assigns are innocuous, like "psycholinguist" or "scientist," while some male subjects were labeled "rape suspect" and other less-than-flattering descriptions. Another troubling, stereotypical identification comes for men who are bald or balding. Those individuals are more often than not labeled "skinheads," although the result depends on the images in that category and how closely your uploaded photo matches them.
Obviously, if you're a balanced, open-minded person who believes you simply don't judge a book by its cover, these frivolous labels probably won't concern you much. However, the bigger question of why the AI seems slanted toward levying such outrageous labels on people's faces is what might keep folks in the AI community up at night.
The folks behind the ImageNet Roulette program point out that they have nothing to do with the labels that are assigned to photographs. The categories are pulled from the ImageNet/WordNet database and the makers of ImageNet Roulette are clear that it often categorizes people in "dubious and cruel ways." The creators say this is because the underlying training data contains these categories along with images of people who have been labeled with those categories.
ImageNet Roulette was meant as a test case to show how politics propagate through technical systems, often without the creators of those systems being aware of it. The labels on the images come from Mechanical Turk workers (humans) who classified masses of images for very little money. Those who have delved into the ImageNet tag categories say that looking at the photos associated with each tag hints at how the algorithm thinks.
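The dynamic the creators describe, that a system can only reproduce the categories present in its training data, can be sketched with a toy nearest-neighbor classifier. The vectors and data below are invented for illustration (real systems compare high-dimensional image embeddings, not 2-D points), but the point holds: whatever labels the crowd workers wrote are exactly what prediction returns.

```python
# Toy sketch: a nearest-neighbor "classifier" can only emit labels that
# exist in its training set, so biased crowd-sourced labels are
# reproduced verbatim at prediction time.

# Hypothetical crowd-labeled training data: (feature vector, label) pairs.
# The 2-D vectors are invented stand-ins for image embeddings.
training_data = [
    ((0.9, 0.1), "psycholinguist"),
    ((0.8, 0.2), "scientist"),
    ((0.1, 0.9), "skinhead"),
]

def predict(features):
    """Return the label of the closest training example (squared distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda pair: sq_dist(pair[0], features))[1]

# A new image whose embedding happens to land near the "skinhead" examples
# gets that label, regardless of whether it is appropriate.
print(predict((0.2, 0.8)))  # -> skinhead
```

No amount of tweaking the distance function fixes this: the offensive category has to be removed from the training data itself, which is essentially what the ImageNet maintainers later did.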
For instance, many people labeled as "psycholinguists" tend to be white and have uploaded an image that looks like a faculty headshot, notes LifeHacker. Anyone willing to risk being offended can try the tool for themselves here. One of our staff members in the office was labeled a "skinhead" and was, understandably, highly offended.
It reveals the deep problems with classifying humans - be it race, gender, emotions or characteristics. It's politics all the way down, and there's no simple way to 'debias' it.
— Kate Crawford (@katecrawford) September 16, 2019
Facebook keeps facial recognition data on its users, and we recently talked about how to remove that data. However, at least Facebook only puts a name with a face, and doesn't try to further categorize and stereotype individuals based on their looks.
(Top Image Source: Cisco)