Artificial intelligence is often a great help for social networks, but errors can cause considerable damage. Facebook has now experienced this firsthand: its AI equated Black people with monkeys.
One of the biggest problems with the use of artificial intelligence (AI) is bias. The technology is supposed to mimic the human brain, but when an AI is trained, it absorbs the biases present in its training data. As a result, minorities in particular are often disadvantaged or discriminated against.
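To make that abstract point concrete: such bias is typically detected by comparing a model's error rates across demographic groups. The following Python sketch illustrates the idea with invented predictions; it is not Facebook's tooling, and every label and group name in it is hypothetical.

```python
# Minimal bias audit sketch (not Facebook's system): compare a trained
# classifier's error rates across demographic groups.
# All predictions below are made up for illustration.
from collections import defaultdict

# (predicted_label, true_label, group) for a hypothetical image classifier
predictions = [
    ("cat", "cat", "group_a"),
    ("dog", "cat", "group_a"),
    ("cat", "cat", "group_b"),
    ("dog", "dog", "group_b"),
    ("dog", "cat", "group_b"),
]

errors = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
for predicted, actual, group in predictions:
    errors[group][0] += predicted != actual
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%}")

# A large gap between groups is a red flag that the training data
# under-represents one group or encodes historical bias.
```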
Facebook has now experienced the consequences of such a failure itself, according to a report in the New York Times.
How the Facebook AI discriminates
The discriminatory mix-up occurred in connection with a video from the British Daily Mail: Facebook's algorithm confused Black people with monkeys.
The video was titled “White man calls police about Black men at the port.” After the video played, users were asked whether they “want to continue to see videos about primates.”
This was a “clearly unacceptable mistake,” a Facebook spokeswoman said in a statement. The company immediately took the recommendation software in question out of service.
We apologize to anyone who saw these offensive recommendations.
How was the racist Facebook AI discovered?
The case was uncovered by a former Facebook employee, who posted a screenshot of the recommendation in a Facebook feedback forum. In response, a product manager in Facebook's video department called the incident “unacceptable” and promised to fix the problem.
Speaking to the New York Times, the former employee said that fixing racism problems was not a priority for Facebook.
Facebook can’t keep making mistakes and just say they’re sorry.
Facebook isn’t the only AI problem child
But Facebook is not the only social media giant struggling with the pitfalls of image recognition. Twitter, too, has had to make repeated improvements in this area.
In early May 2021, the short-message service updated photo cropping in its feed. The decision followed criticism of the cropping algorithm, which had shown white people more often than Black people in automated crops.
The criticism also prompted Twitter to let external researchers probe its algorithm for problems as part of the Algorithmic Bias Bounty Challenge. The result: the AI prefers slim, young faces with light skin.
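For context, Twitter's cropping relied on a saliency model that scores how “interesting” each image region is; the crop is then placed where that score peaks, so any skew in the saliency scores skews the crops toward certain faces. The Python sketch below illustrates only the cropping mechanism with a stand-in saliency map; it is not Twitter's actual model, and the array and window sizes are assumptions for illustration.

```python
# Sketch of saliency-based cropping (not Twitter's actual model):
# a saliency map scores every pixel, and the crop window is placed
# where the summed saliency is highest. The saliency model itself
# is assumed; a random array stands in for its output here.
import numpy as np

def best_crop(saliency: np.ndarray, crop_h: int, crop_w: int) -> tuple[int, int]:
    """Return the top-left corner of the crop window with maximal saliency."""
    h, w = saliency.shape
    best, best_score = (0, 0), -np.inf
    for y in range(h - crop_h + 1):
        for x in range(w - crop_w + 1):
            score = saliency[y:y + crop_h, x:x + crop_w].sum()
            if score > best_score:
                best, best_score = (y, x), score
    return best

# Audit idea in the spirit of the bounty challenge: place two faces in
# one image and count, over many images, whose face the crop centers on.
saliency = np.random.rand(10, 10)  # stand-in for a real saliency model's output
print(best_crop(saliency, 4, 4))
```

If the saliency model systematically assigns higher scores to light-skinned faces, a crop selection like this reproduces that preference in every automatically cropped photo, which is exactly the pattern the bounty challenge surfaced.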
With challenges like the Algorithmic Bias Bounty Challenge, Twitter wants to help build a community around the ethical aspects of artificial intelligence. It is an approach Facebook would do well to take as a model.