LeCun’s Thoughts about Artificial Intelligence Bias

Yann LeCun, Facebook’s chief artificial intelligence scientist and a Turing Award winner, announced his exit from the social networking platform Twitter. This came after a lengthy and often acrimonious dispute regarding racial bias in artificial intelligence.

Unlike most other artificial intelligence researchers, LeCun has often aired his political views, and he has previously engaged in public debate with colleagues such as Gary Marcus. This time, however, his penchant for debate ran afoul of what he termed the linguistic codes of modern social justice.

Everything started on June 20 with a tweet about PULSE, a new AI photo-upsampling model from Duke University, which had de-pixelated a low-resolution input image of Barack Obama into a photo of a white man. Brad Wyble, an associate professor at Penn State University, tweeted that the image speaks volumes about the dangers of bias in artificial intelligence.

LeCun responded that ML systems are biased when their data is biased: the face-upsampling model makes everyone look white because the system was trained mainly on pictures of white people from the FlickrFaceHQ dataset. He added that if you train the exact same system on a dataset from Senegal, everyone will look African.
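LeCun's point about data bias can be illustrated with a minimal sketch (a hypothetical toy example, not the actual PULSE model): when a model faces an ambiguous input and its training labels are heavily imbalanced, the learned prior pulls every prediction toward the majority group.

```python
from collections import Counter

# Hypothetical toy dataset: labels are heavily imbalanced,
# mirroring a face dataset dominated by one demographic group.
labels = ["A"] * 90 + ["B"] * 10  # 90% group A, 10% group B

# A trivial "model": with no discriminative features, it falls
# back on the label prior it observed during training.
majority_label, _ = Counter(labels).most_common(1)[0]

def predict(ambiguous_input):
    # For an uninformative input (e.g., a heavily pixelated face),
    # the learned prior dominates the prediction.
    return majority_label

# Every ambiguous input is resolved toward the majority group.
predictions = [predict(x) for x in range(10)]
print(predictions.count("A"))  # prints 10: all predictions are "A"
```

Flipping the proportions in `labels` flips the outcome, which is the substance of LeCun's Senegal remark; it also shows why dataset composition alone, as Gebru argues below, is only part of the story.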

Timnit Gebru, a research scientist, technical co-lead of the Ethical Artificial Intelligence Team at Google, and co-founder of the Black in AI group, tweeted in response that she was sick of that framing. Many scholars, she noted, have tried to explain that the harms caused by machine learning cannot be reduced to dataset bias alone.

LeCun and Others

She added that even amid worldwide protests, people do not listen to marginalized voices or try to learn from them; instead, they assume they are experts in everything. Pointing to advocates such as Ruha Benjamin, an associate professor of African American Studies at Princeton University, Gebru said that such advocates must lead and the rest of the field must follow.

Gebru is known for fighting gender and racial bias in facial recognition systems and other artificial intelligence algorithms, and she has advocated for ethics and fairness in AI for years. With MIT Media Lab computer scientist Joy Buolamwini, she led the Gender Shades project, which revealed that commercial facial recognition software was markedly less accurate for darker-skinned women than for lighter-skinned men.

In a 2020 CVPR talk, “Computer vision in practice: who is benefiting and who is not?”, Gebru again addressed the role of bias in artificial intelligence. She said that many people now understand the need for more diverse datasets, but that treating diverse data alone as fairness and ethics falls short: you cannot ignore structural and social problems.

LeCun replied that his comment had targeted the specific case of the Duke model and its dataset. He continued that the consequences of bias are far more dire in a deployed product than in an academic paper, adding that it is engineers, not machine learning researchers, who must be more careful.