Facebook Hacks its Artificial Intelligence Programs

Instagram encourages its billion users to add filters to their photos to make them more shareable. In February 2019, some Instagram users started editing their pictures with a different audience in mind: Facebook's automated porn filters.

Facebook depends heavily on moderation powered by artificial intelligence, and it says the technology is particularly good at spotting explicit content. Nevertheless, some users found that they could sneak past Instagram's filters by overlaying patterns such as dots or grids on rule-breaking displays of skin. That meant more work for Facebook's human content reviewers.
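To make the trick concrete, here is a minimal, hypothetical sketch of the kind of edit involved: a sparse dot pattern laid over a photo changes the pixels only slightly for a human viewer, yet can be enough to shift an image classifier's output. The filename and dot spacing are stand-ins invented for this illustration.

```python
# Hypothetical illustration of the evasion trick: overlay a sparse dot
# pattern on a photo. The classifier itself is not shown here.
import numpy as np
from PIL import Image

img = np.array(Image.open("photo.jpg").convert("RGB"))   # placeholder filename
dotted = img.copy()
dotted[::15, ::15] = [255, 255, 255]   # white dot every 15 pixels in each direction
Image.fromarray(dotted).save("photo_dotted.jpg")
# In the incident described above, images altered in this spirit slipped past the nudity filter.
```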

Artificial intelligence engineers at Facebook responded by training their system to recognize banned images bearing such patterns, but the fix was short-lived: users adapted by switching to different designs, says Manohar Paluri, who leads Facebook's work on computer vision. His team eventually tamed the problem of AI-evading nudity by adding another machine learning system that checks for patterns such as grids on photos.
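Facebook has not published how that pattern checker works. As a rough stand-in only, the sketch below uses a hand-written frequency heuristic: a regular grid or dot overlay shows up as sharp, isolated peaks in an image's 2D Fourier spectrum. The function name, threshold, and synthetic demo are all invented for this example.

```python
# Toy heuristic standing in for a learned overlay-pattern detector.
import numpy as np

def looks_like_regular_overlay(gray: np.ndarray, peak_ratio: float = 10.0) -> bool:
    """Flag grayscale images whose spectrum has unusually strong off-centre peaks."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64))))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    spectrum[cy - 5:cy + 6, cx - 5:cx + 6] = 0   # ignore DC / very low frequencies
    return spectrum.max() > peak_ratio * spectrum.mean()

# Synthetic demo: noise alone passes, noise plus a faint grid gets flagged.
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(480, 640)).astype(np.float64)
print(looks_like_regular_overlay(photo))        # expected: False
photo[::20, :] += 80                            # horizontal grid lines
photo[:, ::20] += 80                            # vertical grid lines
print(looks_like_regular_overlay(photo))        # expected: True
```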


That system also tries to edit the patterns out by emulating nearby pixels. The process does not perfectly recreate the original, but it lets the porn classifier do its work without getting tripped up.
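One common way to "emulate nearby pixels" is inpainting. The sketch below, which assumes OpenCV and a placeholder filename, shows how a detected grid overlay could be masked and filled in from surrounding pixels before the image reaches a classifier. It is an illustration of the general technique, not Facebook's actual pipeline.

```python
# Illustrative sketch: mask a grid overlay and fill it in from neighbouring pixels.
import cv2
import numpy as np

def strip_overlay(image_bgr: np.ndarray, overlay_mask: np.ndarray) -> np.ndarray:
    """Replace masked pixels with values interpolated from their neighbours."""
    # Telea inpainting fills each masked pixel from nearby unmasked pixels;
    # the result is approximate rather than a perfect reconstruction.
    return cv2.inpaint(image_bgr, overlay_mask, 3, cv2.INPAINT_TELEA)

# Demo: draw a white grid onto a photo, then remove it before classification.
image = cv2.imread("photo.jpg")                     # placeholder filename
mask = np.zeros(image.shape[:2], dtype=np.uint8)
mask[::20, :] = 255                                 # horizontal grid lines
mask[:, ::20] = 255                                 # vertical grid lines
overlaid = image.copy()
overlaid[mask > 0] = 255                            # simulate the evasion overlay
restored = strip_overlay(overlaid, mask)            # approximate original
# `restored` is what would then be handed to the nudity classifier.
```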

A few months later, that cat-and-mouse incident helped prompt Facebook to create an "artificial intelligence red team" to better understand the vulnerabilities and blind spots of its AI systems. Other large companies and organizations, including Microsoft and government contractors, are assembling similar teams.

In recent years, those companies have spent heavily on deploying artificial intelligence systems for tasks such as understanding the content of text or images. Now some early adopters are asking how those systems can be fooled and how to protect them. Mike Schroepfer, Facebook's chief technology officer, says that if their automated systems fail at large scale, it will be a big problem.