Machine Learning: How to Fool Artificial Intelligence

Artificial intelligence has recently made significant strides in understanding language. Nevertheless, it still suffers from a potentially dangerous and alarming kind of algorithmic myopia.

Research shows that artificial intelligence systems that parse and analyze text can be confused and deceived by carefully crafted phrases.

A sentence that sounds straightforward to a human can have a strange ability to trick an artificial intelligence algorithm.

This is a massive problem, because text-mining artificial intelligence programs are used to judge job applicants, process legal documents, and assess medical claims.

Strategic changes to a handful of words can let fake news slip past an artificial intelligence detector, trigger higher payouts on health insurance claims, or thwart artificial intelligence algorithms hunting for signs of insider trading.

Di Jin, a graduate student at MIT, developed a technique to fool text-based machine learning programs, together with researchers from Singapore’s Agency for Science, Technology, and Research and the University of Hong Kong.

Jin says such adversarial examples could prove especially harmful if used to bamboozle automated systems in health care or finance, where even a minimal change can cause a great deal of trouble.

Jin and colleagues created an algorithm called TextFooler. It is capable of deceiving an artificial intelligence system without changing the meaning of a piece of text.

The algorithm uses artificial intelligence to suggest which words can be swapped for synonyms in order to fool a machine while preserving the sentence’s meaning.
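
The published TextFooler pipeline combines word-importance ranking with embedding-based synonym selection and semantic-similarity checks; the snippet below is only a rough Python sketch of that general idea, not the authors’ code. Here `classifier` (a function returning a dictionary of label probabilities for a piece of text) and `get_synonyms` are hypothetical placeholders.

```python
# Rough sketch of a TextFooler-style word-substitution attack.
# `classifier` (text -> {label: probability}) and `get_synonyms`
# are hypothetical placeholders, not TextFooler's real interfaces.

def word_importance(classifier, words, label):
    """Score each word by how much removing it lowers the classifier's
    confidence in the original label."""
    base = classifier(" ".join(words))[label]
    return [base - classifier(" ".join(words[:i] + words[i + 1:]))[label]
            for i in range(len(words))]

def text_fool(classifier, get_synonyms, text, label):
    """Swap the most important words for synonyms until the prediction
    flips, while keeping the sentence readable."""
    words = text.split()
    scores = word_importance(classifier, words, label)
    for i in sorted(range(len(words)), key=lambda j: scores[j], reverse=True):
        best_conf = classifier(" ".join(words))[label]
        for synonym in get_synonyms(words[i]):
            trial = words[:i] + [synonym] + words[i + 1:]
            probs = classifier(" ".join(trial))
            if max(probs, key=probs.get) != label:
                return " ".join(trial)          # prediction flipped
            if probs[label] < best_conf:        # keep the strongest substitution
                best_conf, words = probs[label], trial
    return " ".join(words)                      # attack did not flip the label
```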


The researchers tested their approach on several popular algorithms and data sets, and were able to reduce an algorithm’s accuracy from above 90 percent to below 10 percent. Human judges reported that the altered phrases kept their original meaning.


Machine learning works by finding subtle patterns in data, many of which are invisible to humans. That makes systems built on machine learning vulnerable to a strange kind of confusion.

For example, an image-recognition program can be fooled by an image that looks entirely normal to the human eye: subtle tweaks to the pixels of a picture of a helicopter can trick the program into thinking it is looking at a dog.
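
Such pixel-level tricks are commonly built with gradient-based methods such as the fast gradient sign method (FGSM). The PyTorch snippet below is a generic illustration of that idea, not code from the research described here; `model`, `image`, and `label` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method sketch: nudge every pixel a tiny step
    in the direction that increases the model's loss, so the picture
    looks unchanged to a person but may be misclassified.
    `image` is a batch of shape (N, C, H, W) with values in [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```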

Nevertheless, most of these deceptive tweaks can be identified with artificial intelligence, using a process related to the one used to train the algorithm in the first place.
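
One common response in that spirit is adversarial training: generating such tweaks during training and teaching the model to classify them correctly. The sketch below is a generic PyTorch illustration of that technique, not a method described in this article; `model`, `optimizer`, `images`, and `labels` are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One training step on both clean and adversarially perturbed inputs,
    so the model learns to resist small pixel-level tweaks."""
    # Craft perturbed copies with the same gradient trick shown above.
    images_adv = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images_adv), labels).backward()
    images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

    # Train on the clean and the perturbed batch together.
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(images_adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```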

Researchers are still exploring the extent of this weakness, along with its potential risks. So far, vulnerabilities have mostly been demonstrated in image and speech recognition systems.

When algorithms are used to make critical decisions in military operations or computer security, using machine learning to outfox artificial intelligence may have serious implications. The same applies anywhere someone has an incentive to deceive.

The Stanford Institute for Human-Centered Artificial Intelligence published a report last week that, among other things, highlighted the potential for adversarial examples to deceive machine learning algorithms, suggesting the technique might be used to enable tax fraud.

Meanwhile, artificial intelligence programs have become much better at generating and parsing language, but many challenges lie ahead for machine learning development.