Research of Potential Dangers of Artificial Intelligence

From new stalking methods to phishing campaigns: there are plenty of ways that artificial intelligence could cause harm if it fell into the wrong hands. A team of researchers set out to rank the potential criminal applications of artificial intelligence over the next fifteen years.

By using fake video and audio to impersonate people, the technology can cause many kinds of harm. Threats range from discrediting public figures to extorting money by impersonating someone's relative or child over a video call, to influencing public opinion.

The ranking was established after scientists from UCL (University College London) compiled a list of 20 AI-enabled crimes drawn from popular culture, academic papers, and news reports. A few dozen experts then discussed the severity of each threat at a two-day seminar.

Participants were asked to rank the list in order of concern based on four criteria: how easily the crime could be carried out, how difficult it would be to stop, the potential for criminal gain or profit, and the harm it could cause.
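To make the methodology concrete, here is a minimal sketch of how such a ranking could be computed, assuming a simple unweighted average over the four criteria. The crime names and 1-5 ratings below are illustrative placeholders, not the study's actual data, and the study's real aggregation method may differ.

```python
# Illustrative sketch: aggregating expert ratings for AI-enabled crimes.
# All names and scores are hypothetical, not taken from the UCL study.

CRITERIA = ("ease", "difficulty_to_stop", "profit", "harm")

# Hypothetical 1-5 ratings per criterion for a few of the 20 threats.
ratings = {
    "deepfakes": {"ease": 4, "difficulty_to_stop": 5, "profit": 4, "harm": 5},
    "driverless-vehicle attacks": {"ease": 2, "difficulty_to_stop": 3, "profit": 2, "harm": 5},
    "AI-authored fake news": {"ease": 4, "difficulty_to_stop": 4, "profit": 3, "harm": 4},
}

def severity(scores: dict) -> float:
    """Unweighted mean across the four criteria (one simple choice)."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Sort threats from most to least severe and print the result.
for crime in sorted(ratings, key=lambda c: severity(ratings[c]), reverse=True):
    print(f"{crime}: {severity(ratings[crime]):.2f}")
```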

Deepfakes may, in principle, sound less worrying than killer robots. Nevertheless, the technology is capable of causing a great deal of harm very quickly, and it is hard to detect and stop. Relative to other AI-enabled tools, the experts judged deepfakes to be the most severe threat.

There are already examples of fake content undermining democratic processes, including in the United States. Last year a doctored video of House Speaker Nancy Pelosi, in which she appeared drunk, picked up more than 2.5 million views on Facebook.

The United Kingdom organization Future Advocacy similarly used artificial intelligence to create a fake video ahead of the 2019 general election, showing Jeremy Corbyn and Boris Johnson endorsing each other for prime minister. The video was not malicious, but it underscored the potential of deepfakes to influence national politics.

The University College London researchers stated that deepfakes will only get harder to detect as they become more credible and sophisticated. Some algorithms already identify deepfakes online with success, but there are many uncontrolled channels through which modified material can spread. The researchers warn that this will lead to widespread distrust of audio and visual content.
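As an illustration of what such automated detection can look like, here is a minimal sketch of frame-level deepfake screening, assuming a binary image classifier fine-tuned on real versus synthetic faces. The weights file `deepfake_classifier.pt` and the frame-sampling interval are hypothetical placeholders; production detectors are considerably more sophisticated.

```python
# Minimal sketch of frame-level deepfake detection. The classifier
# weights and decision setup are hypothetical, for illustration only.
import cv2                      # pip install opencv-python
import torch
from torchvision import models, transforms

# Standard ImageNet-style preprocessing for each sampled frame.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A small CNN with a two-way head: class 0 = real, class 1 = fake.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
# model.load_state_dict(torch.load("deepfake_classifier.pt"))  # hypothetical weights
model.eval()

def fake_probability(video_path: str, sample_every: int = 30) -> float:
    """Average the classifier's 'fake' probability over sampled frames."""
    capture = cv2.VideoCapture(video_path)
    probs, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                logits = model(batch)
            probs.append(torch.softmax(logits, dim=1)[0, 1].item())
        index += 1
    capture.release()
    return sum(probs) / len(probs) if probs else 0.0
```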

Furthermore, with autonomous cars just around the corner, driverless vehicles were identified as a realistic delivery mechanism for explosives, and as potential weapons of terror in their own right. Equally achievable, the report says, is the use of artificial intelligence to author fake news: the societal impact of propaganda should not be underestimated, especially since the technology already exists.

Some applications will be so pervasive that defeating them will be near-impossible. That is the case with AI-driven phishing attacks that flood media channels with carefully crafted messages, impossible to distinguish from genuine ones. Another example is large-scale blackmail, where artificial intelligence poses the threat of harvesting large personal datasets and information from social media.

Participants also pointed to the proliferation of AI systems used for critical applications such as financial transactions and public safety.