AI phishing and social engineering are up 135%

The risk of people falling victim to malicious emails and “novel social engineering attacks” appears to be rising. The British-American cyber defense company Darktrace made that assessment in an April 2 blog post, linking the rise to the growing adoption of generative AI tools like ChatGPT.

According to Max Heinemeyer, chief product officer at Darktrace, the company observed a 135% increase in “novel social engineering attacks” across thousands of active Darktrace/Email customers from January to February 2023, a period that coincided with the widespread adoption of ChatGPT.

Heinemeyer argues that these novel social engineering emails differ significantly from typical phishing attempts.
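Because such emails can read like ordinary business correspondence, detectors built around known-bad indicators (blocklisted links, malware attachments) have little to latch onto. As a generic illustration of the alternative kind of signal, not a description of Darktrace’s method, the sketch below scores an email on a few coarse linguistic features and flags ones that drift far from a sender’s historical baseline; the features and threshold are assumptions made for the example:

```python
import re
from statistics import mean

def linguistic_features(body: str) -> dict:
    """Coarse text statistics of the kind that can shift in AI-written mail.
    The feature choice here is illustrative, not a vetted detection model."""
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    return {
        "text_volume": len(words),
        "avg_sentence_len": mean(len(s.split()) for s in sentences) if sentences else 0,
        "punct_density": sum(body.count(c) for c in ",;:-") / max(len(words), 1),
        "link_count": len(re.findall(r"https?://", body)),
    }

def deviates(features: dict, baseline: dict, tolerance: float = 2.0) -> bool:
    """Flag an email whose features drift far from the sender's baseline.
    `baseline` maps feature name -> (mean, stddev) learned from past mail."""
    for name, value in features.items():
        mu, sigma = baseline[name]
        if sigma and abs(value - mu) / sigma > tolerance:
            return True
    return False
```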

The pattern suggests that generative AI tools like ChatGPT give threat actors a way to create complex, targeted attacks quickly and at scale.

Generative AI also makes life harder for the people on the receiving end. As Heinemeyer describes the problem, security awareness training yields diminishing returns as malicious emails grow more sophisticated-looking and normal-seeming, leaving employees distrustful of even the legitimate messages they receive.

Heinemeyer predicts that this erosion of trust in digital communications will continue, or worsen, as generative AI is increasingly exploited to deceive.

Using AI to better understand employee behavior and patterns, helping employees ward off threats

Darktrace anticipates that future AI tools will help email users avoid threats by learning their routines and needs. Developers could use AI to model employee behavior and communication patterns, with users’ interactions with their own inboxes serving as a rich data source.
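A minimal sketch of what that modeling might look like, assuming per-message metadata as input and using off-the-shelf anomaly detection rather than anything Darktrace has described, is to train an unsupervised model on a user’s historical email behavior and then score new activity against it:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-message features for one user: hour sent, recipient count,
# body length, and number of links. Real systems use far richer signals.
history = np.array([
    [9, 1, 120, 0], [10, 2, 200, 1], [14, 1, 90, 0],
    [11, 3, 250, 1], [15, 1, 110, 0], [9, 2, 180, 0],
])

# Learn what "normal" looks like for this user's inbox interactions.
model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A 3 a.m. message to dozens of recipients with several links stands out.
suspicious = np.array([[3, 40, 600, 5]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```

An isolation forest is just one convenient choice here; the point is that the model learns a per-user notion of “normal” instead of relying on global rules.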

According to Heinemeyer, the future rests in a partnership between artificial intelligence and humans, with the algorithms determining whether a communication is malicious or benign and taking that burden of responsibility off the person, rather than leaving individuals to judge every message themselves.

While this stance offloads responsibility for security by using AI to combat email threats, it also envisions a world where even more AI, and a fairly intrusive one at that, is required to fight off the malicious AI and human actors trying to do the very same thing: break into an individual’s business email.

It isn’t a perfect answer, but it makes for an easier, if more expensive, proposition for corporations and cybersecurity organizations.