
AI phishing and social engineering are up 135%

The risk of people falling victim to malicious emails and "novel social engineering attacks" appears to be rising. The British-American cyber defense company Darktrace made the claim in a blog post on April 2, linking the rise to the growing adoption of generative AI tools such as ChatGPT.

According to Max Heinemeyer, chief product officer at Darktrace, the company observed a 135% increase in "novel social engineering attacks" across thousands of active Darktrace/Email customers from January to February 2023, a period that coincided with the widespread adoption of ChatGPT.

Heinemeyer argues that these novel social engineering emails differ significantly from typical phishing attempts.

The pattern shows that generative AI, like ChatGPT, gives threat actors a way to quickly and efficiently create complex, targeted attacks.

It also makes defenders' jobs harder. Heinemeyer points to a growing problem caused by generative AI: as malicious emails become increasingly sophisticated-looking and normal-seeming, recipients learn to distrust everything in their inbox, and traditional security-awareness training yields diminishing returns.


Heinemeyer predicts that the erosion of trust in digital interactions will continue, or worsen, as generative AI is increasingly used to deceive and exploit recipients.

Using AI to understand employee behavior and patterns, helping users ward off threats

Darktrace anticipates that future AI tools will help email users avoid threats by learning their routines and habits. Developers could use AI to model employee behavior and patterns, with users' interactions with their own inboxes (who they write to, when, and in what style) serving as a rich data source for spotting deviations from the norm.
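The idea can be illustrated with a toy sketch: build a statistical baseline of a user's past emails and flag new messages whose metadata deviates sharply from it. The feature names and threshold below are illustrative assumptions for the sake of the example, not Darktrace's actual method.

```python
# Toy behavioral-baseline anomaly check for incoming email metadata.
# Features and threshold are hypothetical, chosen only to illustrate
# the "learn normal patterns, flag deviations" approach.
from statistics import mean, stdev

def baseline(history):
    """Per-feature (mean, stdev) computed from a user's past emails."""
    features = history[0].keys()
    return {f: (mean(e[f] for e in history), stdev(e[f] for e in history))
            for f in features}

def anomaly_score(email, stats):
    """Average absolute z-score across features; higher = more unusual."""
    scores = []
    for f, (mu, sigma) in stats.items():
        scores.append(abs(email[f] - mu) / sigma if sigma else 0.0)
    return sum(scores) / len(scores)

# A user's typical emails (link count, body length) vs. an outlier.
history = [
    {"links": 1, "length": 120}, {"links": 0, "length": 150},
    {"links": 2, "length": 110}, {"links": 1, "length": 140},
]
stats = baseline(history)
suspicious = {"links": 9, "length": 45}   # far outside the baseline
print(anomaly_score(suspicious, stats) > 2.0)  # prints True
```

A real system would of course use far richer signals (sender relationships, timing, tone), but the principle is the same: the model judges each message against what is normal for that specific user.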

According to Heinemeyer, the future rests on collaboration between artificial intelligence and humans: it is dangerous to let algorithms alone judge whether a communication is harmful or benign, and shifting the burden of responsibility entirely from person to machine could do harm.

While this stance offloads part of the responsibility for security by using AI to combat email threats, it also envisions a world in which ever more AI, and intrusive AI at that, is required to fight malicious AI and human attackers pursuing the same goal: breaking into an individual's business emails.

Although it isn’t a perfect answer, it makes for an easier but more expensive proposition for corporations and cybersecurity organizations.

