Generative AI tools like ChatGPT are sparking an increase in sophisticated email attacks, according to a report released Wednesday by a global, cloud-based email security company.
Security leaders have worried about the possibilities of AI-generated email attacks since ChatGPT was released, and we’re starting to see those fears validated, noted the report from Abnormal Security.
The company reported that it has recently stopped a number of attacks that contain language strongly suspected to be written by AI.
“High-end threat actors have always used artificial intelligence. Generative AI isn’t a big deal for them because they already had access to tools to enable these kinds of attacks,” said Dan Shiebler, Abnormal’s head of machine learning and author of the report.
“What generative AI does is commoditize sophisticated attacks so we will see more of them,” he told TechNewsWorld.
“We have seen an increase in business email compromise (BEC) attacks, which these kinds of technologies make easier to do,” he continued.
“The release of ChatGPT was a consumer milestone, but the release of GPT-3 in 2020 enabled threat actors to use AI in email attacks,” he added.
Mika Aalto, co-founder and CEO of Hoxhunt, a provider of enterprise security awareness solutions in Helsinki, told TechNewsWorld that attackers are adopting AI technology to create more convincing BEC campaigns and develop more sophisticated BEC attack kits that are then sold on the dark web.
“According to our own research, human social engineers are still better at crafting phishing emails than large language models, but that gap is closing,” he said. “Hackers are improving at prompt engineering and circumventing guardrails against the misuse of ChatGPT for BEC campaigns.”
“One pretty scary application of this technology is iterative resending of an attack,” noted Shiebler.
“A system can send an attack, determine if it made it through to the recipients, and if it didn’t, modify the attack repeatedly,” he explained. “Essentially, it learns how the defense is functioning and modifies the attack to take advantage of that.”
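The feedback loop Shiebler describes can be sketched as a toy simulation. Everything here is a hypothetical stand-in for illustration: `is_blocked` plays the role of an email defense, and `mutate` plays the role of the attacker's rewrite step (which in practice might call a generative model). No real attack tooling or filter is shown.

```python
# Toy illustration of the adaptive resend loop: a sender rewrites a
# message until a (simulated) filter no longer blocks it. The filter
# and mutation logic are hypothetical stand-ins, not real tooling.

BLOCKLIST = {"urgent wire transfer", "verify your password"}

def is_blocked(message: str) -> bool:
    """Simulated keyword filter standing in for an email defense."""
    return any(phrase in message.lower() for phrase in BLOCKLIST)

def mutate(message: str) -> str:
    """Hypothetical rewrite step; an attacker might use an LLM here."""
    rewrites = {"urgent wire transfer": "time-sensitive payment",
                "verify your password": "confirm your credentials"}
    for old, new in rewrites.items():
        message = message.replace(old, new)
    return message

def adaptive_send(message: str, max_attempts: int = 5):
    """Send-and-modify loop: probe the defense, rewrite on rejection."""
    for attempt in range(1, max_attempts + 1):
        if not is_blocked(message):
            return message, attempt   # got through
        message = mutate(message)     # blocked: rewrite and retry
    return message, max_attempts

final, attempts = adaptive_send("Please action this urgent wire transfer now.")
print(attempts)  # 2: the toy filter is evaded after one rewrite
```

The point of the sketch is the loop structure, not the trivial keyword swap: the sender treats delivery success as a training signal, which is exactly why defenses that rely on static patterns of past threats struggle against it.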
In its report, Abnormal demonstrated how generative AI was used in three attacks on its customers — a credential phishing attack, a traditional BEC attack, and a vendor fraud attack.
These three examples are only a small sample of the AI-generated email attacks that Abnormal is now seeing on a near-daily basis, the report noted.
Unfortunately, it continued, as the technology continues to evolve, cybercrime will evolve with it, and both the volume and sophistication of these attacks will continue to increase.
No More Fractured English
Generative AI tools can increase the effectiveness of a phishing campaign, especially those originating outside the United States.
“Many email attacks originating outside the U.S. are written by non-native English speakers, resulting in emails with obvious grammatical errors and an unusual tone of voice that trigger suspicion in recipients,” explained Dror Liwer, co-founder of Coro, a cloud-based cybersecurity company based in Tel Aviv, Israel.
“Generative AI allows the sender to create a customized, conversational, extremely credible email that would trigger no suspicion, resulting in more users falling into the trap,” he told TechNewsWorld.
“Proper context and grammar make the content more believable and less likely to be suspicious to the user,” added James McQuiggan, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
“Additionally,” he told TechNewsWorld, “generative AI can pull information from the internet about an organization to create a targeted or more believable spear phishing campaign.”
Joey Stanford, head of global security and privacy at Platform.sh, a global platform-as-a-service provider, noted that email attacks crafted with generative AI might appear more realistic and convincing because they use sophisticated linguistic techniques and large datasets of phishing emails.
“This allows bad actors to automatically generate new, compelling phishing emails that are more difficult to detect,” he told TechNewsWorld. “Generative AI tools like OpenAI’s ChatGPT may be behind the 135% increase in scam emails using these techniques revealed in a recent Darktrace report.”
Fighting AI With AI
Stanford maintained that organizations could protect themselves at the network level against email attacks crafted with generative AI by using cybersecurity tools with self-learning AI. Those tools, he explained, can detect and respond to anomalous and malicious email activity in real time without relying on prior knowledge of past threats.
“These tools can also help organizations to educate their employees on how to spot and report phishing emails and enforce security policies and best practices across the network,” he said.
He acknowledged that those tools were new and undergoing rapid development, but fighting AI with AI appears to be the best solution to the problem for several reasons. Those include:
- Generative AI attacks are dynamic and adaptive and can evade traditional security models that rely on prior knowledge of past threats.
- Self-learning AI tools can detect and respond to anomalous and malicious email activity in real time without human intervention or predefined rules.
- AI tools can also analyze the content and context of emails and texts and flag any suspicious or malicious ones for further investigation or action.
- AI tools can help to educate and empower data science and security teams to collaborate and build a proactive and holistic AI security program.
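The behavior-based detection these points describe can be illustrated with a minimal sketch: learn a baseline from a sender's prior messages, then score a new message by how far it departs from that baseline. The profiling method, field names, and thresholds here are illustrative assumptions, not any vendor's actual model.

```python
from collections import Counter

# Minimal sketch of behavior-based email scoring: build a per-sender
# baseline from message history, then flag messages that deviate from it.
# The word-frequency profile is a deliberately simple stand-in for the
# richer behavioral signals a real system would learn.

def build_profile(messages):
    """Learn a sender's baseline vocabulary from prior messages."""
    words = Counter()
    for m in messages:
        words.update(m.lower().split())
    total = sum(words.values())
    return {w: c / total for w, c in words.items()}

def anomaly_score(profile, message):
    """Fraction of the message that falls outside the baseline (0-1)."""
    tokens = message.lower().split()
    unseen = sum(1 for t in tokens if t not in profile)
    return unseen / max(len(tokens), 1)

history = ["weekly status report attached",
           "status update for the project attached"]
profile = build_profile(history)

routine = anomaly_score(profile, "status report attached")
suspicious = anomaly_score(profile, "please purchase gift cards immediately")
print(routine < suspicious)  # True: the unusual request scores higher
```

Because the baseline is learned from observed behavior rather than from signatures of past attacks, a well-written AI-generated email with perfect grammar can still stand out, which is the core argument for this class of defense.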
Beyond AI to Behavior Analytics
However, the generative AI problem can’t be solved in the long term with more AI, countered John Bambenek, principal threat hunter at Netenrich, an IT and digital security operations company in San Jose, Calif.
“What is needed is looking at what is normal and abnormal from a behavior analytics standpoint and to realize that email is insecure and non-securable,” he told TechNewsWorld. “The more something matters, the less it should rely on email.”
“The key is still the same: think twice before taking action on an email, especially if it’s something sensitive, like a financial transaction or a request for authentication,” he added.
Whether an email is generated by an AI, bot, or human, the steps for vetting it remain the same, advised McQuiggan. A recipient should ask three questions: Is this email unexpected? Is it from someone I don’t know? Are they asking me to do something unusual or in a hurry?
“If the answer is yes to any of those questions, take the extra time to verify the information in the email,” he said.
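McQuiggan's three-question check reduces to a simple rule. The function below is a toy encoding of it; the inputs are the reader's own yes/no judgments about a message, not automated detection.

```python
# Toy encoding of the three vetting questions: if any answer is yes,
# verify the email through another channel before acting on it.

def needs_verification(unexpected: bool,
                       unknown_sender: bool,
                       unusual_or_urgent: bool) -> bool:
    """Return True if the email warrants extra verification."""
    return unexpected or unknown_sender or unusual_or_urgent

# An expected email from a known sender with a routine request passes;
# any single red flag triggers verification.
print(needs_verification(False, False, True))  # True
```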
“Taking the extra few moments to check the links, the email’s source, and the request can spare the organization the costs that follow when someone clicks a link and exposes it to a data breach,” he advised.