It was something we’d already been speculating about and expecting to happen soon: criminals using AI to generate nearly undetectable phishing emails. Unfortunately, AI-generated phishing is already a reality. A recent study that analyzed volumes of phishing messages from 2023 found that many had very likely been generated by AI. Now the question is: What do we do about this new threat?
A sobering analysis
The report by Abnormal Security confirms that what we thought was a future threat is already here. The researchers gathered messages that their clients had reported as suspected phishing throughout the year and analyzed them for signs of AI. Using CheckGPT, a tool that detects machine-generated text, they discovered that thousands of the messages were likely AI-generated phishing scams. The tool works by analyzing the likelihood that each word in a message was generated by an AI model, given the context that precedes it. If that likelihood came back consistently high, they considered it a strong indicator that the text may have been generated by AI rather than by a human.
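Abnormal has not published CheckGPT’s internals, but the per-word likelihood approach the report describes resembles standard perplexity-based detection. The sketch below illustrates that general technique using an off-the-shelf GPT-2 model from Hugging Face; the model choice and the decision threshold are assumptions for illustration, not details from the report.

```python
# A minimal sketch of likelihood-based AI-text detection, in the spirit of
# tools like CheckGPT (whose internals are not public). It scores each token's
# probability under a reference language model; consistently high per-token
# probabilities (low perplexity) are a hint, not proof, of machine generation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def mean_token_logprob(text: str) -> float:
    """Average log-probability the model assigns each token given its context."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss,
        # i.e. the negative mean log-probability of the tokens.
        loss = model(ids, labels=ids).loss
    return -loss.item()

email = "Dear customer, your account requires immediate verification."
score = mean_token_logprob(email)
# The -3.0 cutoff is illustrative only; real detectors are calibrated on
# large labeled corpora of human-written and machine-generated text.
print("possibly AI-generated" if score > -3.0 else "likely human-written")
```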
Criminals adopt AI
Bad actors are leveraging AI across a variety of email attack types, including credential phishing, malware delivery, and payment fraud. They have clearly embraced the malicious use of AI. And while AI developers like OpenAI have placed limits on what their chatbots will produce, cybercriminals have responded by building their own malicious generative AI tools, including WormGPT and FraudGPT.
Why we need to worry
There are a handful of reasons why the new trend of bad actors utilizing AI is so problematic:
- AI can already generate images that replicate existing websites and logos, and it’s improving all the time.
- These messages are more convincing because AI-generated text lacks the typos and grammatical errors we are taught to look for in suspicious messages.
- Previously, many cybercriminals relied on pre-existing templates to set up their phishing messages. This made indicators of compromise easier for traditional security software to detect, since a large percentage of attacks reused the same domain names or malicious links found in those templates. Generative AI, on the other hand, lets criminals craft unique content in milliseconds, making it far more difficult to recognize as phishing (see the sketch after this list).
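To see why shared templates are easier to catch, consider a toy filter that matches messages against a feed of known indicators of compromise (IOCs). Everything in this sketch, including the domain list and the function name, is hypothetical; it simply shows how exact IOC matching flags a reused template but misses a freshly generated message.

```python
# A toy illustration (not any vendor's engine) of why template reuse helps
# traditional filters: IOCs such as known malicious domains can be matched
# exactly, but a uniquely generated message that reuses none of them sails
# past this kind of check.
import re

# Hypothetical IOC feed harvested from previously seen template attacks.
KNOWN_BAD_DOMAINS = {"secure-login-update.example", "acct-verify.example"}

def flags_known_ioc(message: str) -> bool:
    """Flag a message only if it links to a previously catalogued bad domain."""
    domains = re.findall(r"https?://([\w.-]+)", message)
    return any(d in KNOWN_BAD_DOMAINS for d in domains)

templated = "Click http://secure-login-update.example/reset to restore access."
unique_ai = "Hi Dana, the revised invoice is at http://billing-portal-417.example/doc"

print(flags_known_ioc(templated))  # True  -- matches the shared IOC
print(flags_known_ioc(unique_ai))  # False -- fresh domain, nothing to match
```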
The bottom line: With these new tools at their disposal, criminals can now launch AI-generated phishing scams with more sophistication and at greater volume than ever before.
What we can do
According to the report from Abnormal Security, 91% of security professionals reported experiencing AI-enabled cyberattacks in the past six months. This is why organizations need to remain vigilant when it comes to cybersecurity training for their staff. But that is no longer enough. They also need to turn to AI-based cybersecurity solutions that use machine learning and LLMs to stop AI-generated phishing attacks before they can even reach employee mailboxes. A rough sketch of that idea follows.
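As an illustration of what pre-delivery LLM screening could look like, here is a minimal sketch that asks a general-purpose model to classify a message before it is delivered. The prompt, the model name, and the quarantine logic are assumptions for illustration, not the architecture of any product mentioned above.

```python
# A minimal sketch of pre-delivery email screening with an LLM classifier.
# Real mail-security products combine many signals (sender reputation,
# behavioral baselines, link analysis); this shows only the LLM step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def looks_like_phishing(subject: str, body: str) -> bool:
    """Ask an LLM to classify a message before it reaches the mailbox."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable classifier works
        messages=[
            {"role": "system",
             "content": "You are an email security filter. Reply with exactly "
                        "PHISHING or SAFE."},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content.strip().upper() == "PHISHING"

if looks_like_phishing("Urgent: verify your payroll details",
                       "Your direct deposit is on hold. Confirm here: ..."):
    print("quarantined before delivery")
```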