AI-generated phishing emails, including those created by ChatGPT, present a potential new threat to security professionals, Hoxhunt says.

Amid all the buzz around ChatGPT and other AI applications, cybercriminals have already started using AI to generate phishing emails. For now, human attackers are still more adept at crafting successful phishing attacks, but the gap is closing, according to a new report from security training company Hoxhunt released on Wednesday.
ChatGPT-generated vs. human-generated phishing campaigns
Hoxhunt compared phishing campaigns generated by ChatGPT to those created by humans to determine which was more likely to trick an unsuspecting victim.
To conduct the experiment, the company sent phishing simulations designed either by human social engineers or by ChatGPT to 53,127 users in 100 countries. Users received the simulations in their inboxes just as they would any other email. The test was configured to track three possible responses:
- Hit: The user successfully reports the phishing simulation as malicious via the Hoxhunt threat report button.
- Miss: The user does not interact with the phishing simulation.
- Failure: The user takes the bait and clicks on the malicious link in the email.
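The three outcomes above amount to a simple classification of user behavior. As a hypothetical sketch (Hoxhunt's actual tracking logic is not public), it might look like this:

```python
from enum import Enum

class Outcome(Enum):
    HIT = "hit"          # user reported the simulation via the threat report button
    MISS = "miss"        # user did not interact with the simulation
    FAILURE = "failure"  # user clicked the malicious link

def classify(reported: bool, clicked: bool) -> Outcome:
    """Map a user's behavior on a phishing simulation to one of the
    three outcomes tracked in the study. Reporting takes precedence,
    since a user who clicks but then reports has still raised the alarm."""
    if reported:
        return Outcome.HIT
    if clicked:
        return Outcome.FAILURE
    return Outcome.MISS
```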
Results of Hoxhunt's phishing simulation
Ultimately, human-generated phishing emails claimed more victims than those created by ChatGPT. Specifically, the failure rate for human-generated messages was 4.2%, while the rate for AI-generated ones was 2.9%. According to the report, this means human social engineers outperformed ChatGPT by around 69%.
One of the positive findings of the study is that security training can be effective in thwarting phishing attacks. More security-conscious users were far more likely to resist the temptation to engage with phishing emails, whether human-generated or AI-generated. The percentage of people who clicked a malicious link fell from more than 14% among the least-trained users to between 2% and 4% among the most-trained.
SEE: Security awareness and training policy (TechRepublic Premium)
Results also vary by country:
- U.S.: 5.9% of surveyed users were deceived by human-generated emails, while 4.5% were deceived by AI-generated messages.
- Germany: 2.3% were deceived by humans, while 1.9% were deceived by AI.
- Sweden: 6.1% were deceived by humans, while 4.1% were deceived by AI.
Current cybersecurity defenses can still counter AI phishing attacks
Although human-created phishing emails proved more convincing than AI-generated ones, that lead may not last, especially as ChatGPT and other AI models improve. The test was also conducted before the release of GPT-4, which promises to be more capable than its predecessor. AI tools will certainly evolve, and cybercriminals who use them for malicious purposes will pose a growing threat to organizations.
On the plus side, protecting your organization against phishing emails and other threats requires the same defenses and coordination whether the attacks are created by humans or by AI.
“ChatGPT allows criminals to launch perfectly formulated phishing campaigns at scale, and while this removes a key indicator of a phishing attack – bad grammar – other indicators are easily observable with the trained eye,” said Hoxhunt CEO and co-founder Mika Aalto. “As part of your holistic cybersecurity strategy, be sure to focus on your employees and their messaging behavior, because that’s what our adversaries are doing with their new AI tools.
“Embed security as a shared responsibility across the organization with ongoing training that empowers users to spot suspicious messages and rewards them for reporting threats until detecting human threats becomes a habit.”
Security advice for IT and users
To that end, Aalto offers the following tips.
For IT and security teams
- Require two- or multi-factor authentication for all employees accessing sensitive data.
- Empower all employees with the skills and confidence to report a suspicious email; such a process should be transparent.
- Provide security teams with the resources to analyze and address employee threat reports.
For users
- Hover over any link in an email before clicking it. If the link seems out of place or unrelated to the message, report the email as suspicious to the IT or security team.
- Review the From field to make sure the email address contains a legitimate business domain. If the address points to Gmail, Hotmail, or another free service, the message is likely a phishing email.
- Confirm a suspicious email with the sender before taking action. Use a method other than email to contact the sender about the message.
- Think before you click. Social engineering phishing attacks attempt to create a false sense of urgency, tricking the recipient into clicking a link or interacting with the message as quickly as possible.
- Pay attention to the tone and voice of an email. For now, AI-generated phishing emails are written in a formal and stilted way.
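The From-field check in the second tip can even be partially automated. This is a minimal sketch, assuming a hand-maintained list of free mail providers and an `expected_domain` parameter supplied by the caller (both are illustrative, not part of the Hoxhunt report):

```python
from email.utils import parseaddr

# Hypothetical list of free mail providers; extend as needed.
FREE_MAIL_DOMAINS = {"gmail.com", "hotmail.com", "outlook.com", "yahoo.com"}

def sender_domain_is_suspicious(from_header: str, expected_domain: str) -> bool:
    """Flag a From header whose domain is a free mail service or does
    not match the business domain the message claims to come from."""
    _, address = parseaddr(from_header)              # strip display name
    domain = address.rpartition("@")[2].lower()      # text after the last "@"
    return domain in FREE_MAIL_DOMAINS or domain != expected_domain.lower()
```

Note that a matching domain alone is not proof of legitimacy, since display names and lookalike domains can still mislead; the check only filters the most obvious cases.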
Read next: As a cybersecurity blade, ChatGPT can cut both ways (TechRepublic)