SlashNext Ties 1,265% Surge in Phishing Attacks to ChatGPT


In the first months after OpenAI released its ChatGPT chatbot in November 2022, security researchers warned that the wildly popular generative AI technology could be used by cybercriminals for nefarious efforts, including phishing and business email compromise (BEC) campaigns.

In January, threat intelligence researchers with cybersecurity firm WithSecure listed phishing emails as one of seven ways threat actors could leverage ChatGPT and similar tools, writing that “experiments demonstrated in our research proved that large language models can be used to craft email threads suitable for spear phishing attacks.”

More recently, an IBM experiment found that ChatGPT can write phishing emails that are almost as convincing as those created by humans, and can write them much faster.

A report released this week put some numbers to those concerns. Cybersecurity company SlashNext found that in the nearly 12 months since OpenAI released ChatGPT to the public, the number of malicious phishing emails jumped 1,265%, with a 967% increase in credential phishing, the most common first step in data breaches.

In credential phishing attacks, bad actors use phishing emails or other means to trick people into handing over personal information or usernames and passwords, giving the attackers access to their systems and corporate networks.

The authors of SlashNext’s State of Phishing 2023 report noted that ChatGPT reached more than 100 million users in the first few months after being released, adding that “some of the most common users of large language model (LLM) chatbots are cybercriminals leveraging the tool to help write business email compromise … attacks and systematically launch highly targeted phishing attacks.”

“We cannot ignore statistics like this,” SlashNext CEO Patrick Harr said in a statement. “While there has been some debate about the true influence of generative AI on cybercriminal activity, we know from our research that threat actors are leveraging tools like ChatGPT. … [A]n increase in the volume of these threats of over 1,000% corresponding with the time frame in which ChatGPT was launched is not a coincidence.”

Lowering the Technical Barrier


To create the report, researchers with SlashNext Threat Labs analyzed billions of threats – including link-based threats, malicious attachments, and natural language messages – across email, mobile devices, and browsers from the fourth quarter of 2022 through the third quarter of this year.

They also researched cybercriminal behavior and activity on the dark web, including how hackers were using generative AI tools and chatbots, and surveyed more than 300 cybersecurity professionals.

ChatGPT and other generative AI chatbots not only help create more convincing phishing messages more quickly – as IBM found – but also lower the barrier for bad actors who want to launch such campaigns. Less-skilled hackers now have the tools to run far more complex phishing attacks.

“The launch of ChatGPT at the end of the year is not a coincidence in the exponential growth of malicious phishing emails as the use of chatbots and jailbreaks contributed to the increase as more cybercriminals were able to launch sophisticated attacks quickly,” the researchers wrote.

The Need to Bolster Protections


Mika Aalto, co-founder and CEO of security awareness training firm Hoxhunt, noted the threat that AI and LLMs like ChatGPT pose to enterprises, adding that organizations need to put improving their defenses against phishing at the top of their security to-do lists. That includes integrating human threat intelligence with a company’s “protect-detect-respond” capabilities and training workers at scale to recognize and report phishing attacks.

“AI lowers the technical barrier to create a convincing profile picture and impeccable text, not to mention code malware,” Aalto told Security Boulevard in an email. “The threat landscape is shifting incredibly fast now with the introduction of AI to the game.”

However, the good news is that AI can also be used by defenders to protect against sophisticated attacks, he said.

Among the other key findings in SlashNext’s report: 68% of all phishing emails are text-based BEC attacks – phishing aimed at organizations to steal money or information – and, on average, there are 31,000 phishing threats per day this year.

“People are still the most targeted and vulnerable part of any organization,” the researchers wrote. “The rise in multi-stage attacks between email, mobile, and collaboration tools demonstrates how cyberattacks have grown in sophistication by targeting less protected channels, like mobile. Hackers still find phishing the most effective tool to perpetrate a breach in an organization.”
