We’re All a Target: Generative AI and the Automation of Spear Phishing

By Jim Downey, Senior Product Marketing Manager, F5.

Friday, 15th March 2024

Not long ago, we could pick out phishing emails by their bad spelling, grammatical errors, and unidiomatic syntax. We could spot widely used, generic ploys like the Nigerian prince scam. Most of us have not faced well-polished, targeted spear phishing, because researching our backgrounds and crafting personalised messages has been too costly for criminals. With generative AI, that’s rapidly changing, and as security professionals, we need to prepare for the consequences.

Generative AI enables end-to-end automation of spear phishing, lowering its cost and broadening its use. Consider the work an attacker must do to craft an effective spear phishing message for a business email compromise (BEC). The attacker picks a target, researches their social media, discovers their closest connections, and picks out the target’s interests. With this information, the attacker crafts a personalised email in a tone of voice intended to avoid suspicion. This requires thoughtfully following leads and applying psychological intuition.

Could this work be automated? Certainly. Attackers already automate the scraping of social media content and use credential stuffing to take over accounts for information gathering. Through the same automation, attackers can build a knowledge graph of a target’s life.
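To make the term “knowledge graph” concrete: at its simplest, it is a store of subject-relation-object triples that can be queried by entity. The sketch below is a minimal, purely illustrative Python structure with hand-entered, hypothetical facts; the point is only the shape of the data, not how an attacker would populate it.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal triple store: (subject, relation, object)."""

    def __init__(self):
        # subject -> list of (relation, object) pairs
        self.triples = defaultdict(list)

    def add(self, subject, relation, obj):
        self.triples[subject].append((relation, obj))

    def about(self, subject):
        """Return everything known about a subject."""
        return self.triples.get(subject, [])

# Hypothetical facts entered by hand for illustration; the article's
# point is that scraping can populate a structure like this at scale.
kg = KnowledgeGraph()
kg.add("target", "works_at", "Example Corp")
kg.add("target", "reports_to", "CFO Jane Doe")
kg.add("target", "interest", "trail running")

print(kg.about("target"))
```

Once personal details sit in a queryable structure like this, generating a message that references an employer, a manager, and a hobby becomes a template-filling exercise, which is exactly what makes end-to-end automation plausible.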

With this knowledge graph, attackers can feed highly personal information into a ChatGPT-like service, one without ethical safeguards, to create targeted and effective spear phishing messages. The attacker could create entire sequences of messages spanning multiple channels, from email to social media, with messages originating from multiple fake accounts, each with a well-crafted persona tailored to the target’s trust propensities.

There are signs that this threat is imminent. Reports of new attack tools for sale on the dark web, including WormGPT and FraudGPT, indicate criminals have begun to adapt generative AI to nefarious purposes, including phishing. While the use of this technology has not yet reached large-scale, end-to-end automation, the pieces are coming together, and the economics of cybercrime make the development nearly inevitable.

Within the economy of cybercrime, specialisation drives innovation. The World Economic Forum (WEF) estimates that, measured as a country, cybercrime would be the world’s third-largest economy behind the United States and China, with costs expected to reach $8 trillion in 2023 and $10.5 trillion by 2025. The cybercrime economy includes vendors with specialisations: vendors who sell stolen credentials, vendors who provide access to compromised accounts, and vendors offering IP address proxying across tens of millions of residential IP addresses. Moreover, there are phishing-as-a-service providers offering complete toolkits, from email templates to real-time phishing proxy sites.

As vendors compete for the business of criminals, the highest prizes will go to those providing an end-to-end service at the lowest cost, a dynamic likely to drive forward the automation of spear phishing. We can imagine organisations that specialise in various types of data gathering around targets, in data aggregation, and in LLMs focused on specific industries or excelling at distinct types of fraud.

Given that spear phishing is likely to reach many more targets, organisations need to bolster their existing anti-phishing practices:

· Uplevel phishing awareness training: It has long been important to regularly educate employees about the dangers of phishing, how to recognise suspicious emails, and what steps to take if they encounter a potential phishing attempt. However, many organisations train employees to recognise phishing emails by their spelling and grammar mistakes. Training must now go deeper, teaching people to scrutinise any request from a non-trusted, non-verified source. When conducting simulated phishing campaigns to test employees’ ability to identify phishing, use messages that are well-written, professional, targeted at specific employees, and apparently from legitimate sources.

· Defend against real-time phishing proxies: Attackers often use phishing to bypass multi-factor authentication (MFA) via real-time phishing proxies. Criminals fool users into entering their credentials and one-time passwords into a site the attackers control, then relay those secrets to the real application to gain access. (The first sketch after this list illustrates why origin-bound authentication resists this relay.)

· Defend more rigorously against account takeovers: Criminals gain control of massive numbers of accounts through credential stuffing using bots. Beyond committing financial fraud, criminals scrape additional personal data from compromised accounts to use in further phishing attacks. Defending effectively against bots requires rich signal collection and machine learning; the second sketch after this list shows one such signal.

· Use AI to battle AI: With criminals exploiting generative AI to commit fraud, organisations should leverage AI in their defence. F5 partners with organisations to take advantage of rich signal collection and AI to battle fraud. F5 Distributed Cloud Account Protection monitors transactions in real time across the user journey to detect malicious activity and deliver accurate fraud detection. Detecting fraud within the application reduces the harm done by phishing that slips through. Inspecting traffic with AI requires decrypting it efficiently, which you can accomplish with TLS orchestration. (The third sketch after this list illustrates the anomaly-scoring pattern.)
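On real-time phishing proxies: one reason origin-bound authentication such as WebAuthn resists this relay is that the browser, not the user, records the origin it actually connected to inside the signed client data, so the relying party can reject assertions minted on a look-alike domain. The sketch below shows only that one check, under the assumption of a hypothetical expected origin; a real WebAuthn verification also validates the challenge, type, and signature.

```python
import base64
import json

EXPECTED_ORIGIN = "https://app.example.com"  # hypothetical legitimate site

def check_webauthn_origin(client_data_b64: str) -> bool:
    """Verify the origin embedded in WebAuthn clientDataJSON.

    The browser fills in `origin`, so a proxy serving a look-alike
    domain cannot forge the legitimate value. (A full flow also checks
    the challenge, the `type` field, and the assertion signature.)
    """
    client_data = json.loads(base64.urlsafe_b64decode(client_data_b64))
    return client_data.get("origin") == EXPECTED_ORIGIN

# A victim phished through a proxy produces client data bound to the
# proxy's origin, which this check rejects:
phished = base64.urlsafe_b64encode(json.dumps(
    {"type": "webauthn.get", "origin": "https://app-example.evil"}
).encode()).decode()
assert check_webauthn_origin(phished) is False
```

One-time passwords, by contrast, carry no binding to the site that collected them, which is precisely why real-time proxies can replay them against the genuine application.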
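On account takeovers: one of the simplest credential-stuffing signals is a single source attempting logins against many distinct accounts. This toy sketch assumes only an in-memory stream of (IP, username) attempts and a hypothetical threshold; it is meant to show the shape of a signal, not a production defence.

```python
from collections import defaultdict

DISTINCT_ACCOUNT_THRESHOLD = 20  # hypothetical tuning value

ip_to_usernames: dict[str, set[str]] = defaultdict(set)

def record_login_attempt(ip: str, username: str) -> bool:
    """Record an attempt; return True if the IP looks like credential stuffing.

    Legitimate users touch a handful of accounts; stuffing bots cycle
    through leaked credential lists, hitting many distinct usernames.
    """
    ip_to_usernames[ip].add(username)
    return len(ip_to_usernames[ip]) >= DISTINCT_ACCOUNT_THRESHOLD

# Simulated bot replaying a leaked credential list from one address:
for i in range(25):
    flagged = record_login_attempt("203.0.113.7", f"user{i}@example.com")
print("flagged:", flagged)
```

As the article notes, attackers proxy through tens of millions of residential IP addresses, which is exactly why per-IP counters alone are insufficient and why effective bot defence layers many signals with machine learning.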
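On using AI to battle AI: the article does not disclose F5’s models, but a common pattern for AI-assisted fraud detection is unsupervised anomaly scoring over per-transaction features. A minimal sketch using scikit-learn’s IsolationForest on synthetic, invented features (amount, hour of day, account age), purely to illustrate the shape of such a pipeline:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic feature matrix: [amount, hour_of_day, account_age_days]
normal = np.column_stack([
    rng.normal(60, 20, 1000),    # typical purchase amounts
    rng.normal(14, 4, 1000),     # daytime activity
    rng.normal(900, 300, 1000),  # established accounts
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious transaction: large amount, 3 a.m., brand-new account.
suspect = np.array([[4800, 3, 2]])
print(model.predict(suspect))            # -1 means anomalous
print(model.decision_function(suspect))  # lower score = more anomalous
```

Richer signal collection, the behavioural and device data the article mentions, simply widens this feature matrix, giving the model more dimensions in which fraudulent sessions stand apart from legitimate ones.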

What’s next?

Generative AI clearly poses a new set of security challenges. With the onset of automated spear phishing, we need to unlearn many of our heuristics of trust. While in the past we may have trusted based on the appearance of professionalism, we now need more rigorous protocols for determining the veracity of communications. We need to become more suspicious in this new age of misinformation campaigns, deepfakes, and automated spear phishing, and organisations will need to deploy AI in defence at least as rigorously as criminals use it against us.
