Artificial intelligence (AI) is transforming the way we use technology – for good and bad. In cybersecurity, it can help network defenders, but it’s also giving a leg up to the threat actors. Always among the first to adopt new technologies, they are weaponizing generative AI (GenAI) tools in ever greater numbers.
That means more phishing emails which look flawless, fake faces or voices asking victims to transfer money, and new low-quality malware generated by tech novices.
The UK’s National Cyber Security Centre (NCSC) warns that, over the next two years, AI “will almost certainly continue to make elements of cyber-intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats.”
- ESET has been named a Leader in the IDC MarketScape for Consumer Digital Life Protection (CDLP) and SOHO segments.
- We believe that powerful yet lightweight technologies, paired with a comprehensive AI strategy, are one of the top strengths of ESET.
- According to IDC MarketScape: “ESET is a suitable choice for individuals, families, and SOHO users seeking reliable and lightweight digital life protection. Its prevention-first approach is ideal for those who value proactive security measures that do not compromise device performance”.
To make sure you and your organization don’t end up being a victim, take time to understand how these new AI-driven attack vectors work.
From old tricks to AI-powered threats
Large language model (LLM)-powered GenAI tools offer potentially significant productivity enhancements in roles as diverse as customer service to coding. They’re also reshaping the way we search for things online. Gartner predicts search engine volume will drop by a quarter by next year, thanks to the impact of chatbots. But as transformative as they’ve been for legitimate work, they’re also empowering malicious actors.
AI brings speed, scale and accuracy to the cybercrime underground. Previously, phishing campaigns relied on generic messages and scams often as unconvincing as the “Nigerian prince” (419) advance-fee fraud. These messages had to be built from scratch and often contained spelling and grammatical errors that made them easy to spot. Video and audio content was near-impossible to fake convincingly.
Thanks to AI, all that has changed. Now, threat actors can benefit from technology which enables:
- Faster, large-scale cyber-attacks which can be launched in just a few clicks
- More convincing content, including perfect native-speaker language, spelling and grammar, as well as fake video/audio
- Personalized and targeted content, created by scraping publicly available data sources on a victim
Let’s look at those threats in more detail:
1. Deepfake impersonation: when both voices and faces lie
GenAI-powered deepfakes are changing the rules about what content we can and can’t trust. As the technology rapidly improves, it has reduced in price, to the point where virtually anyone could launch:
- Attacks on employees which mimic the voice or video of a senior executive, urging them to make a big-money fund transfer/payment. One finance worker at a multinational was tricked into making a $25.6m payment after a conference call with a deepfake-generated CFO and other colleagues
- Attacks on friends and family that might trick them into paying a ransom for a non-existent kidnapping, or wire money to cover for medical emergency of their loved ones
- Extortion attempts enabled by deepfake-generated images/videos of the victim
- More sophisticated attempts to bypass HR filters to gain remote employment with an organization (as North Korea has been doing for some years)
We all think we can spot synthetic content online, but the truth is somewhat different. In fact, research shows that it can fool most of us. And attacks are increasing. Gartner claims 62% of organizations have experienced a Deepfake attempt in the past 12 months.
2. Phishing emails that look perfect
GenAI has made it much easier to craft convincing, highly personalized emails targeting you and your colleagues. With access to compromised accounts and publicly available information, tools can be taught to suggest the perfect moment to “hijack” conversations and insert malicious messages into real email threads.
In the first five months of 2025, a third (32%) of phishing emails contained a high volume of text – indicating use of LLMs. Without the ability to spot fake emails by their poor grammar or typos, you need better cyber defenses and updated security awareness training.
3. Smishing and chatbot-driven scams
SMS phishing campaigns are also increasingly being powered by GenAI, for similar reasons. It can create hyper-personalized, flawlessly written English-language messages designed to trick you into clicking. And it can do this at scale, even adapting dynamically to improve click-throughs.
How many of us have seen a text from a delivery company requesting we open and click to confirm our details? GenAI will do its best to mimic those legitimate messages.
4. Automated recon and adaptive malware
AI isn’t just good at imitating human writing style to bypass phishing filters, it can make the bad guys’ job easier by automating other tasks that once took time, resource and skill. This includes scanning internet-facing systems for open ports and exploitable vulnerabilities. The NCSC warns: “The most significant AI cyber development will highly likely come from AI-assisted vulnerability research and exploit development (VRED).”
Taken together, these trends are bad news for you and your employer. When it becomes cheaper and easier to launch what once were thought of as sophisticated attacks, everyone’s at risk. AI could one day even help cybercriminals to build ransomware.
5. Prompt injection and model abuse
How do the attacks listed above actually work? There are two main ways capabilities are made available on the cybercrime underground. The first is via malicious tools/services (e.g. “WormGPT, “FraudGPT”) which are built using open source LLMs.
Alternatively, threat actors sell “jailbreak-as-a-service” offerings which target legitimate chatbots like ChatGPT via prompt injection attacks. These packages typically offer the threat actor an anonymous connection to the chatbot, and a carefully crafted prompt designed to remove the guardrails preventing it from sharing dangerous content.
A more complex attack works via indirect prompt injection, where the attacker embeds a malicious prompt into an external facing file or public web page triggering “unsafe” output once the chatbot is asked to summarize the content. Such attacks will only increase in volume as more companies use AI in customer-facing scenarios.
Who’s at risk?
No sector or individual is safe from AI-powered attacks. Any organization that holds sensitive data to steal and hold to ransom, or that has employees to trick into wiring funds, could be a victim. Finance, government, healthcare, utilities to name but a few.
Internet users like you are also in the crosshairs, via large-scale phishing/smishing attacks, and more targeted deepfake scams. If you have an online presence, you’re a potential target.
How to stay safe today
Fortunately, GenAI risks can be combatted. Consider the following:
For individuals:
- Verify the authenticity of urgent money transfer requests via a second channel
- Slow down. Social engineering including phishing, attempts to create a sense of urgency in order to impair your judgement
- Avoid clicking on any links in unsolicited emails or texts
- Don’t overshare information on social media
- Revisit your social account settings and switch to private mode to prevent AI scraping
- Keep software and operating systems (on all devices and machines) updated
- Use strong, unique passwords (stored in a password manager) and multifactor authentication (MFA) on all sensitive accounts. Or opt for a passkey if offered
- Use multi-layered security apps from trusted vendors designed to block phishing forms and websites that GenAI threats often lead to, as well as novel malware
- Look out for digital flaws in videos indicative of deepfakes (i.e. poor lip syncing)
For organizations:
- Update your policies for financial approvals/money transfers
- Update staff training to incorporate recognition of deepfakes, GenAI phishing/smishing attempts
- Deploy continuous monitoring to check for suspicious behavior
- Invest in tools to spot AI-generated text and deepfakes
- Deploy phishing-resistant MFA
- Implement a Zero Trust security approach which means all users and devices are continuously authenticated and monitored and given only the minimum privileges necessary
The arms race ahead
Cybersecurity has always been an arms race between attackers and defenders. Now AI is set to supercharge that dynamic, with threat actors racing to find ways to circumvent detection tools. They will continue to produce “as-a-service” offerings on the criminal underground that do most of the heavy lifting, enabling more threat actors to launch attacks. According to the NCSC, AI will increasingly help with “victim reconnaissance, vulnerability research and exploit development, access to systems through social engineering, basic malware generation and processing exfiltrated data.” And AI will also create a new attack surface of its own for adversaries to target.
However, as AI becomes more commonplace, it will also be built into email clients, browsers and messaging apps to root out scams, malware and fake content. In the meantime, governments are also working to criminalize certain deepfake activities and force greater transparency about artificially generated content.
Expert insights
“We expect use of AI for generating malware to remain limited and specific for the near future, but it will make social engineering attacks much more convincing. The biggest risk for everyday users will be the rise of high-quality, AI-generated scams – like realistic fake emails, messages, ads, or even videos (deepfakes) – that can trick people more easily than ever before. Scammers are focusing on making their attacks look trustworthy, using AI to create professional-looking presentations and interactions. This means users need to be extra careful with unexpected messages and offers, as social engineering is becoming one of the main ways criminals target people online.
In the coming years, AI-powered bots will also become more widespread and their interactions more realistic and sophisticated. These bots will be able to write convincing fake reviews, pretend to be real people, and create believable social media profiles. They might try to trick people in many ways, from spreading misinformation to running scams or influencing opinions, especially around important events like elections. As these bots get better at imitating real users, it will become harder to spot fake accounts and false information. Regular users should be cautious when reading online reviews, interacting on social media, or responding to messages, as it’s getting more difficult to tell what’s genuine and what’s not.“
- Juraj Jánošík, ESET Head of AI
Ready to see AI-powered cybersecurity in action?
Let AI do its best for you - securely. Take the next step with AI-powered threat detection in ESET HOME Security. Protect your devices and data from evolving cyber risks, including scams and attacks driven by AI. At home? Start your free ESET HOME Security trial today. Running a business? ESET Small Business Security keeps your work safe.
The bottom line
AI has dramatically shifted how cybercriminals operate. From flawless phishing to cloned voices, attacks are more convincing than ever. But with awareness, verification, and updated tools, you can reduce the risk of compromise.
Frequently asked questions
What are the primary malicious uses of AI today?
Deepfake impersonation, phishing enhanced by GenAI, and advances in social engineering. AI helps threat actors accelerate large-scale campaigns in multiple languages, as well as personalizing attacks targeted at smaller groups. It can also improve victim reconnaissance and vulnerability exploitation.
What are deepfake attacks typically used for?
To create believable videos and audio tracks used for online scams and phone scams, to trick employees into wiring funds abroad, extorting money from families by pretending a loved one has been kidnapped, and by “sextorting” individuals with AI-generated adult content.
How do I spot a deepfake?
Urgency, slight glitches in video and audio, and a refusal by the ‘sender’ to confirm via other channels are all red flags.
How do threat actors typically access AI tools for attacks?
They might purchase access to a service built on a legitimate open-source LLMs, or a “jailbreak-as-a-service” offering, which shares prompts known to circumvent the guardrails built into publicly available chatbots.
Can security tools still help?
Yes. Modern solutions use AI too. These can help spot deepfakes and AI-enhanced phishing emails. However, use them as part of a layered approach to security including awareness training.
Are organizations in some sectors more at risk than others?
Every type of organization is a potential target.
How can organizations improve their defenses against AI attacks?
Update staff training and finance department policies. Deploy phishing-resistant tools and continuous monitoring as part of a Zero Trust approach to cybersecurity. It is advised to also utilize MDR combining AI-powered threat hunting with human-led security service. ESET is a leader in this segment.
How can I stay safe from AI-powered attacks?
Take your time when reading phishing, smishing messages, keep software updated, and use strong, unique passwords and MFA. Add multi-layered security tools, set social media settings to private and educate yourself and your family about AI threats.









