When your email provider blocks a suspicious message. When your browser warns you about a risky site. Or when your phone flags an unsafe download. That could all be the work of AI.
Why do we need this kind of innovation? Because those who wish to do us harm are also harnessing the power of AI. They’re ramping up both the volume and sophistication of their threats to part us from our money and personal information.
According to a recent study, the vast majority of British and American organizations plan to invest in GenAI-powered threat detection and defense (96%) and deepfake detection and validation tools (94%). But there’s also tremendous value to be gained by everyday internet users like you and your family. Understanding what AI can and can’t do helps you make smarter choices about your online safety.
- ESET has been named a Leader in the IDC MarketScape for Consumer Digital Life Protection (CDLP) and SOHO segments.
- We believe that powerful yet lightweight technologies, paired with a comprehensive AI strategy, are among ESET’s top strengths.
- According to IDC MarketScape: “ESET is a suitable choice for individuals, families, and SOHO users seeking reliable and lightweight digital life protection. Its prevention-first approach is ideal for those who value proactive security measures that do not compromise device performance”.
What AI means for cybersecurity
AI is a term that’s often used but still little understood by many. In a cybersecurity context, the tech has various uses, including:
- Optimizing rule-based systems: These more traditional filters work on an “if X, then Y” model to match known patterns of malicious activity. This means they’re less effective at stopping novel, never-before-seen threats.
- Machine Learning (ML): AI technology that analyzes vast volumes of data to understand what “normal” looks like, so it can then detect unusual activity more effectively. This could include phishing emails, malware, and/or suspicious network activity.
- Generative AI (GenAI): Creates original content by analyzing vast datasets. In a security context, it can generate text that explains alerts or summarizes reports (potentially via a handy AI assistant), help detect AI-powered phishing, scams, and deepfakes, and produce synthetic data for training cyber-defense models.
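The difference between the first two approaches can be sketched in a few lines of Python. This is a toy illustration, not how any real product works: the phrase list and sample emails are made up, and real filters use far richer signals.

```python
# Toy "if X, then Y" rule-based filter: flag an email only if it matches
# a known-bad pattern. (The phrase list below is hypothetical.)
SUSPICIOUS_PHRASES = {"verify your account", "urgent wire transfer"}

def rule_based_flag(email_text: str) -> bool:
    """Return True if the email contains any known-bad phrase."""
    text = email_text.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

# A known pattern is caught...
print(rule_based_flag("URGENT wire transfer needed today"))   # True
# ...but a novel scam with fresh wording slips straight through:
print(rule_based_flag("Kindly settle the attached invoice"))  # False
```

The second example is exactly the gap ML-based detection aims to close: instead of matching fixed patterns, it learns what “normal” looks like and flags deviations from it.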
How is AI protecting me right now?
AI is already built into countless products and services you may be using. These could include:
1. Your inbox: AI might analyze writing style, web domains, links, and attachments for anything suspicious, as well as behavioral anomalies such as emails sent at an unusual time of day.
E.g., an incoming email is flagged as a bank scam because of unusual style.
2. Web browsing: AI blocks risky sites and downloads via real-time analysis of content, URLs and more. This is more effective than relying on static allow/deny-lists.
E.g., you’re blocked from downloading a game mod because it is flagged as containing malware.
3. Device monitoring: AI might use ML techniques to learn what your normal device usage looks like, so it can flag any strange processes, connections and usage. It also powers biometric authentication such as facial recognition to streamline access to apps and mobile wallets.
E.g., on-device AI flags a deepfake video you’ve clicked on social media, or a live video call, as fake.
4. Clearer alerts: GenAI can help explain any security issues you might have in plain language — enabling you to take action swiftly if necessary to fix them, or protect yourself better in the future.
E.g., you don’t just get an alert saying “abnormal login blocked.” You also get context and next steps, such as: “We think someone was trying to use your login details to hijack your account. We advise you to change your passwords immediately.”
5. Family safety: Smarter parental controls such as screen time monitoring and app/content filtering, as well as real-time behavioral monitoring to detect distress, bullying, grooming, and other red flags.
E.g., you get an alert that your child has been searching online for topics related to self-harm.
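The “learn what normal looks like” idea behind device monitoring can be illustrated with a toy statistical baseline. The login-hour history and threshold below are hypothetical, and production systems use far more sophisticated models; this sketch just shows the principle of flagging behavior far outside a learned norm.

```python
import statistics

# Hypothetical history of the hours at which a user normally logs in.
normal_login_hours = [8, 9, 9, 10, 8, 9, 10, 9, 8, 9]

mean = statistics.mean(normal_login_hours)
stdev = statistics.stdev(normal_login_hours)

def is_anomalous(hour: int, threshold: float = 3.0) -> bool:
    """Flag a login whose hour is more than `threshold` standard
    deviations away from the learned baseline."""
    return abs(hour - mean) / stdev > threshold

print(is_anomalous(9))   # False: within the normal routine
print(is_anomalous(3))   # True: a 3 a.m. login is far outside the baseline
```

Because the baseline is learned from this user’s own behavior, the same 3 a.m. login might be perfectly normal for someone else, which is why behavioral models adapt per user rather than applying one fixed rule.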
How criminals use AI
Unfortunately, AI isn’t only helping to keep you safer. It’s also empowering malicious actors to:
- Build entire phishing campaigns in multiple languages, error free. GenAI can also be fed with background details on specific individuals for more targeted phishing efforts
- Launch deepfake attacks such as videos of celebrities that might make you believe in a scam promotion, or phone calls from gangs claiming to have ‘kidnapped’ your children
- Use deepfake pictures and videos of their targets to bypass some authentication methods based on facial recognition
- Automate and accelerate attacks by scanning the internet for vulnerable and exposed devices or machines
- Create fake and scam content to distribute on social media. Highly convincing scams can be designed with little effort or cost, enabling even cybercriminals with poor tech knowledge to participate
- Write entirely new malware
The benefits and limitations of AI-powered cybersecurity
In this context, AI can help you in many ways. It’s fast, accurate, and works 24/7/365 to spot malicious behavior and block malware and threats before they have a chance to impact your digital life. It can also help you understand more about the risks you face online so you can make yourself safer. But it’s no panacea.
The challenges of using AI-powered security include:
- False positives - i.e. when the AI flags something as a threat when it isn’t. This can mean you miss vital emails as they end up in the spam folder.
- False negatives - i.e. the AI doesn’t catch a threat, allowing it to slip in under the radar.
- Over-reliance - false negatives are particularly dangerous if you come to rely too much on your built-in AI defenses. Always remember it’s not 100% accurate.
- Privacy concerns - i.e. some tools and services may require access to personal data in order to train the underlying models effectively.
How to stay safe with AI protection
A lot of the products you’re using will have AI-powered protection switched on by default. Optimize your security by:
- Keeping automatic updates switched on to ensure you’re always running the latest, most secure versions of your browser, email client, and other software
- Using unique, strong passwords stored in a password manager and enhanced with multifactor authentication (MFA) to tackle phishing attempts
- Always taking a pause before clicking on an unsolicited message, social media ad or similar. Phishing attacks try to rush you into making bad decisions
- Limiting information sharing with AI chat tools. That way you’ll reduce the risk of accidentally leaking personal information to a chatbot which may reshare it with others
- Checking any security alerts carefully and taking the recommended remedial actions
- Educating yourself and family members about the risks posed by AI scams, threats, and disinformation, as well as the limits of AI-powered security
A simple routine for everyday safety
Consider building the following routine to make the most of your AI security tools:
Weekly: Perform a quick device scan to check for any malware or threats.
Monthly: Perform a full scan of all your devices, review the apps you’ve downloaded, and delete any unused accounts.
Ongoing: Trust your tools to block risky links or downloads that you might encounter online, and to flag potential deepfakes or phishing threats. False positives and negatives are possible, but err on the side of caution.
Family: Review your AI-generated security reports together to make sure everyone understands what good security posture looks like, what threats are out there, and what to do in the event the AI doesn’t block them.
The future of your AI-powered security
The good news is that AI-driven security is only going to get better. In time, you can expect more personalized protection that understands your unique behavior, and more accurate detection of deepfakes and malicious, AI-generated content. AI assistants will start to appear in security apps to make it easier to configure and understand them. And there’ll be smarter parental controls in the devices your family uses.
Governments and regulators will also step up their scrutiny of AI tools and the technology’s impact on privacy and safety. That’s because cybercriminals will continue to use it for malign ends.
Expert insights
“Recently, Large Language Models like OpenAI’s ChatGPT popularized AI algorithms, but the truth is that these models are based on technology that has been around for a while.
In fact, the roots of ESET’s AI-related security reach back to 1997 with the introduction of neural networks. Later, in 2005, ESET deployed its breakthrough DNA Detections technology, followed by ESET’s LiveGrid® cloud reputation system in 2010, the ESET Advanced Machine Learning module in 2017, and transformer-based detections in 2018. The latter arrived years before transformers became widely known as the driving force behind generative AI tools like ChatGPT.
AI models in cybersecurity are often invisible: they don’t chat with users and they don’t write emails. They are fine-tuned for one purpose – detecting malware while avoiding false positives. They run in the background, quietly protecting users 24/7 without them even noticing.
But that’s not all. Almost three decades of experience developing AI-related security technologies have given ESET unique insights, a deep understanding of existing limits and, most of all, a broader perspective. Having just one type of technology is not enough: ESET employs a sophisticated, multi-layered integration of AI models – combined with traditional detection methods and human oversight – to ensure that decisions consider context and user impact, rather than relying solely on automated systems.”
- Juraj Jánošík, Director of Automated Systems and Intelligent Solutions
Ready to see AI-powered cybersecurity in action?
Try ESET's award-winning protection - smart, fast, and built to keep your digital life safe.
Start your ESET HOME Security free trial today for home protection or ESET Small Business Security for small office protection.
The bottom line
AI reduces online risk by filtering scams, blocking malware, and explaining alerts. But it isn’t perfect. Combine AI-powered protection with smart habits - strong passwords, updates, caution - for the best results.
Frequently asked questions
Q: Is AI spying on me?
A: No, it only spots suspicious patterns; it doesn’t read your private messages. That said, some tools can see what’s on your screen if you grant them permission.
Q: Can AI stop every cyberattack?
A: No. It reduces risk but can’t guarantee full safety. Updates improve accuracy.
Q: How do threat actors use AI maliciously?
A: To develop high-quality phishing campaigns, deepfake attacks, and scam content, and to automate reconnaissance and vulnerability exploitation.
Q: Should I trust AI chatbots with personal data?
A: Only if you know how that data will be stored and used. It’s safer to avoid sharing sensitive info altogether.
Q: What are the main uses of AI for cybersecurity?
A: Filtering for malicious emails, blocking malicious websites and downloads, enhancing device security, and supporting facial recognition. GenAI can also enhance security explainability and deliver smarter parental controls.
Q: What are the main tools/services AI is built into for protection?
A: Email inboxes, web browsers, devices, and standalone cybersecurity platforms.
Q: What are the main challenges related to AI for cybersecurity?
A: False positives and negatives, over-reliance, and privacy concerns. Vigilance is always required.
Q: How can I complement AI-powered cybersecurity?
A: Keep your automatic updates on, use strong unique passwords and MFA, be phishing-aware, check any security alerts, limit info sharing with chatbots, and educate yourself about AI-powered risks.







