In 2025, generative artificial intelligence moved from experimental technology to an operational weapon, giving cybercriminals the ability to launch highly personalized, automated attacks at unprecedented speed and scale. Incidents are becoming harder to detect, harder to attribute, and increasingly difficult to distinguish from legitimate activity. The National Cyber Security Centre(1) warned that AI will almost certainly continue making cyber intrusion operations more effective and efficient worldwide, from automating vulnerability discovery to accelerating exploit development.
Now, as organizations and regulators move into 2026, the conversation is shifting from whether AI will impact cybersecurity to how fast defenses can adapt. Security teams are no longer just defending against AI-enabled threats; they are deploying AI themselves to strengthen detection, improve intelligence, and respond at machine speed. While the core fundamentals of cybersecurity remain unchanged, the way teams deliver on those fundamentals is rapidly transforming to keep pace with a new generation of intelligent, adaptive threats.
The AI “Arms Race” Becomes Official
With AI becoming the central force driving the evolution of both cyber threats and defenses, industry leaders are describing cybersecurity as an AI “arms race.” For the first time, both sides of virtually every incident involve some level of AI: attackers use AI bots to craft phishing emails while defenders use AI filters to flag them, and AI-driven malware battles AI-driven detection. The result is a continuous game of cat-and-mouse.
Organizations are heavily shifting security and R&D budgets toward AI solutions, with the analyst firm IDC predicting that 75%(2) of security architectures will integrate AI or machine learning (ML) in some capacity by 2027. Even organizations already using AI or ML are urgently evaluating AI vendors to mitigate risk and stay one step ahead. Dozens of startups and established vendors within the cybersecurity industry are offering AI-driven security products that showcase innovation, including:
- AI-enhanced email security that claims to detect AI-generated phishing
- Network monitoring tools with built-in ML analytics
- User authentication systems capable of spotting deepfake voices
This shift is also reflected in security operation roles and responsibilities. Job postings increasingly include AI or ML tool experience as a desired skill, highlighting the need for expertise in managing and interpreting AI-driven systems.
Surge in AI-Era Emerging Threats and Attacks
Law enforcement and government agencies are aware of the growing threat landscape. The FBI issued official warnings(3) that criminals are “leveraging AI” to dramatically increase the sophistication of phishing, fraud schemes, and business email compromise (BEC). Phishing, in particular, reached record levels in the past year, overtaking ransomware as the most discussed threat within the industry.
AI-driven crime tools are also being rapidly commercialized, with knock-off ChatGPT models like “WormGPT” and “FraudGPT” that have no ethical restrictions or oversight. These “dark LLMs” offer subscription models that provide on-demand help with writing phishing content, creating fake documents, or refining malware code. This level of accessibility allows criminals with less experience or skill to launch more advanced attacks by outsourcing the “thinking” to AI-as-a-service.
Similarly, deepfake scams are becoming more prominent in public communications, moving from niche curiosity to recognized threat(5). An industry of deepfake-as-a-service can be found on criminal forums, where scammers hire specialists to create fake videos or real-time impersonations. Nearly anyone willing to pay, or able to access a leaked model, can leverage top-tier AI for malicious activity.
AI Regulatory and Ethical Scrutiny
In response to the growing risk of AI used by cybercriminals, governments are working to outlaw malicious deepfakes and require disclosures. Several jurisdictions introduced or passed laws(4) making it a crime to create AI deepfakes for the purpose of fraud or electoral interference.
Beyond criminal attacks, organizations are recognizing a need for AI governance to ensure their use of AI in security doesn’t introduce new, unexpected risks. AI governance best practices include:
- Govern AI like any high-risk third party: Eliminate Shadow AI gaps; inventory AI tools, apply access controls, and assess risk before enterprise use.
- Train humans on AI fakes: Awareness beats novelty. Regularly expose teams to real AI attack examples and enforce cross-channel verification training for high-risk roles.
- Assume AI, trust nothing, and verify everything: Make Zero Trust the default policy. Validate high-risk requests through known, secondary channels to block AI impersonation.
- Add ML talent where it matters: Hire or upskill at least one AI/ML specialist (e.g., a security data scientist) to validate vendors, tune models, and tailor AI defenses to your environment.
- Automate to contain, human-validate to decide: Deploy Security Orchestration, Automation, and Response (SOAR), Endpoint Detection and Response (EDR), and identity-response automation for rapid containment, with clear hand-offs to humans for escalation and judgment, as in the sketch after this list.
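To make the last point concrete, here is a minimal sketch of an automate-to-contain, human-validate-to-decide workflow. It is illustrative only: the `edr_isolate_host` and `notify_analyst` functions are hypothetical stand-ins for your EDR and ticketing integrations, and the severity threshold is an assumption to tune for your environment.

```python
from dataclasses import dataclass

# Hypothetical severity threshold above which containment is automatic;
# tune to your environment and alert-fidelity data.
AUTO_CONTAIN_SEVERITY = 8

@dataclass
class Alert:
    host: str
    severity: int  # 1 (low) to 10 (critical)
    description: str

def edr_isolate_host(host: str) -> None:
    """Stand-in for a real EDR isolation API call via your vendor's SDK."""
    print(f"[CONTAIN] Network-isolating {host}")

def notify_analyst(alert: Alert, contained: bool) -> None:
    """Stand-in for a SOAR/ticketing hand-off; a human makes the final call."""
    status = "auto-contained, awaiting review" if contained else "needs triage"
    print(f"[HANDOFF] {alert.host}: {alert.description} ({status})")

def handle_alert(alert: Alert) -> None:
    # Machines contain at machine speed; humans decide on eradication,
    # escalation, and business impact.
    contained = alert.severity >= AUTO_CONTAIN_SEVERITY
    if contained:
        edr_isolate_host(alert.host)
    notify_analyst(alert, contained)

if __name__ == "__main__":
    handle_alert(Alert("laptop-042", 9, "AI phishing led to credential theft"))
    handle_alert(Alert("server-017", 4, "Login outside baseline hours"))
```

The split is deliberate: automation handles the fast, reversible action (isolation), while irreversible decisions stay with an analyst.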
How to Prepare for an AI-Enhanced Threat Landscape
In 2026, one thing is clear for CISOs and security teams: the old playbook needs an update. The AI-driven offense developments of 2025 demand changes in how organizations approach cybersecurity.
Here are some key takeaways and recommendations for adapting strategy, processes, and tooling from 2026 onward:
- AI defense is no longer optional: Manual and signature-only security can’t keep up. Prioritize AI-driven detection and automation in your 2026 security stack.
- Watch behavior, not just thresholds: Use ML to baseline normal activity and surface subtle anomalies across identities, logins, and data movement (see the baselining sketch after this list).
- Prioritize AI fluency, not expertise: Teams don’t all need PhDs, just working knowledge of ML, training data, false positives/negatives, and adversarial AI risks to tune tools confidently.
- Secure the message, not just the link: Modern email/content security must analyze language, sentiment, and media to detect AI-generated deception, not only malicious URLs (see the content-analysis sketch after this list).
- Add friction for attackers, not users: Implement lightweight identity assurance (PINs, call validation, audio/video screening) as deepfake defense to increase confidence without slowing the business.
- Multiply analysts with AI, don’t just hire more: Use AI to handle tier-1 triage and repetitive tasks, speeding up quality responses, reducing burnout, and improving morale.
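The behavioral-baselining recommendation can be illustrated with a short sketch. This is a minimal example, not a production detector: it assumes scikit-learn is available, invents simple session features (login hour, megabytes moved, distinct resources touched), and uses an Isolation Forest as one common unsupervised choice for this job.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" login behavior: business hours, modest data movement,
# a handful of resources per session. Real features would come from your
# identity and network telemetry.
normal = np.column_stack([
    rng.normal(11, 2, 500),   # login hour of day
    rng.normal(50, 15, 500),  # MB transferred per session
    rng.normal(5, 2, 500),    # distinct resources accessed
])

# Fit a baseline of normal activity; contamination is the assumed anomaly rate.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new sessions: a 3 a.m. login moving 900 MB across 40 resources
# should stand out against the learned baseline.
sessions = np.array([
    [10.5, 48.0, 4],   # typical
    [3.0, 900.0, 40],  # exfiltration-like
])
for features, label in zip(sessions, model.predict(sessions)):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(features, "->", verdict)
```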
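Similarly, the “secure the message, not just the link” point can be sketched as a toy text classifier. This is purely illustrative and far simpler than commercial email security: the tiny training set is invented, and TF-IDF plus logistic regression stands in for the deeper language and sentiment analysis a real product would perform.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: real systems train on large labeled corpora and weigh
# many more signals (sender history, tone shifts, attached media) than words.
messages = [
    "urgent wire transfer needed before close of business, keep confidential",
    "your account is suspended, verify your credentials immediately",
    "attached is the Q3 report we discussed in Monday's meeting",
    "lunch at noon? the usual place works for me",
]
labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)

# Score a new message by its language, not by whether it contains a bad URL.
probe = "please process this urgent payment and do not discuss it with anyone"
score = clf.predict_proba([probe])[0][1]
print(f"suspicion score: {score:.2f}")
```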
Building Cyber Resilience in a Perpetual AI Threat Cycle
As AI models, exploits, and countermeasures continue to evolve, it is important to accept that the AI-cybersecurity arms race is a continuous cycle. There is no “finish line” where one side wins. To stay agile and adapt to emerging threats, organizations should adopt a strategy of continuous improvement. This includes:
- Regularly schedule red-team exercises or penetration tests that specifically simulate AI-driven attacks (like deepfake phishing or AI-written malware) to see how your people and systems hold up.
- Participate in information-sharing communities about AI threats. For example, many industry groups, and even vendors, have started sharing insights on prompt injection attacks, deepfake indicators, and other AI-related threat intelligence. Tap into these communities to stay ahead of the curve.
- Keep an eye on regulatory developments and be ready to comply with new AI requirements. It’s better to build good governance now than be caught off guard by a law that, for example, mandates disclosure of AI-generated content or requires auditing your AI models for bias or security.
Organizations that successfully implement these strategies will gain resilience through agility against the next wave of attacks. Training and educating teams, improving defensive systems with AI, and optimizing processes to mitigate AI-enhanced attacks are key to navigating the changing threat landscape. Cybersecurity has always been about adapting to change, and AI is just the latest, albeit powerful, change.
Staying Informed in an AI-Driven Threat Landscape
To stay current as AI-driven threats and defensive strategies evolve, subscribe to SDG’s monthly Cyber Threat Advisory.
Sources
(1) https://www.ncsc.gov.uk/report/impact-ai-cyber-threat-now-2027
(2) https://www.idc.com/wp-content/uploads/2025/03/IDC_FutureScape_Worldwide_CIO_Agenda_2024_Predictions_-_2023_Oct.pdf
(3) https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence
(4) https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
(5) https://www.fincen.gov/system/files/shared/FinCEN-Alert-DeepFakes-Alert508FINAL.pdf

