AI Arms Race Will Accelerate in 2026, Forcing a Reset for MSSPs

This year has seen continued escalation in what’s now known as the “AI arms race,” as security teams and MSSPs employ AI-driven technologies to protect organizations and keep pace with increasingly sophisticated AI-based attacks. Security professionals warn that this cat-and-mouse game, accelerated by AI, will continue into 2026 and beyond.

The growing maturity of such attacks was put on display when Anthropic reported that bad actors used its Claude AI model to automate as much as 90% of the work needed to develop, launch, and run a wide-ranging malware campaign. The incident also showed the challenges facing security teams in this developing AI-threat world.

Chip Witt, principal security evangelist with Radware, is calling 2026 “the year AI takes its gloves off.”

Bad actors are using generative and autonomous AI to run attacks such as prompt injection, synthetic identity abuse, automated reconnaissance, and business logic manipulation, Witt said. Defenders need to respond with what he called “real AI, not dashboard glitter.” Automated triage. Autonomous decision-making. Real-time mitigation. Human response alone won’t be fast enough.

Folding AI into Protection

For defenders, deep learning will shift from being an experiment to becoming a core component of security operations center (SOC) operations, according to Brennan Lodge, fractional CISO for DeepTempo. The technology will help teams “interpret complex activity at scale and surface attacker progression earlier in the intrusion lifecycle. These models are getting much better at understanding how attacks unfold across time. Not single alerts. Not simple anomalies. Actual attacker behavior.”

Dan Shugrue, product director at Digital.ai, expects enterprise security teams to shift from static application controls to “continuous, agentic AI-driven defenses that evolve in real time.”

“As attackers increasingly use LLMs [large language models] to reverse-engineer apps within hours, organizations will adopt AI agents that autonomously apply code obfuscation, anti-tampering, and runtime protections in every build,” Shugrue said. “This ‘moving target’ security model will become expected – not experimental – because anything slower will leave applications exposed.”

Arun Shrestha, managing director for KeyData Cyber, said that in the coming year, security teams will move from treating AI as a tool to seeing it “as a first-class identity.”

“The explosion of AI agents and non-human service accounts is creating an attack surface too large for rules-based security,” said Shrestha, who founded and was CEO of BeyondID, which KeyData Cyber bought in August. “Organizations will need autonomous, AI-native identity defenses that can detect and adapt at machine speed.”

AI will also help automate threat-hunting efforts, according to Ashley Jess, senior intelligence analyst with Intel 471.
While much of threat hunting today is semi-automated – using curated queries, signatures, and analytics tools – “automation will likely grow, especially with advances in agentic AI, which may be able to assist with detection queries,” Jess said.

“Adversaries are also increasingly exploring AI to develop and optimize their kits, so defenders will need to leverage some automation alongside intelligence-driven hunting to keep pace,” she said.

AI Accountability is Coming

“2026 will be the year AI accountability is forced into day-to-day operations,” said Richard Bird, chief security officer for Singulr AI. “Organizations spent much of 2025 trying to appear mature in governance, but the biggest lesson of the year was that most AI risks did not come from rogue models. They came from a lack of visibility and accountability.”

Bird added that “a major shift is coming to operations as well. At least one major enterprise is likely to implement AI-only workflows for detection, triage, and remediation with human oversight. At the same time, the obsession with frontier models will fade as enterprises recognize the value of smaller, domain-tuned systems.”

AI will wield its influence in the channel as well, according to Larissa Crandall, global vice president of channel and alliances at 1Password. Partners will need to rethink everything, from how they operate and deliver services to how they use AI to differentiate in a crowded market.

“Traditional resale motions will give way to advisory-led, automation-driven service models that create ongoing value and efficiency for customers,” Crandall said. “Partners that embrace AI, rather than fear or resist it, will unlock new revenue, outpacing competitors and becoming trusted, strategic partners for customers navigating rapidly evolving security demands.”

AI-Armed Attackers

Such use of AI technologies by defenders will be important because attackers, too, will hone their AI chops.

“By 2026, adversaries will use AI systems that map entire infrastructures in seconds, identify weak links deep in the supply chain, and shift tactics in real time to bypass defenses,” KeepIT CISO Kim Larsen said. “Hybrid warfare will amplify this trend as hostile actors blend geopolitical intent with AI-enabled automation at scale.”

NordVPN CTO Marijus Briedis said the data stored in AI tools like ChatGPT will be a plus for security teams – they can train their models with it – but such sensitive information will also be a target of the growing class of AI-armed hackers.

“2026 will also see a dramatic escalation in AI-powered offense and defense,” Briedis said. “AI has altered the accessibility and sophistication of cybercrime, lowering barriers for less technical actors while amplifying the capabilities of experienced criminals.”

AI Agents’ Central Role

The expanded use of AI agents this year started to reshape security threats and defenses alike, and they will play a central role for both the good and the bad guys in 2026.

“By 2026, AI agents will be capable of executing entire attack chains – initial access, privilege escalation, lateral movement, and data exfiltration – without any human in the loop,” said Mayank Kumar, founding AI engineer at DeepTempo. “This shift means attackers won’t just use AI to write code or automate phishing, but to actively reason about environment structure and take autonomous action. Intrusions that once unfolded over days will compress into minutes.”

Tohar Braun, security research tech lead with Orca Security, said that “AI-driven attackers are just now gaining traction, so the ROI they provide is still quite low. Attackers will keep using their tried-and-tested methods for finding initial entry points and misconfigurations, but once inside an environment, the AI agents will really start to shine by processing a lot of information at once.”

To protect against such threats, security teams will need to deploy their own agentic AI tools. Russell Humphries, ConnectWise’s executive vice president of product management, said that a problem for SOCs is that analysts and executives are stretched thin, with 70% experiencing burnout.

“The path forward and some resolution lies in agentic AI, not as a panacea but as a true force-multiplying ‘state change’ for how a modern SOC must work,” Humphries said. “By automating the grind, including tasks such as alert triage, data correlation, and routine defense, AI helps teams retain talent, reduce fatigue, and focus their expertise on what truly drives security outcomes.”
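To make the “grind” Humphries describes concrete, the sketch below shows one piece of it: correlating raw alerts by host and ranking them by aggregate severity so analysts see the hottest systems first. This is a minimal, hypothetical illustration, not any vendor’s actual triage logic; the alert fields and scoring weights are assumptions for the example.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Alert:
    source_host: str
    rule: str
    severity: int  # 1 (informational) through 5 (critical)

def triage(alerts):
    """Group alerts by host, then rank hosts by aggregate severity
    (lightly boosted by alert volume) so the noisiest, most severe
    hosts surface first for analyst attention."""
    by_host = defaultdict(list)
    for alert in alerts:
        by_host[alert.source_host].append(alert)
    scored = [
        (sum(a.severity for a in group) + len(group), host, group)
        for host, group in by_host.items()
    ]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [(host, score, [a.rule for a in group])
            for score, host, group in scored]

alerts = [
    Alert("web01", "suspicious_login", 3),
    Alert("db02", "data_exfil", 5),
    Alert("web01", "port_scan", 2),
]
for host, score, rules in triage(alerts):
    print(host, score, rules)
```

In a real SOC pipeline an agentic system would layer richer correlation (time windows, identity context, kill-chain stage) on top of a ranking step like this, but the core idea of compressing many alerts into a prioritized worklist is the same.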

There are Challenges

Dashlane CTO Frederic Rivain said that “as organizations increasingly deploy AI agents to handle tasks from customer service to code generation, threat actors are licking their chops, as these autonomous systems are prime targets for cyberattacks. Unlike traditional applications, AI agents have broad access to data, can make decisions without human oversight, and operate across multiple systems simultaneously, making them both valuable and vulnerable – a losing combo.”

Gabrielle Hempel, security operations strategist at Exabeam, said that next year, “the distinction between AI research and adversarial tools will be virtually nonexistent. The same agentic AI models that power defense innovations are now being rapidly repurposed by threat actors, weaponizing everything from social engineering campaigns to autonomous data exfiltration.”

No Letup

AI is rapidly changing the cybersecurity landscape as both attackers and defenders fold it into their portfolios, increasing the pressure to respond and adapt to threats. That pressure will only grow in 2026.

“We’re entering a full-blown AI arms race, and cybersecurity teams will be asked to play both offense and defense, often at the same time,” Radware’s Witt said. “And yes, it will feel a bit like trying to stop a drone with a butterfly net.”
