AI agents may battle AI attackers, but at RSAC 2025, it’s still about improving security workflow

At first glance, as you wander onto the expansive Moscone Center expo floor and take in some of the near-million-dollar booths, you might wonder if the only budget consideration on the chief information security officer’s mind is responding to new artificial intelligence threats with AI agents — or perhaps worrying about monster trucks or goats crushing or eating a user’s mobile phone.
But beyond the glitz, AI agents can barely scratch the surface of the shortage of skilled cybersecurity talent available to address the exploits our software faces today. In this regard, newer AI-based security tools are really just the next incremental boost in automation toward protecting an exponentially expanding attack surface exacerbated by AI.
Attending the 34th annual RSAC 2025 in San Francisco with 44,000 others, I really started to understand why the organizers would pick a theme like “Many Voices. One Community.” It will take people from different walks of life, many of whom never envisioned themselves as cybersecurity professionals, working together to get us out of this AI cybersecurity mess we’ve created.
Here’s a rundown of some themes expressed at the conference and a sampling of interesting information resources and vendors I talked to that are addressing modern challenges with unique approaches and products:
Improving awareness across the hybrid cloud stack
Fundamentally, most security solutions (SIEM, SOAR, UEBA, XDR and the like) are data management solutions — as all threats and vulnerabilities can only be perceived through data movement and activity within volumes and networks, which emit telemetry signals such as logs and traces.
OpenText was one of the major brands there with a broad cybersecurity portfolio that blends both enterprise and consumer threat awareness data behind the scenes. Through machine learning, its new OpenText Core TDR platform can detect difficult-to-spot insider threats from privileged users, whether or not they are using AI tools.
Checkmarx announced an early access program for its agentic AI-powered control plane for application security posture management (ASPM), which declaratively scans all packages in the repository to show developers prioritized vulnerabilities and unknown code references directly within their IDE.
Bot security management vendor Netacea recently donated its BLADE open-source framework, which is now accepted by the OWASP community, allowing experts to recognize business logic attack definitions for errant automation and AI agent behaviors beyond the currently known CVEs referenced in MITRE ATT&CK.
Black Duck offers a broad set of open-source and managed tools for software composition and code analysis, citing a Ponemon survey about the risk data information technology organizations currently use to inform their software supply chain security. “As a CISO, we are looking for AI to help humans triage alerts and focus the subject matter expert’s attention,” said Bruce Jenkins, chief information security officer at Black Duck. “However, I’m concerned that the industry will rush down the road of AI under the pretense that it is going to solve all of our problems.”
New security approaches on the edge of experience
They say “you can’t judge a book by its cover,” but sometimes, attackers armed with AI can discover a lot more than you expect, and innovative vendors are answering the call.
Cool startup MirrorTab can obfuscate any browser interface from AI agents and browser extension bots using a super-lean video encoding and “pixel shimmering” technology that stymies on-screen text recognition, script injection attempts and eavesdropping plugins, while still being speedily rendered for the end user.
Well-known password management vendor LastPass is extending its awareness of browser-based logins to provide enterprise-wide discovery and awareness of employee SaaS application usage, which also presents a novel way to prevent unsanctioned “rogue AI” app services training on company and customer data.
While most security vendors are hunting for incoming attacks and internal threats, BrandShield provides an outside-in approach with its AI-driven external threat protection service, which continuously trawls the world’s online swamp of suspicious domains, social media and the dark web for impersonators, phishing scammers and intellectual property thieves, then issues takedown requests against offending entities.
By now, everyone’s heard of two-factor authentication, which offers just a fig leaf’s worth of protection against social engineering, deepfakes and suspicious communications.
“We do more than 50-factor authentication, and one of those factors is an algorithm called rPPG, where you join a call, we have a bot that joins the call, and it measures the blood circulation in your cheeks and your forehead to see if you’re real or if you’re fake. But that’s just one inference point,” said Sandy Kronenberg, chief executive officer of Netarx.
The company’s AI models also review and roll in the user’s IP address, GPS, domain provider, DKIM, SPF, DMARC and more, reporting back to the user through what it calls a “Flurp,” which works like a traffic light, telling them to stop or slow down if something’s fishy on the other end of a call or session.
Learning to trust the modern zero-trust network
It stands to reason we’d see dedicated corporate governance tools for AI at RSAC, and Zenity provides an observability platform that monitors the enterprise’s estate of copilots and chatbots, keeping sensitive or private data out of modeling and data design activities. The solution informs owners of AI agents at runtime if a model or “rogue” agent appears to be operating outside policy, within or outside a private network, and reports results and any remediation actions taken to platforms such as Splunk or ServiceNow.
Tufin offered some of the first hybrid cloud-ready, software-driven firewalls and security policy engines on the market. At RSAC, it announced a new AI agent feature called TufinMate. Network engineers can simply chat with it in Teams or Slack, and the agent will search across the topology to pinpoint root causes of application outages or identify the criticality of vulnerabilities within the context of the network they manage.
Monitoring live data from most known discovery and configuration management tools, RedSeal Inc. provides an inventory and interactive network map of cloud, virtual and physical devices, connections, host configurations and endpoints down to Layer 2. Prioritized risky attack paths can be blocked automatically by policy or sent to SIEM or incident management platforms for resolution.
New hunting grounds for AI threats
While there are plenty of mature static and dynamic application security testing, or SAST/DAST, tools on the market, AI development tooling and code generation presents a whole new set of challenges to cyber teams.
“Now we are seeing new attack vectors that can be leveraged [off] of LLM vulnerabilities getting introduced into the world that didn’t exist before,” said Gadi Bashvitz, CEO of Bright Security. “What if a bad actor asks an OpenAI model to share an employee’s credentials, or for a recipe for making napalm?” The security testing and remediation vendor scans application programming interfaces and AI code generator output to assure that AI-generated code matches application intent, then recommends and validates fixes.
“You can try and anchor all the activities required for an AI to create or learn a new process before and after taking [a security] action,” Monzy Merza, CEO of Crogl Inc., said at a roundtable. “You can look at logs and inspect the APIs and container-to-container traffic, but there’s a better need to have a real argument as to how AI really happens, or we are too abstracted.”
“There’s a lot of excitement around AI, but there’s also a lot of hype, pitching agents and black boxes that are silver bullets but won’t solve all your problems,” said Thomas Kinsella, co-founder of Tines, who was demonstrating their new AI-powered Workbench solution. “You need a deterministic approach, you need guardrails, and you need a human in the loop. And then you need the AI to be able to succeed consistently, doing a task that it’s really good at.”
Balancing AI risk with velocity
Blocking employees from using “shadow AI” services is going to be even harder than preventing them from signing up for SaaS and cloud services, because there is so much fear of falling behind without AI.
“Our whole goal is getting away from ‘allow and block’ to helping companies safely adopt AI, so if we can gain visibility into how employees are using AI and what data they are sharing with it, we can create controls and policies,” said Randy Birdsall, co-founder and chief technology officer of SurePath AI. “We can actually help adoption with our bring-your-own-model approach where someone could go to Vertex or they could go to AzureAI or they could leverage Bedrock and use those model gardens to bring their model to us, and we can give them a fully managed portal experience or a managed RAG solution with group-based RBAC around the data that’s being brought into that RAG experience.”
Vercel is quickly rising as a platform for rapidly deploying and scaling web applications that increasingly leverage vetted AI inference models and agents from its marketplace. “I think the reality is, everyone is accelerating with AI,” said Vercel CISO Ty Sbano. “Our AI product v0.dev enables people to just prompt their way or vibecode to an output. A big part of that is because we’re indexing on React and Next.js natively. By having more customers doing this, and by dogfooding internally with our own employees, we are able to accelerate the journey from less AI code hallucinations to greater accuracy.”
Automating and simulating persistent threats
As we’ve seen in software delivery, testing and observability workflows, simulation always follows automation. That’s why ethical internal hacker teams are setting up virtual kill chains to prove out application readiness.
“GenAI is by definition not creative, it’s reductive. An LLM makes generalizations to predict the next word, or the next step in a sequence,” said Ann Nielsen, product marketing lead at Cobalt, an offensive security service provider and producer of an annual State of Pentesting report. “We automate what we can to make humans more efficient, so they don’t have to read irrelevant scans all day, but human pentesters really are better at running novel and interesting attacks.”
SafeBreach allows companies to map current exposures and past attack paths, continuously running breach and attack simulation of things like credential theft and the lateral movement of virtual bad actors. Generative AI-based simulations can further attempt to gain a foothold and grab sensitive data or encrypt assets within a pre-production or live system.
“People here are talking about agentic AI, but really, there’s still this overarching theme where there’s just never enough people to do the work, and there’s also a shortage of experience — even existing security practitioners have only seen what they’ve seen,” said Debbie Gordon, CEO of Cloud Range. The company just announced a partnership with IBM to create “cyber campuses” and give students simulation-based training experiences on lifelike virtual networks populated with threat actors and vulnerabilities.
The Intellyx Take
In the end, AI-based security tooling will never be able to save us from relentless AI-based attacks. It’s really going to take human expertise, education and awareness to save our digital circulation system from the evolving threats we have introduced.
It’s a good thing an event like RSAC can independently cultivate a cybersecurity community of so many unique practitioners, vendors and end-user companies. We will absolutely need to work together across organizations and nations to face an infinite number of bad actors with new and novel attack capabilities, thanks to the introduction of AI.
Jason English is principal analyst and chief marketing officer at Intellyx. He wrote this article for SiliconANGLE. At the time of writing, Tines is an Intellyx customer. No other companies mentioned are Intellyx customers. RSAC covered the analyst’s attendance cost for the event, a standard industry practice. ©2025 Intellyx B.V.
Photo: RSAC/X