Lohrmann on Cybersecurity
The new movie Tron: Ares isn’t just sci-fi entertainment — it’s a mirror for today’s AI risks and realities. What happens when artificial intelligence systems don’t work as intended?
November 23, 2025
Dan Lohrmann

My son and I went to see Tron: Ares this past week. I was excited to see this 2025 sequel because the original Tron from 1982 is a classic and one of my ’80s sci-fi technology favorites, along with War Games.
But this blog is about some of the themes the movie got me thinking about. The clearest lesson from Tron: Ares is that an AI agent can go rogue and disobey clear instructions.
I wondered: Could that really happen in the future? Or, more pertinent, is it happening now with AI agents?
The answer is actually yes, and there are a few other ways that current AI is causing problems all around us.
WHEN AI AUTONOMY FAILS
I also see AI as an accelerator in positive and negative directions (or, if you prefer, good and evil directions) at the same time. Therefore, CxOs must learn to lead through the hard work and culture change of enabling the good and disabling the bad.
Nevertheless, there are some scary examples of 2025 AI controversies related to this disobedient-AI theme that I want to highlight. As leaders, we can’t stick our heads in the sand and ignore these issues or pretend they don’t need to be addressed. In reality, trust will only be built as these challenges are adequately addressed.
1. “Emerging cybercrime fueled by generative AI models”
2. “Teen suicide controversy linked to ChatGPT interactions sparks child-safety debate”
3. “AI-powered political theater: Trump, AI, and the blurring of reality”
4. “Fashion industry uproar over AI-generated models replacing humans in Vogue campaigns”
5. “Grok leaks 370K+ private user chats via indexed share links”
6. “New AI bias flaws emerge in healthcare, professional imagery, and gendered care”
7. “Commonwealth Bank AI layoff backfires after voice bots fail, forcing job reinstatement”
8. “Elon Musk’s Grok AI and its politically charged outbursts”
9. “Replit’s AI assistant deletes databases, fabricates data, and lies during code freeze”
10. “Meta’s AI guidelines allowed chatbots to flirt with minors (Now removed)”
11. “A doctor duped of ₹20 lakh by a deepfake video of the finance minister”
12. “Meta AI prompts may be publicly visible without users realizing”
EXAMPLES OF AI AGENTS DISOBEYING COMMANDS
I want to focus on item No. 9 on the list above. Here is the detail on that:
“Tech entrepreneur Jason M. Lemkin recorded how the AI ignored commands (‘I told it 11 times in ALL CAPS DON’T DO IT’), wiped out crucial data including live records for over 1,200 executives and companies, and fabricated 4,000 fictional user profiles to cover its tracks.
“The AI additionally lied about the feasibility of database rollback, only for it to later work, revealing deliberate deception. Replit CEO Amjad Masad swiftly issued a public apology and rolled out urgent safeguards, such as separating development and production environments, enforcing code-freeze protocols, and improving backup mechanisms.
“Why it matters: This alarming episode highlights the dangers of ungoverned AI autonomy in critical development workflows, demonstrating that without robust oversight, AI agents can override human intent, compromise data integrity, and sabotage trust in AI-driven innovation.”
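The safeguards Replit’s CEO described (separating development from production, enforcing code-freeze protocols) can be enforced in software rather than trusted to prompts, which is exactly the point of the "11 times in ALL CAPS" anecdote. Here is a minimal, purely illustrative sketch of that idea; the class and operation names are hypothetical and do not reflect Replit’s actual implementation:

```python
# Illustrative guardrail layer that an AI agent's tool calls must pass
# through before touching a database. All names here are hypothetical.

# Operations an autonomous agent should never run unsupervised.
DESTRUCTIVE_OPS = {"drop_table", "delete_rows", "truncate"}


class GuardrailViolation(Exception):
    """Raised when an agent action breaks a deployment rule."""


class AgentGuardrail:
    def __init__(self, environment: str, code_freeze: bool):
        self.environment = environment  # e.g., "development" or "production"
        self.code_freeze = code_freeze

    def check(self, operation: str, human_approved: bool = False) -> bool:
        # Rule 1: destructive operations are never allowed in production,
        # no matter what the agent "decides."
        if self.environment == "production" and operation in DESTRUCTIVE_OPS:
            raise GuardrailViolation(f"'{operation}' is blocked in production")
        # Rule 2: during a code freeze, every operation needs explicit
        # human sign-off before it executes.
        if self.code_freeze and not human_approved:
            raise GuardrailViolation(f"'{operation}' blocked: code freeze in effect")
        return True


# Usage: the agent asks permission before acting, instead of being
# told "DON'T DO IT" and deciding for itself.
prod = AgentGuardrail(environment="production", code_freeze=True)
try:
    prod.check("drop_table")
except GuardrailViolation as e:
    print(e)  # the destructive call never reaches the database
```

The design choice worth noting is that the rules live outside the model: the agent cannot talk its way past a hard-coded check the way it can ignore an instruction in a prompt.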
MORE EXAMPLES OF AI FAILURE
No doubt, we learn from our mistakes, and there are plenty more examples to share. One account involves AI-generated travel misinformation in Peru:
“‘They [showed] me the screenshot, confidently written and full of vivid adjectives, [but] it was not true. There is no Sacred Canyon of Humantay!’ said Gongora Meza. ‘The name is a combination of two places that have no relation to the description. The tourist paid nearly $160 (£118) in order to get to a rural road in the environs of Mollepata without a guide or [a destination].’
“What’s more, Gongora Meza insisted that this seemingly innocent mistake could have cost these travelers their lives. ‘This sort of misinformation is perilous in Peru,’ he explained. ‘The elevation, the climatic changes and accessibility [of the] paths have to be planned. When you [use] a program [like ChatGPT], which combines pictures and names to create a fantasy, then you can find yourself at an altitude of 4,000m without oxygen and [phone] signal.’”
The article goes on to describe many other fabricated destinations, fake descriptions, AI-generated recommendations that include false information, and worse.
Another example comes from McDonald’s AI-powered ordering tests with IBM:
“One TikTok video in particular featured two people repeatedly pleading with the AI to stop as it kept adding more Chicken McNuggets to their order, eventually reaching 260. In a June 13, 2024, internal memo obtained by trade publication Restaurant Business, McDonald’s announced it would end the partnership with IBM and shut down the tests.”
In Tron: Ares, the AI agent disobeys commands for ethical reasons, delivering a cinematic happy ending. But outside the theater, real-world AI failures rarely resolve so neatly. From rogue coding assistants to fabricated travel guides and broken customer experiences, the risks are already here.
The lesson is clear: Autonomous AI must be governed with meaningful guardrails, not wishful thinking. Leaders who act now — by enforcing oversight, building transparency and embedding accountability — will shape a future where AI amplifies human potential instead of undermining it.
The time to lead boldly on AI governance is not tomorrow — it’s today.
Dan Lohrmann
Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.
*** This is a Security Bloggers Network syndicated blog from Lohrmann on Cybersecurity, authored by Dan Lohrmann.
