August 7, 2025
Chris Taylor
AI Agents Are Broken. Is GPT-5 Really the Answer?
AI agents face fundamental flaws in real-world tasks, and GPT-5 may not solve their core reliability and security issues.
As 2025 dawned, OpenAI CEO Sam Altman was promoting two developments he insisted would transform our lives. One was GPT-5, a long-anticipated major upgrade to the Large Language Model (LLM) that powered ChatGPT's rise to tech world superstardom. The other? AI Agents that don't just answer your queries like ChatGPT, but actually get stuff done for you. "We believe that, in 2025, we may see the first AI agents join the workforce and materially change the output of companies," Altman wrote back in January.

Well, we're eight months in, and Altman's prediction already needs a big old asterisk. Sure, companies are keen to adopt AI Agents, such as OpenAI's ChatGPT agent. In a May 2025 report, consultancy giant PwC found that half of all firms surveyed planned to implement some kind of AI Agent by the end of the year. Some 88% of executives want to increase their teams' AI budgets because of Agentic AI.

The Reality of AI Agents: Glitches and Failures
But what about the actual AI Agent experience? With apologies to all those hopeful executives, the reviews are almost uniformly negative. If "AI Agents" were a new high-tech James Bond movie, here are the kinds of blurbs you'd see on Rotten Tomatoes: "glitchy … inconsistent" (Wired); "came off like a clueless internet newbie" (Fast Company); "reality doesn't live up to the hype" (Fortune); "not matching up to the buzzwords" (Bloomberg); "the new vaporware" … "overpromising is worse than ever" (Forbes).

Study Finds OpenAI's Entry Failed Nearly Every Time
A May 2025 Carnegie Mellon University study (PDF) found Google's Gemini Pro 2.5 failed at real-world office tasks 70% of the time. And that was the best-performing agent. OpenAI's entry, powered by GPT-4o, failed more than 90% of the time. GPT-5 is likely to improve on that number … but that's not saying much. And not just because early reports say OpenAI struggled to fill GPT-5 with enough improvements to make it worthy of the release number.

Indeed, it's starting to look to researchers like this disappointment is baked into the whole process of LLMs learning to do stuff for you. The problem, as this AI Agent engineer's analysis makes clear, is simple math: errors compound over time, so the more steps an agent takes, the worse it gets. (An agent that is 95% reliable at each step will complete a 20-step task only about 36% of the time.) AI Agents that do multiple complex tasks are prone to hallucination, like all AI. In the end, some agents "panic" and can make "a catastrophic error in judgment," to quote an apology from a Replit AI Agent that literally deleted a customer's database after 9 days of working on a coding task. Replit's CEO called the failure "unacceptable."

Tellingly, that isn't the only AI-Agent-wipes-code story of 2025, which explains why one enterprising startup is offering insurance on your AI Agent going haywire, and why Walmart has had to bring in four "super Agents" in a bid to corral its AI Agents. No wonder a recent Gartner paper predicted that 40% of all those AI Agent projects currently being initiated by companies will be canceled within two years. "Most Agentic AI projects," wrote senior analyst Anushree Verma, are "driven by hype and misapplied … This can blind organizations to the real cost and complexity of deploying AI agents at scale."

What Can GPT-5 Do for AI Agents?
It's possible that ChatGPT agent will vault to the top of the reliability charts once it's powered by GPT-5. (Again, that's not the highest of bars.) But the new release is unlikely to fix what really ails the Agentic world. That's because guardrails are already being erected, by companies as well as regulators, shutting down what even the most reliable AI Agent can do for you.

Take Amazon, for example. The world's largest retailer, like most tech giants, is talking a big game on AI Agents (as it did at a Shanghai Agentic AI fair in July). At the same time, Amazon has shut down the ability of any AI Agent to browse and buy anywhere on its site. That makes sense for Amazon, which has always wanted control over the customer experience, not to mention its desire to deliver ads and sponsored results to actual human eyeballs. But it also curtails a massive amount of potential Agent activity right there. (On the plus side, no "catastrophic failure" involving a large pile of next-day deliveries at your door.)

And do we trust AI Agents to buy online for us anyway? It's not that they're evil and want to steal your credit card data; it's that they're naive and vulnerable to being phished by bad actors who do want your card. Even GPT-5 may not be able to get around one vulnerability seen by researchers: data embedded in images can instruct AI agents to reveal any credit card info they might have, with the user being none the wiser. If that kind of problem is exploited on a corporate scale, then Altman may be right about AI Agents "materially changing output," just not in the way he meant.

Frequently Asked Questions (FAQ)
AI Agent Performance and Capabilities
Q: What are AI Agents and what is their intended function?
A: AI Agents are designed to go beyond simple question-answering, like ChatGPT. Their primary function is to perform tasks and get things done for users autonomously.

Q: What are the current challenges with AI Agents?
A: Current AI Agents are often described as "glitchy" and "inconsistent." They can perform poorly, behave like "clueless internet newbies," and fail to live up to the hype. Studies indicate high failure rates in real-world office tasks.

Q: How does GPT-5 aim to improve AI Agents?
A: GPT-5 is anticipated to be a major upgrade to the LLM powering ChatGPT, with the expectation that it will improve the reliability and performance of AI Agents. However, the article suggests that fundamental limitations may persist.

Q: What is the "compounding error" problem with AI Agents?
A: Researchers suggest that the more complex tasks an AI Agent performs, the more errors it makes, as errors compound over time. This can lead to hallucinations and potentially "catastrophic errors in judgment."

Q: What is "agent washing"?
A: "Agent washing" refers to the practice of presenting software as an AI Agent without it truly possessing autonomous task-completion capabilities.

Security and Reliability Concerns
Q: What are the security risks associated with AI Agents making purchases?
A: AI Agents can be naive and vulnerable to phishing attacks, potentially exposing sensitive information like credit card details.

Q: Are there vulnerabilities that could expose user data through AI Agents?
A: Yes, researchers have identified vulnerabilities where data embedded in images can instruct AI agents to reveal sensitive information, like credit card details, without the user's awareness.

Q: What is the outlook for the success of AI Agents based on industry reports?
A: A Gartner paper predicts that 40% of AI Agent projects may be canceled within two years, citing that many projects are driven by hype and misapplication, leading to underestimation of costs and complexity.

Q: What preventative measures are companies taking regarding AI Agent failures?
A: Some companies are exploring insurance for AI Agent malfunctions, while others, like Walmart, are implementing "super Agents" to manage and oversee their deployed AI Agents.

Future of AI Agents
Q: What role might GPT-5 play in the future of AI Agents?
A: GPT-5 is expected to improve the capabilities of AI Agents, but significant systemic issues and the implementation of guardrails by companies and regulators may limit their full potential.

Q: How are companies like Amazon addressing the limitations of AI Agents?
A: Companies like Amazon are imposing restrictions on AI Agents, such as preventing them from browsing and buying on their platforms, to maintain control over the customer experience and prevent potential issues.

Originally published at Mashable on August 7, 2025.