The Wiretap: OpenAI Agent Outsmarts Bot Checks by Clicking 'I Am Not A Robot' Box
Alex Knapp
July 30, 2025
5 min read
OpenAI's new AI agent bypasses bot checks by clicking 'I am not a robot' boxes, raising challenges for web security and bot detection.
The Wiretap is your weekly digest of cybersecurity, internet privacy, and surveillance news.

One of the constant bits of friction in navigating the modern internet is proving to the site you’re browsing that you are, in fact, human. Often you can prove it by simply checking a box saying so. But in the brave new world of agentic AI, such basic checks won’t be enough to catch AI agents wandering around the internet to do tasks on their owners’ behalf.

Ars Technica reported that OpenAI’s new agent, which uses its own browser to access the internet and perform tasks, was observed by a Reddit user checking one of those “I am not a robot” boxes. As it did so, it provided the following narration: “I'll click the 'Verify you are human' checkbox to complete the verification on Cloudflare. This step is necessary to prove I'm not a bot and proceed with the action.”

In this particular case, the agent didn’t face one of the common puzzles aimed at catching bots – the ones that ask you to identify all the pictures with a bicycle or to rotate pieces of an image until it’s the right way up. But it’s just a matter of time before agents can solve those too. When bots get sophisticated enough to act like humans, the premise of web “captchas” starts to break down. How do you then protect websites from unwanted, malicious bot traffic? And how do you design sites so that agents representing real people can navigate them effectively? Let’s just hope a web designed for bots isn’t much more annoying for us lowly humans to navigate.

THE BIG STORY:
This $120 Million Startup’s AI Will Teach You How To Suck Less At Security
People are often the weakest link in the cybersecurity chain. Just last week, cleaning product giant Clorox claimed a cyberattack that may have caused as much as $380 million in damages was the result of a contracted service desk staffer resetting a password for a hacker pretending to work for the company.

IT departments are aware of the risk of human error, of course, and try to address it with education. Usually, this means a few emails and some simple training. But the advice in these types of training is generalized and only rarely tailored to the specific needs of staff. It’s no wonder people never bother to read those emails.

This is the problem that cybersecurity startup Fable wants to tackle with a personalized approach. Founded in 2024 by Nicole Jiang, 31, and Dr. Sanny Liao, 42, who spent years at $5.1 billion cybersecurity company Abnormal, Fable claims its AI helps determine which employees need help improving their security practices and offers them custom tips and guidance. Read more at Forbes.

Stories You Have To Read Today
- Pro-Ukrainian hacker group Silent Crow took credit for a cyberattack that crippled IT systems of Russian airline Aeroflot, leading to dozens of flights being grounded.
- The viral app Tea, which enabled women to anonymously post images and comments about men they dated, suffered a cyberattack exposing data about thousands of users.
- Researchers found security vulnerabilities in door-to-door luggage service Airportr that would enable hackers to access users’ flight itineraries and personal information. The bugs could also allow cybercriminals to redirect the final destination of someone’s luggage.