Sunday, July 20, 2025

Is OpenAI exaggerating the powers of its new ChatGPT Agent?



That being said, OpenAI has flagged the agent as high-risk under its safety framework, warning it could potentially be used to create dangerous biological or chemical substances. Is this just marketing hype, timed to build momentum for the launch of GPT-5, or a sign that AI agents are genuinely becoming more powerful and autonomous, akin to the agents who protect the computer-generated world of The Matrix?

What is ChatGPT Agent?

Say you want to rearrange your calendar, find a doctor and schedule an appointment, or research competitors and deliver a report. ChatGPT Agent can now do it for you.

The agent can browse websites, run code, analyse data, and even create slide decks or spreadsheets—all based on your instructions. It combines the strengths of OpenAI’s earlier tools—Operator (which could navigate the web) and Deep Research (which could analyse and summarise information)—into a single system. You stay in control throughout: ChatGPT asks for permission before doing anything important, and you can stop or take over at any time. This new capability is available to Pro, Plus, and Team users through the tools dropdown.

How does it work?

ChatGPT Agent uses a powerful set of tools to complete tasks, including a visual browser to interact with websites like a human, a text-based browser for reasoning-heavy searches, a terminal for code execution, and direct application programming interface (API) access.

It can also connect to apps such as Gmail or GitHub to fetch relevant information. You can log in to websites within the agent’s browser, allowing it to dig deeper into personalised content. All of this runs on its own virtual computer, which keeps track of context even across multiple tools.

The agent can switch between browsers, download and edit files, and adapt its methods to complete tasks quickly and accurately. It’s built for back-and-forth collaboration—you can step in anytime to guide or change the task, and ChatGPT can ask for more input when needed. If a task takes time, you’ll get updates and a notification on your phone once it’s done.

Has OpenAI tested its performance?

OpenAI said that on Humanity’s Last Exam (HLE), which tests expert-level reasoning across subjects, ChatGPT Agent achieved a new high score of 41.6, rising to 44.4 when multiple attempts were run in parallel and the most confident response was selected. On FrontierMath, the toughest known math benchmark, the agent scored 27.4% using tools such as a code-executing terminal—far ahead of previous models.

In real-world tasks, ChatGPT Agent performs at or above human levels in about half of the cases, based on OpenAI’s internal evaluations. These tasks include building financial models, analysing competitors, and identifying suitable sites for green hydrogen projects.

ChatGPT Agent also outperforms others on specialised tests such as DSBench for data science and SpreadsheetBench for spreadsheet editing (45.5% vs Copilot Excel’s 20.0%). On BrowseComp and WebArena, which test browsing skills, the agent achieves the highest scores to date, according to OpenAI.

What are some of the things it can do?

Consider the case of travel planning. The agent won’t just suggest ideas but navigate booking websites, fill out forms, and even make reservations once you give it permission.

You can also ask it to read your emails, find meeting invitations, and automatically schedule appointments in your calendar, or even draft and send follow-up emails. This level of coordination typically required juggling between apps, but the agent manages it in a single conversational flow.

Another example involves shopping and price comparison. You can tell the agent to “order the best-reviewed smartphone under 15,000”, and it can search online stores, compare prices and reviews, and proceed to checkout on a preferred platform. Customer support and task automation are other examples, where the agent is used to troubleshoot an issue, log into support portals, and even file return or refund requests.

How are AI agents typically built?

Unlike basic chatbots, AI agents are autonomous systems that can plan, reason, and complete complex, multi-step tasks with minimal input—such as coding, data analysis, or generating reports.

They are built by combining ways to take in information, think, and take action. Developers begin by deciding what the agent should do, following which the agent collects data, such as text or images, from its environment. AI agents use large language models (LLMs) like GPT-4 as their core “brain”, which allows them to understand and respond to natural language instructions.

To allow AI agents to take action, developers connect the LLM to things like a web browser, code editor, calculator, and APIs for services such as Gmail or Slack. Frameworks like LangChain help integrate these parts, and keep track of information. Some AI agents learn from experience and get better over time. Testing and careful setup make sure they work well and follow rules.
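The build steps above—an LLM “brain” that plans, a set of tools to act with, and a loop that feeds observations back in—can be sketched in a few lines. This is an illustrative, framework-free sketch: the planner here is a stub standing in for a real LLM call, and all tool names are invented for the example.

```python
# Minimal sense-think-act agent loop. The "LLM" is a stub; a real agent
# would call a model such as GPT-4 to pick the next tool. Illustrative only.

def calculator(expression: str) -> str:
    """A 'tool' the agent can invoke: evaluate a math expression."""
    return str(eval(expression, {"__builtins__": {}}))

def web_search(query: str) -> str:
    """Placeholder tool; a real agent would call a search API."""
    return f"Top result for: {query}"

TOOLS = {"calculator": calculator, "web_search": web_search}

def llm_plan(task: str, observations: list) -> tuple:
    """Stub planner standing in for the LLM 'brain'.
    Returns (tool_name, argument) or ('finish', final_answer)."""
    if not observations:
        return ("calculator", task)      # first step: act on the task
    return ("finish", observations[-1])  # then report the observed result

def run_agent(task: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        tool, arg = llm_plan(task, observations)
        if tool == "finish":
            return arg                        # the planner decides it is done
        observations.append(TOOLS[tool](arg)) # act, then record the observation
    return "gave up"

print(run_agent("2 + 3 * 4"))  # → 14
```

Frameworks such as LangChain essentially industrialise this loop: they manage the tool registry, the conversation memory, and the prompt format used to ask the LLM which tool to call next.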

Does ChatGPT Agent have credible competition?

Google’s Project Astra, part of its Gemini AI line, is developing a multimodal assistant that can see, hear, and respond in real time. Gemini CLI is an open-source AI agent that brings Google’s Gemini model directly to the terminal for fast, lightweight access. It integrates with Gemini Code Assist, offering developers on all plans AI-powered coding in both VS Code and the command line.

Microsoft is embedding Copilot into Windows, Office, and Teams, giving its agent access to workflows, system controls, and productivity tools, soon enhanced by a dedicated Copilot Runtime.

Meta is building more socially focused agents within messaging and the metaverse, which could evolve into utility tools.

Apple is revamping Siri through Apple Intelligence, combining GPT-level reasoning with strict privacy features and deep on-device integration.

Other smart agents include Oracle’s Miracle Agent, IBM’s Watson tools, Salesforce’s Agentforce, Anthropic’s Claude 3.5, and Perplexity AI’s action-oriented agents through its Comet project, which blends search with agentic behaviour.

The competitive advantage, though, may go to companies that can integrate these AI agents into everyday applications and trigger actions through a single, unified tool – a capability that ChatGPT Agent has demonstrated.

Why did OpenAI warn that ChatGPT Agent could be used to trigger biological warfare?

OpenAI claimed ChatGPT Agent’s superior capabilities could, in theory, be misused to help someone create dangerous biological or chemical substances. However, it clarified that there was no solid evidence it could actually do so.

Regardless, OpenAI is activating the highest level of safety measures under its internal ‘preparedness framework’. These include thorough threat modeling to anticipate potential misuse, special training to ensure the model refuses harmful requests, and constant monitoring using automated systems that watch for risky behaviour. There are also clear procedures in place for suspicious activity.

Should we take this risk seriously?

Ja-Nae Duane, AI expert, MIT Research Fellow, and co-author of SuperShifts, said the more autonomous the agent, the more permissions and access rights it would require. For example, buying a dress requires wallet access; scheduling an event requires calendar and contact list access.

“While standard ChatGPT already presents privacy risks, the risks from ChatGPT Agent are exponentially higher because people will be granting it access rights to external tools containing personal information (like calendar, email, wallet, and more). There’s a significant gap between the pace of AI development and AI literacy; many people haven’t even fully understood ChatGPT’s existing privacy risks, and now they’re being introduced to a feature with exponentially more risks,” she said.

Duane added that the key risks included data leaks, mistaken actions, prompt injection, and account compromise, especially when handling sensitive information. Malicious actors, she warned, could exploit them by manipulating inputs, abusing tool access, stealing credentials, or poisoning data to bias outputs. Poor third-party integration and over-reliance on it could worsen the impact, while the agent’s “black box” nature would make it hard to trace errors, she added. In the wrong hands, these agents could be weaponised for fraud, phishing, or even to generate malware.

What are the other concern areas for enterprises?

Developers are increasingly deploying AI agents across IT, customer service, and enterprise workflows. According to Nasscom, 46% of Indian firms are experimenting with these agents, particularly in IT, HR, and finance, while manufacturing leads in robotics, quality control, and automation.

Beyond concerns around hallucinations, security, privacy, and copyright or intellectual property (IP) violations, a key challenge for businesses is ensuring a return on investment. Gartner noted that many so-called agentic use cases could be handled by simpler tools and predicted that more than 40% of such projects would be scrapped by 2027 over high costs, unclear value, or inadequate risk controls.

Of the thousands of vendors in this space, only around 130 are seen as credible; many engage in “agent washing” by repackaging chatbots, robotic process automation (RPA), or basic assistants as autonomous agents. Nasscom corroborated these concerns, highlighting that 62% of enterprises were still only testing agents in-house.

Why is ‘humans-in-the-loop’ a must?

OpenAI CEO Sam Altman advised granting agents only the minimum access needed for each task, not blanket permissions. Nasscom believes that to scale responsibly, enterprises must prioritise human-AI collaboration, trust, and data readiness. It has recommended firms adopt AI agents with a “human-in-the-loop” approach, reflecting the need for oversight and contextual judgment.
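Altman’s “minimum access” advice and Nasscom’s human-in-the-loop recommendation can be combined into a simple guard pattern: every tool call is checked against a per-task allow-list, and sensitive actions additionally require explicit user confirmation. This is an illustrative sketch only; the tool names and the confirmation hook are invented for the example.

```python
# Sketch of least-privilege plus human-in-the-loop gating for agent actions.
# Illustrative only: tool names and the confirm callback are hypothetical.

SENSITIVE = {"send_email", "make_payment"}  # actions that always need sign-off

def guarded_call(tool: str, allowed: set, confirm) -> str:
    """Run a tool only if it was granted for this task and,
    for sensitive tools, only if the human approves."""
    if tool not in allowed:
        return f"denied: {tool} not granted for this task"
    if tool in SENSITIVE and not confirm(tool):
        return f"blocked: user declined {tool}"
    return f"executed: {tool}"

# Grant only what this task needs (read the calendar, send one email).
allowed = {"read_calendar", "send_email"}
auto_decline = lambda tool: False  # stand-in for a real confirmation prompt

print(guarded_call("read_calendar", allowed, auto_decline))  # executed
print(guarded_call("send_email", allowed, auto_decline))     # blocked
print(guarded_call("make_payment", allowed, auto_decline))   # denied
```

The design choice here is that denial is the default: an action proceeds only when both the task-scoped grant and (for sensitive tools) the human approval are present, mirroring the oversight model Nasscom describes.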

According to Duane, users must understand both the tool’s strengths and its limits, especially when handling sensitive data. Caution is key, as misuse could have serious consequences. She also emphasised the importance of AI literacy, noting that AI was evolving far faster than most people’s understanding of how to use it responsibly.


