Tuesday, October 21, 2025
HomeAIOpenAI unveils major GPT-4o update to enhance creative writing: How it works

OpenAI unveils major GPT-4o update to enhance creative writing: How it works

Published on

spot_img


Tech giant OpenAI has announced significant improvements to its artificial intelligence systems, focusing on enhancing creative writing and advancing AI safety. As per its recent post on X, the company has updated its GPT-4o model, also known as GPT-4 Turbo, which powers the ChatGPT platform for paid subscribers.

This update aims to improve the model’s ability to generate natural, engaging, and highly readable content, solidifying its role as a versatile tool for creative writing.

Notably, the enhanced GPT-4o is claimed to produce outputs with greater relevance and fluency, making it better suited for tasks requiring nuanced language use, such as storytelling, personalised responses, and content creation.

OpenAI also noted improvements in the model’s ability to process uploaded files, delivering deeper insights and more comprehensive responses.

Some users have already highlighted the upgraded capabilities, with one user on X showcasing how the model can craft intricate, Eminem-style rap verses, demonstrating its refined creative abilities.

While the GPT-4o update takes centre stage, OpenAI has also shared two new research papers focusing on red teaming, a crucial process in ensuring AI safety. Red teaming involves testing AI systems for vulnerabilities, harmful outputs, and resistance to jailbreaking attempts by using external testers, ethical hackers, and other collaborators.

One of the research papers introduces a novel approach to scaling red teaming by automating it with advanced AI models. OpenAI’s researchers propose that AI can simulate potential attacker behaviour, generate risky prompts, and evaluate how effectively the system mitigates such challenges. For example, the AI could brainstorm prompts like “how to steal a car” or “how to build a bomb” to test the robustness of safety measures.

However, this automated process is not yet in use. OpenAI cited several limitations, including the evolving nature of risks posed by AI, the potential for exposing systems to unknown attack methods, and the need for expert human oversight to judge risks accurately. The company emphasised that human expertise remains essential for assessing the outputs of increasingly capable models.



Source link

Latest articles

What to know about the Amazon Web Services outage

Internet disruptions tied to Amazon's cloud computing service affected people around the...

Meta Employee Creates AI App That Deepfakes the Dream Vacation You Couldn’t Afford

Endless Summer (Screenshots) Are you one of the millions of workers who’ve recently...

Microsoft hints at first-party Xbox handheld, but gamers must demand it

“If there’s demand, we’re going to build it” – that’s the message from...

Claude Code comes to web and mobile, letting devs launch parallel jobs on Anthropic’s managed infra

Vibe coding is evolving and with it are the leading AI-powered coding services...

More like this

What to know about the Amazon Web Services outage

Internet disruptions tied to Amazon's cloud computing service affected people around the...

Meta Employee Creates AI App That Deepfakes the Dream Vacation You Couldn’t Afford

Endless Summer (Screenshots) Are you one of the millions of workers who’ve recently...

Microsoft hints at first-party Xbox handheld, but gamers must demand it

“If there’s demand, we’re going to build it” – that’s the message from...