
Mint Primer | Strawberry: Can it unlock AI’s reasoning power?



OpenAI plans to release two highly anticipated models. Orion, potentially the new GPT-5 model, is expected to be an advanced large language model (LLM), while Strawberry aims to enhance AI reasoning and problem-solving, particularly in mastering math.

Why are these projects important?

Project Strawberry (earlier dubbed Q*, or Q-Star) is reportedly a secret OpenAI initiative to improve AI’s reasoning and decision-making for more generalized intelligence. OpenAI co-founder Ilya Sutskever’s concerns about its risks led to CEO Sam Altman’s brief ouster. Unlike Orion, which focuses on optimizing existing LLMs such as GPT-4 by cutting computational costs and enhancing performance, Strawberry aims to boost AI’s cognitive abilities, according to The Information and Reuters. OpenAI might even integrate Strawberry into ChatGPT to enhance its reasoning.

If true, how will they impact the tech world?

For autonomous systems such as self-driving cars or robots, Strawberry could improve safety and efficiency. Future iterations may focus on interpretability, making the model’s decision-making processes more transparent. Tech giants such as Google and Meta might face heightened competition as clients in healthcare, finance, automobiles and education, which increasingly rely on AI, embrace OpenAI’s newer, enhanced models. Smaller startups, too, could struggle to compete with the new products, affecting their market position and investment prospects.

How can we be sure OpenAI is developing these?

New investors appear keen on OpenAI, which, according to The Wall Street Journal, is planning to raise funds in a round led by Thrive Capital that would value it at more than $100 billion. Apple and Nvidia are reportedly likely to invest in this round. Microsoft has already invested more than $10 billion in OpenAI, lending weight to reports that the company is upgrading its AI models.

But can AI models actually reason?

AI still struggles with human-like reasoning. But in March, researchers from Stanford and Notbad AI indicated that their Quiet-STaR model could be trained to think before it responds, a step towards AI models learning to reason. DeepMind’s proposed framework for classifying the capabilities and behaviour of artificial general intelligence (AGI) models acknowledges that an AI model’s “emergent” properties could give it capabilities, such as reasoning, that were not explicitly anticipated by the developers of these models.
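The core idea behind “think before responding” is two-stage generation: the model first produces an internal rationale, then conditions its final answer on that rationale. The sketch below is a toy illustration of that idea only, not the actual Quiet-STaR training method; `llm_generate` is a hypothetical stand-in for a language-model call, stubbed here so the example runs on its own.

```python
# Toy sketch of "think before responding": produce a hidden rationale first,
# then condition the final answer on it. Not the real Quiet-STaR algorithm.

def llm_generate(prompt: str) -> str:
    # Hypothetical stand-in for a language-model call, stubbed so this runs.
    if prompt.startswith("Think step by step"):
        return "23 x 4 = (20 x 4) + (3 x 4) = 80 + 12 = 92."
    return "92"

def answer_with_reasoning(question: str) -> str:
    # Stage 1: generate an internal rationale (kept hidden from the user).
    rationale = llm_generate(f"Think step by step about: {question}")
    # Stage 2: answer conditioned on both the question and the rationale.
    return llm_generate(f"Question: {question}\nReasoning: {rationale}\nAnswer:")

print(answer_with_reasoning("What is 23 x 4?"))  # prints: 92
```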

Will ethical concerns increase?

Despite claims of safe AI practices, big tech faces scepticism due to past misuse of data and violations of copyright and intellectual property (IP). AI models with enhanced reasoning could also fuel misuse, such as the spread of misinformation. The Quiet-STaR researchers themselves admit there are “no safeguards against harmful or biased reasoning”. Sutskever, who proposed what is now Strawberry, has launched Safe Superintelligence Inc., aiming to advance AI’s capabilities “as fast as possible while making sure our safety always remains ahead”.

 


