Amazon- and Google-backed AI start-up Anthropic is making roughly $3 billion (£2.2bn) in annualised revenue, mostly from sales to businesses, Reuters reported.
Traffic on Anthropic’s consumer-oriented chatbot Claude remains far below that of OpenAI’s ChatGPT, but the company has seen strong business demand for use cases including code generation, the report said, in an apparent early validation of the use of AI in business environments.
The report came as well-known tech analyst Mary Meeker, in a new study, highlighted questions around the business models of the major US AI companies, warning that they risk being commoditised.
Business growth
Anthropic has seen its annualised revenue rate jump from nearly $1bn in December 2024 to $2bn around the end of March and $3bn at the end of May, Reuters’ report said, citing unnamed sources.
Most of the revenues reportedly come from selling AI models as a service to other companies, showing how business demand is growing.
Code-generation products have experienced major growth in recent months and many of these use Anthropic’s models, the report said.
The growth rate far surpasses that of fast-growing start-ups such as Snowflake, which took six quarters to go from a $1bn to a $2bn run-rate.
The report said privately held OpenAI expected to reach more than $12bn in total revenue by the end of this year, up from $3.7bn last year.
But the majority of OpenAI’s revenues come from subscriptions to its ChatGPT chatbot, the company told Bloomberg in late 2024.
The new study from Mary Meeker’s firm Bond Capital noted the rapid pace of growth in generative AI, but also warned of the risk of commoditisation.
Commoditisation risk
AI start-ups require vast amounts of venture-capital funding to train and operate their extremely energy-hungry models, but competitors are emerging in China and elsewhere that could offer comparable features for a far lower price, the study said.
“In the short term, it’s hard to ignore that the economics of general-purpose LLMs look like commodity businesses with venture-scale burn,” the study said.
Chinese AI start-up DeepSeek, which debuted a low-cost, high-performance model in January, last week released the first major update to its R1 reasoning model, which it said offers performance comparable to OpenAI’s models while requiring a fraction of the resources.