Saturday, August 16, 2025

Broadcom Upgrades Jericho Data Centre Chip For AI Age



Broadcom on Monday released its next-generation Jericho networking chip, which takes aim at artificial intelligence workloads by speeding up traffic over long distances, allowing sites up to 60 miles (100 km) apart to be linked together.

The Jericho4 also improves security by encrypting data, a critical concern when data is passing outside a facility’s physical walls.

The new chip, which uses TSMC’s 3 nanometre manufacturing process, incorporates high-bandwidth memory (HBM), which has also become the preferred memory type for AI accelerator chips from the likes of Nvidia and AMD.


Multiple sites

The increased memory is another factor enabling the network chips to transfer data over long distances, said Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group.

Essentially, the Jericho4 allows multiple smaller data centres to be linked up into a single, more powerful system, Broadcom said.

This creates more flexibility in the age of AI, when workloads require massive amounts of computing and electrical power.

Cloud infrastructure companies are looking to link together hundreds of thousands of power-hungry AI chips, but a network of 100,000 or 200,000 GPUs requires more power than is typically available in one physical building, Velaga said.

To make the cluster possible, companies can use Jericho4-based networks to link together clusters of server racks across multiple buildings, he said.

Products based on the chip can also help cloud companies move compute workloads closer to customers by creating data centre sites in congested urban areas, where it may be more practical to link together multiple, smaller sites.

The chip complements Broadcom’s Tomahawk line of chips, which connect racks within a data centre, typically at distances under one kilometre.

Broadcom said the Jericho4 can connect more than 1 million processors across multiple data centres and can handle about four times more data than the previous version.

The company said it began shipping the chip this week to early customers such as cloud providers and networking gear manufacturers, with products using it expected to appear in about nine months.

In-house AI chips

Broadcom also makes custom AI accelerator chips for the likes of Facebook parent Meta Platforms, which is working with Broadcom to build new Santa Barbara AI data centre servers.

Broadcom is supplying the custom processors for the servers, which are being manufactured by Taiwanese firm Quanta Computer, Economic Daily News reported.

Meta has ordered up to 6,000 racks of the Santa Barbara servers, which are to replace its existing Minerva servers, the news outlet reported.

OpenAI is reportedly working with Broadcom and TSMC on its first in-house AI accelerator chip, as part of an effort to diversify its supply of specialised processing power.


