Nvidia’s B200 GPU Revealed: Power Consumption Reaches 1000W

Dell Discloses Nvidia’s Next-Generation Blackwell AI Chip: B100 to be Launched in 2024, B200 in 2025.


On March 4th, Barron’s reported that a senior Dell executive revealed during the company’s recent earnings call that Nvidia plans to launch its next flagship AI computing chip, the B200 GPU, in 2025. The B200’s per-GPU power consumption is said to reach 1000W. By comparison, Nvidia’s current flagship H100 GPU draws 700W, while Nvidia’s H200 and AMD’s Instinct MI300X draw between 700W and 750W. The B200 had not previously appeared on Nvidia’s roadmap, suggesting the company may have disclosed the information to major partners like Dell ahead of time.

NVIDIA data center/AI product iteration roadmap (Source: NVIDIA)

According to its previously disclosed roadmap, B100 is the next-generation flagship GPU, GB200 is a superchip that combines CPU and GPU, and GB200NVL is an interconnected platform for supercomputing use.

NVIDIA data center/AI GPU base information (Source: Wccftech)

Nvidia’s Blackwell GPU series will include at least two major AI acceleration chips: the B100, launching this year, and the B200, debuting next year.

Screenshot of the earnings-call transcript of Jeffrey W. Clarke, vice chairman and COO of Dell Technologies

There are reports that Nvidia’s Blackwell GPUs may adopt a single-chip design. However, given the power consumption and cooling requirements, it is speculated that the B100 may instead be Nvidia’s first dual-chip design, providing a larger surface area over which to dissipate heat. Last year, Nvidia became TSMC’s second-largest customer, accounting for 11% of TSMC’s net revenue, second only to Apple’s 25% contribution. To further improve performance and energy efficiency, Nvidia’s next-generation AI GPUs are expected to use enhanced process technologies.

Like the Hopper H200 GPU, the first-generation Blackwell B100 will use HBM3E memory technology. The upgraded B200 variant may adopt even faster HBM memory, potentially with higher capacity, upgraded specifications, and enhanced features. Nvidia has high expectations for its data center business: according to a Korean media report from last December, Nvidia has pre-paid over $1 billion in orders to memory chip makers SK Hynix and Micron to secure sufficient HBM3E inventory for the H200 and B100 GPUs.

Roadmap of memory technology evolution used in NVIDIA GPUs (Source: NVIDIA)

Dell and HP, two major AI server vendors, both reported a sharp increase in demand for AI-related products during their recent earnings calls. Dell stated that its fourth-quarter AI server backlog nearly doubled, with AI orders spread across several new and existing GPU models. HP said during its call that GPU delivery times remain “quite long,” at 20 weeks or more.
