The company has already begun testing the DGX B200 platform, which packs eight B200 GPUs offering up to 1.4TB of HBM3E memory and up to 64TB/s of aggregate memory bandwidth. The platform is built for training and retraining powerful AI models.
The new Blackwell architecture delivers significant performance gains: according to NVIDIA, the DGX B200 reaches 72 petaflops for training and 144 petaflops for inference, a substantial jump over the previous DGX generation.
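To put those system-level figures in perspective, a quick back-of-envelope sketch divides the quoted totals across the eight GPUs in the chassis. The division by eight is an assumption for illustration only; NVIDIA's own per-GPU specifications may round differently.

```python
# Back-of-envelope per-GPU breakdown of the DGX B200 figures quoted above.
# System-level numbers come from the article; splitting evenly across the
# eight B200 GPUs is an illustrative assumption, not an official spec.
NUM_GPUS = 8
total_memory_tb = 1.4        # HBM3E capacity across the system
total_bandwidth_tbs = 64     # aggregate memory throughput
training_pflops = 72         # NVIDIA's quoted training figure
inference_pflops = 144       # NVIDIA's quoted inference figure

per_gpu = {
    "memory_gb": total_memory_tb * 1000 / NUM_GPUS,
    "bandwidth_tbs": total_bandwidth_tbs / NUM_GPUS,
    "training_pflops": training_pflops / NUM_GPUS,
    "inference_pflops": inference_pflops / NUM_GPUS,
}

for name, value in per_gpu.items():
    print(f"{name}: {value:g}")
```

Under that even split, each GPU works out to roughly 175GB of memory, 8TB/s of bandwidth, 9 petaflops for training, and 18 petaflops for inference.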
Beyond OpenAI, other tech giants such as Google, Microsoft, and Tesla have also shown interest in Blackwell. Notably, the same architecture also underpins the RTX 50-series graphics cards.
Source: Ferra
