Nvidia has pulled the wraps off its new Hopper GPU architecture at its AI-focused GTC conference. As expected, the chip is a beast, packing 80 billion transistors into a gigantic 814mm² monolithic die. It features PCIe Gen 5 connectivity and uses up to six stacks of High Bandwidth Memory (HBM). Hopper is the official replacement for the Ampere-based GA100, which launched two years ago. Nvidia will offer the H100 in a variety of products designed to accelerate AI-based enterprise workloads.
Hopper is a significant step forward for Nvidia. Despite the die being roughly the same size as the Ampere-based GA100 that preceded it, it packs nearly 50 percent more transistors, thanks to the company's transition from TSMC's 7nm node to TSMC's custom 4N process. Nvidia has also moved from the GA100's 40GB or 80GB of HBM2/HBM2e to 80GB of HBM3 memory on a 5,120-bit-wide memory bus.
This allows for up to 3TB/s of memory bandwidth. Nvidia claims 20 H100s linked together “can sustain the equivalent of the entire world’s internet traffic.” It’s an odd comparison for a graphics card, even a data center product.
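For context on the raw number, the 3TB/s figure follows directly from the bus width and the per-pin data rate. Here's a minimal back-of-the-envelope sketch in Python; the roughly 4.7Gb/s per-pin rate is inferred from the figures above, not an official HBM3 spec value:

```python
# Back-of-the-envelope check of the H100's quoted memory bandwidth.
# The ~4.7 Gb/s per-pin rate is inferred from the article's figures,
# not taken from an official spec sheet.
bus_width_bits = 5120      # five active 1,024-bit HBM3 stacks
pin_rate_gbps = 4.7        # assumed effective data rate per pin (Gb/s)

bandwidth_gb_s = bus_width_bits * pin_rate_gbps / 8   # bits -> bytes
print(f"~{bandwidth_gb_s / 1000:.1f} TB/s")           # ≈ 3.0 TB/s
```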
One of the more interesting advancements in Hopper is the inclusion of DPX instructions. Nvidia says these accelerate dynamic programming, a technique used by algorithms across many scientific fields. That includes the Floyd-Warshall algorithm, used to find optimal routes for autonomous vehicle fleets, and the Smith-Waterman algorithm, used in sequence alignment for DNA and protein classification and folding. The company claims Hopper can speed up these workloads by up to 40x compared with CPUs and 7x compared with its previous-generation GPUs.
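Dynamic programming here simply means building a solution out of cached sub-solutions instead of recomputing them. As a point of reference, the Floyd-Warshall algorithm mentioned above looks like this in plain Python; this is a CPU-side illustration of the kind of loop nest DPX is meant to accelerate, not Nvidia's implementation:

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest paths via dynamic programming.

    `dist` is an n x n matrix of edge weights, with math.inf where no
    direct edge exists and 0 on the diagonal. Updated in place.
    """
    n = len(dist)
    for k in range(n):                  # allow paths routed through node k
        for i in range(n):
            for j in range(n):
                # Classic DP relaxation: reuse the already-computed best
                # i->k and k->j sub-paths instead of searching from scratch.
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Tiny example: four nodes with directed, weighted edges.
INF = math.inf
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))
```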
The arrival of the H100 means the company also has new DGX systems. Each DGX H100 pairs eight H100 GPUs connected by Nvidia's fourth-generation NVLink technology, which provides 1.5x the bandwidth of the previous generation, at 900GB/s. An external NVLink Switch can connect up to 32 nodes into a DGX SuperPOD supercomputer. A single DGX H100 delivers up to 32 petaflops of FP8 performance.
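The scaling math behind those figures is straightforward. A rough peak-throughput sketch follows, assuming the roughly 4 petaflops of FP8 per H100 implied by the 32-petaflop eight-GPU figure (the per-GPU number is back-calculated here, not quoted directly), and ignoring any interconnect overhead:

```python
# Rough FP8 throughput scaling for a DGX H100 and a 32-node SuperPOD.
# Per-GPU FP8 throughput is inferred from the 32 PFLOPS system figure.
fp8_pflops_per_gpu = 32 / 8        # ≈ 4 PFLOPS per H100 (back-calculated)
gpus_per_dgx = 8
nodes_per_superpod = 32            # linked via the external NVLink Switch

dgx_pflops = fp8_pflops_per_gpu * gpus_per_dgx
superpod_pflops = dgx_pflops * nodes_per_superpod

print(f"DGX H100:     {dgx_pflops:.0f} PFLOPS FP8")       # 32 PFLOPS
print(f"DGX SuperPOD: {superpod_pflops:.0f} PFLOPS FP8")  # ~1 exaflop peak
```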
Nvidia’s new architecture is named after Grace Hopper, an early pioneer of computer programming. The release of Hopper was anticipated, as it has been on Nvidia’s roadmaps for quite some time. Nvidia also typically releases a data center version of a new architecture before the gaming version arrives, so this is par for the course. The gaming successor to Ampere is named after another female IT trailblazer, Ada Lovelace, as we’ve previously reported. Nvidia’s H100 GPU will be available in the third quarter of 2022 in both SXM and PCIe form factors. You can watch Nvidia’s GTC keynote address here.
Now Read:
- Leaks Reveal Nvidia 40-Series With Massive L2 Cache, Almost Double the CUDA Cores
- Rumor: Nvidia’s RTX 4000 Goes Nuclear with 850W TGP
- Nvidia Reportedly Prepping Two New RTX 3050s, One with Just 4GB of VRAM
from ExtremeTech https://ift.tt/tnlGorZ