One would think that a $40,000+ GPU would be the best graphics card for gaming, but the truth is more complicated than that. In fact, this Nvidia GPU can’t keep up with integrated graphics solutions.
Now, before you get too upset, you should know that I’m referring to Nvidia’s H100, which is built around the GH100 chip on Nvidia’s Hopper architecture. It’s a powerful data center GPU designed to handle high-performance computing (HPC) tasks – not PC gaming. It has no display outputs, and despite its wide-ranging capabilities, it also doesn’t come with a cooler of its own. This is because, again, you’ll find this GPU in a data center or server setup, where it’s cooled by powerful external fans.
While it has “only” 14,592 CUDA cores (fewer than the RTX 4090’s 16,384), it also packs an insane amount of VRAM on a massive bus. In total, the GPU carries 80 GB of HBM2e memory, split across five HBM stacks, each connected to a 1024-bit bus. Unlike Nvidia’s consumer GPUs, this card also still has NVLink, which means it can be linked with other cards to work seamlessly in multi-GPU systems.
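If you’re curious how specs like these show up on the software side, here’s a minimal sketch (assuming a system with the CUDA toolkit installed and the hypothetical file name gpu_info.cu) that queries a card’s streaming multiprocessor count, memory size, and memory bus width through the CUDA runtime API. On an H100, the reported bus width should correspond to the five 1024-bit HBM stacks combined.

```cpp
// Minimal sketch: query basic GPU properties with the CUDA runtime API.
// Build (assumed): nvcc gpu_info.cu -o gpu_info
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  SMs:              %d\n", prop.multiProcessorCount);
        std::printf("  Global memory:    %.1f GB\n", prop.totalGlobalMem / 1e9);
        std::printf("  Memory bus width: %d-bit\n", prop.memoryBusWidth);
        std::printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
    }
    return 0;
}
```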
The question remains: why exactly is a GPU like this so bad at gaming and general desktop use?
To put it to the test, YouTuber Gamerwan got hold of four of these H100 graphics cards and decided to put one in a regular Windows system to check its performance. The card was a PCIe 5.0 model, and it had to be paired with an RTX 4090 due to its lack of display outputs. Gamerwan also 3D-printed an external cooler specifically designed to keep the GPU running smoothly.
It takes a bit of work to get the system to recognize the H100 as a usable GPU, but once Gamerwan got past those hurdles, he even managed to get ray tracing support working. However, as the testing later revealed, there isn’t much support for anything else on a non-data-center platform.
In the 3DMark Time Spy benchmark, the GPU managed only 2,681 points. For comparison, the average score for an RTX 4090 is 30,353 points. That puts the H100 somewhere between the GTX 1050 and GTX 1060 on the consumer side. More tellingly, it’s almost the same as AMD’s Radeon 680M, which is an integrated GPU.
Gaming tests went just as poorly, with the graphics card averaging 8 frames per second (fps) in Red Dead Redemption 2. The lack of software support rears its ugly head here: although the H100 can run at a maximum of 350 watts, the card never seemed to exceed 100 watts during testing, which results in a significant drop in performance.
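If you want to sanity-check a power-limit observation like that on your own hardware, a short sketch using Nvidia’s NVML library (the same interface nvidia-smi relies on) can poll reported power draw against the enforced power limit. The device index 0, the ten one-second samples, and the file name power_watch.cpp are assumptions for illustration.

```cpp
// Minimal sketch: poll GPU power draw vs. the enforced power limit via NVML.
// Build (assumed): g++ power_watch.cpp -o power_watch -lnvidia-ml
#include <cstdio>
#include <chrono>
#include <thread>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) {
        std::printf("Failed to initialize NVML.\n");
        return 1;
    }
    nvmlDevice_t dev;
    // Assumes the GPU of interest is device 0.
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        unsigned int limit_mw = 0;
        nvmlDeviceGetEnforcedPowerLimit(dev, &limit_mw);  // reported in milliwatts
        for (int i = 0; i < 10; ++i) {                    // ten one-second samples
            unsigned int draw_mw = 0;
            nvmlDeviceGetPowerUsage(dev, &draw_mw);       // reported in milliwatts
            std::printf("Draw: %.1f W / Limit: %.1f W\n",
                        draw_mw / 1000.0, limit_mw / 1000.0);
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }
    nvmlShutdown();
    return 0;
}
```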
There are several reasons for this poor showing in games. For example, while the H100 is a super-powerful graphics card on paper, it’s architecturally very different from the AD102 GPU powering the RTX 4090. It only has 24 ROPs (render output units), a significant drop from the 176 ROPs found in the RTX 4090. Additionally, only four out of its 112 texture processing clusters (TPCs) can handle graphics rendering workloads.
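To see roughly what that ROP deficit means, here’s a back-of-the-envelope sketch comparing theoretical pixel fill rate (ROP count times clock speed, assuming about one pixel per ROP per clock). The boost clocks below are approximate values I’m assuming purely for illustration, not figures from the test.

```cpp
// Back-of-the-envelope pixel fill rate: ROP count x boost clock.
// Clock speeds are rough assumptions used only for illustration.
#include <cstdio>

struct Gpu {
    const char* name;
    int rops;          // render output units
    double clock_ghz;  // approximate boost clock (assumed)
};

int main() {
    const Gpu cards[] = {
        {"H100 PCIe", 24, 1.75},   // ~1.75 GHz boost (assumed)
        {"RTX 4090", 176, 2.52},   // ~2.52 GHz boost (assumed)
    };
    for (const Gpu& g : cards) {
        // Each ROP retires roughly one pixel per clock cycle.
        double gpixels_per_s = g.rops * g.clock_ghz;
        std::printf("%-10s ~%.0f Gpixels/s theoretical fill rate\n",
                    g.name, gpixels_per_s);
    }
    return 0;
}
```

Under those assumptions, the gap works out to roughly an order of magnitude before driver support even enters the picture.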
Nvidia’s consumer GPUs receive a lot of support on the software side in order to run well. This includes drivers, but also optimization work from developers – both in games and in other software. There are no drivers that optimize this GPU’s performance for gaming, and the result, as you can see, is pretty bad.
We’ve already seen the power of drivers with Intel Arc, where the hardware stayed the same but improved driver support delivered performance gains that made Arc an acceptable option if you’re buying a budget GPU. With no Nvidia Game Ready drivers and no access to the rest of Nvidia’s software suite (including the always-impressive DLSS 3), the H100 is a $40,000 GPU that has no business running any kind of game.
At its core, this is a compute GPU, not a graphics card in the sense most of us know it. It was built for all kinds of HPC tasks, with a strong focus on AI workloads. Nvidia maintains a strong lead over AMD when it comes to AI, and cards like the H100 play a big part in that.