NVIDIA Reveals HGX H200: AI Accelerator Based On Hopper Architecture With HBM3 Memory

NVIDIA has announced the HGX H200, a new hardware platform for artificial intelligence computing. Built on the NVIDIA Hopper architecture, the platform is based on the H200 Tensor Core GPU.

The NVIDIA H200 is the first GPU to feature HBM3e memory, which is faster than standard HBM3. The H200 carries 141 GB of HBM3e with 4.8 TB/s of bandwidth, nearly twice the capacity and 2.4 times the bandwidth of the memory in the previous-generation NVIDIA A100 accelerator. For comparison, the H100 offers 80 GB of HBM3 at 3.35 TB/s, while AMD's upcoming Instinct MI300X will feature 192 GB of HBM3 at 5.2 TB/s.
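The quoted ratios are easy to sanity-check against the figures above. The snippet below assumes the 80 GB A100 with roughly 2.0 TB/s of HBM2e bandwidth as the baseline (the A100's specs are not stated in the text, so that baseline is an assumption):

```python
# Memory specs as stated in the article; the A100 baseline is assumed
# (80 GB HBM2e at ~2.0 TB/s for the 80 GB SXM variant).
specs = {
    "A100":   {"capacity_gb": 80,  "bandwidth_tbs": 2.0},
    "H100":   {"capacity_gb": 80,  "bandwidth_tbs": 3.35},
    "H200":   {"capacity_gb": 141, "bandwidth_tbs": 4.8},
    "MI300X": {"capacity_gb": 192, "bandwidth_tbs": 5.2},
}

baseline = specs["A100"]
for name, s in specs.items():
    cap_ratio = s["capacity_gb"] / baseline["capacity_gb"]
    bw_ratio = s["bandwidth_tbs"] / baseline["bandwidth_tbs"]
    print(f"{name}: {cap_ratio:.2f}x capacity, {bw_ratio:.2f}x bandwidth vs A100")
```

Under that assumed baseline, the H200 works out to about 1.76x the capacity ("almost twice") and exactly 2.4x the bandwidth of the A100, matching the claims above.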

Thanks to the memory upgrade, the H200 will deliver a significant performance boost when running already-trained artificial intelligence models. NVIDIA promises inference on the 70-billion-parameter Llama 2 large language model will be 1.9 times faster than on the H100, and inference on a trained 175-billion-parameter GPT-3 model will be 1.6 times faster.

H200 accelerators can be deployed in any data center: on-premises, cloud, hybrid, or edge. NVIDIA's global ecosystem of partner manufacturers, including ASRock Rack, ASUS, Dell Technologies, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Supermicro, Wistron and Wiwynn, can upgrade their existing systems with the H200. Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances, starting next year.

