Friday, November 15, 2024

Accelerating the future: supercomputing, AI, part one

© Mark Ollig


Intel Corp. (INTC), founded in 1968, had represented the semiconductor sector on the Dow Jones Industrial Average (DJIA) since 1999; that changed this month.

The DJIA, established May 26, 1896, with 12 companies, is today an index tracking the stock performance of 30 major companies traded on US stock exchanges.

In 1969, Intel partnered with Busicom, a Japanese calculator manufacturer founded in 1948, to develop a custom integrated circuit (IC) for its calculators.

This contract led to the creation of the Intel 4004, the first commercially available microprocessor, released in 1971 and used in Busicom’s 141-PF calculator; the 4004 combined the central processing unit (CPU), memory, and input and output controls on a single chip.

By integrating transistors, resistors, and capacitors onto a single chip, ICs like the Intel 4004 revolutionized the electronics industry, enabling the design of smaller, more powerful devices. Intel designs and manufactures ICs, including microprocessors, which serve as a computer’s CPU.

Today’s Intel CPUs are supplied to computer manufacturers, who integrate them into laptops, desktops, and the data center servers used for cloud computing and storage.

The Dow Jones recently replaced Intel with NVIDIA (NVDA), a technology company founded in 1993.

“NVIDIA” originates from the Latin word “invidia,” meaning “envy.” The company’s founders reportedly chose this name with the aim of creating products that would be the envy of the tech industry; fittingly, its green logo nods to competitors turning “green with envy.”

NVIDIA has outpaced Intel in artificial intelligence (AI) hardware; however, Intel has strengthened its AI capabilities with its Core Ultra processor series and the Gaudi 3 accelerator.

Gaudi 3 is a data center-focused AI accelerator designed for specialized workloads such as processing AI data in large-scale computing centers and training large language models.

The DJIA has clearly recognized this shift, highlighting NVIDIA’s growing influence in AI through its graphics processing unit (GPU) technology.

The GPU serves as the computing muscle, executing the complex calculations and rendering work that graphics-heavy applications demand.

Best known for designing and manufacturing high-performance GPUs, NVIDIA released its first graphics chip, the NV1, in 1995, followed by the RIVA 128 in 1997 and the GeForce 256 in 1999.

GPUs are processors engineered for parallel computation, excelling in tasks that require simultaneous processing, such as rendering complex graphics in video games and editing high-resolution videos.

Their architecture also makes them especially suited for AI and machine learning applications, where their ability to rapidly process large datasets substantially reduces the time required for AI model training.
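
To picture that parallelism, here is a minimal sketch in Python using the free PyTorch library (an assumption for illustration; the matrix sizes are made up): the same matrix multiplication at the heart of AI model training runs first on the CPU and then, if an NVIDIA GPU is present, on the GPU, where thousands of cores compute the output values simultaneously.

import torch

# Two large matrices, the kind of data AI training multiplies constantly.
a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

# The multiplication on the CPU.
cpu_result = a @ b

# The identical multiplication on an NVIDIA GPU, if one is available;
# the many output values are computed in parallel across its cores.
if torch.cuda.is_available():
    gpu_result = (a.to("cuda") @ b.to("cuda")).cpu()
    # The two results agree to within floating-point rounding.
    print(torch.allclose(cpu_result, gpu_result, atol=1e-3))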

Originally focused on graphics rendering, GPUs have evolved to become essential for a wide range of applications, including creative production, edge computing, data analysis, and scientific computing.

NVIDIA’s RTX 4070 Super GPU, priced between $500 and $600, targets mainstream gamers and content creators seeking 4K resolution.

For more demanding workloads, the RTX 4070 Ti Super GPU, priced at around $800, offers higher performance for the 3D modeling, simulation, and analysis work of engineers and other professionals.

Financial analysts use GPU technology to analyze the data behind market forecasts and risk assessments, helping them make better decisions.
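
As a rough illustration of that kind of workload, the Python sketch below (again assuming PyTorch, with made-up figures) runs a simple Monte Carlo simulation of a stock price, a staple of risk assessment; because each simulated path is independent, the million paths map naturally onto a GPU’s parallel cores.

import torch

# Hypothetical example inputs: starting price, expected yearly return,
# volatility, a one-year horizon, and one million simulated paths.
start_price, drift, volatility, years, paths = 100.0, 0.07, 0.20, 1.0, 1_000_000

# Use an NVIDIA GPU if one is present; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Every path is independent, so all one million are computed at once.
random_shocks = torch.randn(paths, device=device)
end_prices = start_price * torch.exp(
    (drift - 0.5 * volatility**2) * years
    + volatility * (years ** 0.5) * random_shocks
)

# Two numbers an analyst might read off: the average outcome, and the
# price level that only 5% of the simulated outcomes fall below.
print(end_prices.mean().item())
print(torch.quantile(end_prices, 0.05).item())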

Elon Musk launched his new artificial intelligence company, “xAI,” March 9, 2023, with the stated mission to “understand the universe through advanced AI technology.”

The company’s website announced the release of Grok-2, the latest version of its AI language model, featuring “state-of-the-art reasoning capabilities.”

Grok is reportedly named after a Martian word in Robert A. Heinlein’s science fiction novel “Stranger in a Strange Land,” implying a deep, intuitive understanding; at least Musk did not name it HAL 9000 (“2001: A Space Odyssey”) or the M-5 multitronic computing system (Star Trek).

Colossus, the supercomputer xAI built to train Grok, employs NVIDIA’s Spectrum-X Ethernet networking platform, providing 32 petabits per second (Pbps) of bandwidth to handle the data flows necessary for training large AI language models.

It also uses 100,000 NVIDIA H100 GPUs (the “H” comes from the Hopper architecture, named for pioneering computer scientist Grace Hopper), with plans to expand to 200,000 GPUs, including the newer NVIDIA H200 models.

The NVIDIA H100 GPU has 80 GB of HBM3 high-bandwidth memory and 3.35 terabytes per second (TB/s) of bandwidth.

The NVIDIA H200 GPU, released in the second quarter of 2024, has 141 GB of HBM3e (extended) memory and 4.8 TB/s bandwidth.

It outperforms the H100 on some AI tasks by up to 45%, delivers roughly twice the inference speed (a measure of how quickly an AI model processes new information and generates results) on large language models, and uses less energy.

Both the H100 and H200 GPUs are based on the Hopper architecture.

Be sure to read next week’s always-exciting Bits and Bytes for part two.


NVIDIA HGX H200 141GB 700W 8-GPU Board

Elon Musk’s ‘Colossus’ AI training system with 100,000 NVIDIA chips
(GPU module in the foreground)