With AI data center solutions, our partners are shaping the future
The technology landscape is constantly evolving, and we are proud to work with some of the most innovative and influential partners in the industry. With them as the backbone of the ecosystem, we are delivering products that work together to do amazing things. As you’re likely aware, the data center market is undergoing a massive transformation, driven by the growing demand for AI applications and services. The information in this blog is all publicly available, so I’ve footnoted sources.
Our ecosystem partners are delivering stunning technology that drives, perhaps even catapults, AI workloads forward: automated medical diagnoses, self-driving vehicles, personalized entertainment recommendations, smart home management, weather forecasting and, for my last one, AI-generated art. Now who can live without that? So let’s dive in as I focus on data center processors and accelerators for AI.
What’s going on out there?
NVIDIA: If you haven’t been paying attention, NVIDIA has quickly become one of the most valuable companies in the world. In just the past year, its stock price has risen over 230%, and its market cap is just under US$2 trillion (as of market close on Feb. 26, 2024). NVIDIA is a pioneer in data center accelerators, offering multiple flavors of GPUs. The latest for the data center is the H100, which can deliver up to nine times faster AI training and up to 30 times faster AI inference than the previous-generation A100.1 That’s remarkable given that many previously thought the A100 was a highly performant GPU. We’ve done our own testing with the A100, and in one workload test we saw a greater than 100 times improvement in feature aggregation, often the longest part of AI training, using NVIDIA’s Big Accelerator Memory technology. Need proof? Read our January 2024 blog about Micron 9400 NVMe SSDs.2
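To make “feature aggregation” a bit more concrete: in graph neural network training, every minibatch has to gather feature vectors for scattered node IDs out of a table that is usually far larger than the batch itself, and at scale far larger than system memory, which is why fast NVMe and GPU-direct access matter so much. Here’s a minimal Python sketch of that gather step; the shapes and names are illustrative assumptions of mine, not taken from the cited tests.

```python
# Minimal sketch of GNN feature aggregation (illustrative shapes, not from
# the cited tests). Each minibatch gathers scattered rows from a feature
# table that is typically far larger than the batch itself.
import numpy as np

NUM_NODES, FEAT_DIM = 1_000_000, 256
rng = np.random.default_rng(0)
features = rng.random((NUM_NODES, FEAT_DIM), dtype=np.float32)  # node feature table

def aggregate(node_ids: np.ndarray) -> np.ndarray:
    """Gather neighbor features for one minibatch and mean-pool them."""
    return features[node_ids].mean(axis=0)

batch_ids = rng.integers(0, NUM_NODES, size=4096)  # scattered row IDs
print(aggregate(batch_ids).shape)  # (256,)
```

Once the feature table spills out of DRAM onto SSDs, that random-row gather becomes a storage workload, which is exactly the step the BaM results above accelerate.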
The H100 is a game-changer for generative AI, and NVIDIA reportedly has a significant order backlog, so good luck getting one. The H200 has been announced and promises even greater performance improvements.3 It's the first GPU to use HBM3E, and notably, Micron is a supplier.4 But don’t forget that NVIDIA is also building AI servers, branded as DGX. These servers deliver massive computing power, with the DGX H100 delivering an eye-popping 32 petaflops! (More on that math after this paragraph.) If you don’t already know, NVIDIA hosts an annual conference called the GPU Technology Conference, or GTC for short. The next one is in a couple of weeks, and I’m sure a long list of next-generation developments will be announced. Be sure to watch Jensen Huang during his keynote on Monday, March 18, 1–3 p.m. PT.
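As promised, the 32-petaflop figure pencils out from public per-GPU specs. A quick back-of-envelope check, assuming NVIDIA’s published FP8 peak with sparsity for the H100 SXM and eight GPUs per DGX H100:

```python
# Back-of-envelope check of the DGX H100 number (my assumptions: FP8 peak
# with sparsity, 3,958 TFLOPS per H100 SXM, and 8 GPUs per system).
gpus = 8
tflops_per_gpu = 3_958
print(f"{gpus * tflops_per_gpu / 1_000:.1f} petaflops")  # ~31.7, i.e., the quoted "32 petaflops"
```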
Intel: Intel is, according to CEO Pat Gelsinger, on a “mission to bring AI everywhere” and is going through a significant transformation to do just that. The company has reportedly signed a packaging deal with NVIDIA,5 is offering foundry services to external parties,6 and is streamlining operations to focus on its most important markets. Intel is an established leader in data center processors, offering a range of products that support AI workloads, such as the Xeon Scalable processors. The company recently released its fifth-generation Xeon Emerald Rapids processors only one year after the previous generation. Tickety-tick-tock, if you know what I mean? If you don’t, I mean that’s a fast transition from one processor generation to the next, and it’s also a play on Intel’s famous tick-tock strategy. 😊 Intel tells us that, for Emerald Rapids, every core has AI acceleration built in, helping the processors deliver up to 42% faster image segmentation and up to 24% higher performance on image classification, a marked improvement for AI inference.7
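The per-core acceleration Intel is referring to is its Advanced Matrix Extensions (AMX). Most AI software doesn’t program AMX directly; frameworks dispatch to it. As a rough illustration (my model choice, not Intel’s benchmark setup), here’s a common way to run bfloat16 CPU inference in PyTorch, which lets the oneDNN backend route matmuls and convolutions to AMX on Xeons that support it:

```python
# Illustrative bfloat16 CPU inference; on 4th/5th Gen Xeon, PyTorch's oneDNN
# backend can route these ops to AMX tiles. Not Intel's benchmark setup.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()  # untrained weights; shapes are what matter here
x = torch.randn(1, 3, 224, 224)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = model(x)

print(logits.shape)  # torch.Size([1, 1000])
```

The same script runs unchanged on CPUs without AMX; it simply falls back to slower instruction paths.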
What’s next? Intel has already spoken publicly about Granite Rapids, which promises 2.9 times better DeepMD+LAMMPS AI inference performance.8
Intel also participates in the accelerator and GPU segment, though with only a small share today. In 2024, the company plans to launch its next-generation Gaudi®3 AI accelerator,8 the offspring of its 2019 acquisition of Habana Labs. It's expected to compete with NVIDIA’s H100 and AMD’s MI300X. After that, Intel plans to converge its accelerator and GPU lineups next year with a product code-named Falcon Shores.8 Publicly available details are sparse, but next year promises to be an interesting one in the accelerator/GPU market segment.
AMD: AMD is another leader (some might say THE other leader) in data center processors, offering EPYC processors that deliver high performance, scalability, and security for AI applications. In its own testing, AMD shows significant generation-to-generation gains in simulated AI workloads. In this blog,9 Raghu Nambiar, corporate vice president of AMD’s Datacenter Ecosystems and Solutions, provides several insights comparing 4th Gen and 3rd Gen EPYC. ResNet-50 results? Over three times the improvement. BERT-Large? Over four times. YOLOv5? Right, the real question is what that acronym means: you only look once. In that case, over 1.7 times the improvement. The EPYC family is optimized for a wide range of workloads and excels not only in general-purpose computing but also in AI inference. Little is known publicly about AMD’s AI roadmap, but CEO Lisa Su says, “We are very excited about our opportunity in AI. This is our No. 1 strategic priority, and we are engaging deeply across our customer set to bring joint solutions to the market.”10
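A quick aside on what generational numbers like those ResNet-50 and YOLOv5 results actually measure: at bottom, they’re throughput, such as images or sequences processed per second on a fixed model. Here’s a deliberately naive sketch of such a measurement; it is not AMD’s methodology (vendor benchmarks carefully control threads, NUMA, batch size, precision and more):

```python
# Naive CPU inference throughput measurement (illustrative only; vendor
# benchmarks control for threads, NUMA, batch size, precision, etc.).
import time
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
batch = torch.randn(8, 3, 224, 224)

with torch.no_grad():
    for _ in range(3):          # warm up caches and thread pools
        model(batch)
    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"{iters * batch.shape[0] / elapsed:.1f} images/sec")
```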
For AI training, servers equipped with AMD Instinct accelerators speed up the process, enabling efficient model parameter optimization. On the accelerator front, AMD took the gloves off in its GPU rivalry with NVIDIA by directly comparing the recently announced AMD Instinct™ MI300 series to NVIDIA’s H100. Peak performance? 1.3 times better teraflop performance for AI!11 In today’s data centers, it’s not enough to just be fast. You must be mindful of energy efficiency as well, since power is expensive. Notably, systems built on AMD EPYC processors and/or AMD Instinct accelerators currently power eight of the top 10 most energy-efficient supercomputers globally.12
Ampere: Ampere is a newcomer in data center processors, billing its chips as the first cloud-native processors: designed for AI and cloud workloads and built on Arm-based technology. Its mainstream Ampere Altra processors feature up to 128 Arm-based cores and offer high performance, power efficiency, and scalability for data center applications. In one example, Ampere shows that its Altra Max delivers 166% higher performance than Intel Ice Lake and AMD Milan for computer vision workloads.13 In another test, for natural language processing (NLP), the Altra Max exhibits a 73% improvement over Ice Lake and a 56% improvement over Milan.14
In May 2023, Ampere introduced its AmpereOne processors for cloud data centers, delivering an industry-leading 192 cores.15 The company also claims, “AmpereOne platforms are well suited to a variety of system configurations delivering the highest performance for large capacity storage, networking, AI inference, and the newest generative AI models and applications.”15 With the surge in AI demand, the company believes traditional GPUs can be overkill, especially for inference, consuming excessive power and money.16 As a result, it offers a wide portfolio of power-efficient solutions. Ampere is not a player in the accelerator/GPU market.
Are there others in the game? You betcha! Many hyperscalers, for example, are custom-building their own processors and accelerators specifically for their workloads. This customization presumably gives them an advantage over others in the cloud market.
Collaboration is key!
We collaborate with these partners to make better products and to ensure they perform optimally together. If you’re wondering how one silicon or accelerator provider can show amazing results while another shows a different set of impressive results, you’re not alone. Each vendor makes different design decisions, and those choices affect the results you get when specific workloads run on specific equipment.
So, like my dad used to say, “You need the right tool for the job.” We do extensive testing on these workloads to help you choose the right tool for the job, and we publish many of the results on Micron’s data center storage insights webpage.
Also, if you haven’t seen it already, we have a webpage dedicated to our work with ecosystem partners. That page shows how, through rigorous testing standards, we interoperate with them as a storage provider. Our partners are transforming the technology landscape with their breakthroughs in AI, and we look forward to seeing how their products influence the industry and our world. It’s going to be an exciting journey!
Check back soon as I plan to write about other developments in our ever-evolving ecosystem.
1 An order-of-magnitude leap for accelerated computing
2 Micron 9400 NVMe SSDs explore Big Accelerator Memory using NVIDIA technology
3 NVIDIA supercharges Hopper, the world’s leading AI computing platform
4 Micron commences volume production of industry-leading HBM3E solution to accelerate the growth of AI
5 NVIDIA advanced packaging chain Intel was listed in the TSMC order
6 Delivering a systems foundry for the AI era
7 New 5th Gen Intel Xeon processors are built with AI acceleration in every core
8 Intel advances scientific research and performance for new wave of supercomputers
9 4th Gen AMD EPYC™ processors deliver exceptional performance for AI workloads
10 AMD says AI is its No.1 strategic priority with Instinct MI300 leading the charge later this year
11 AMD Instinct™ MI300 series accelerators
13 Ampere AI efficiency: Efficient AI computer vision (CV) workloads
14 Ampere AI efficiency: Natural language processing