It seems like AMD is trying to raise market expectations for its upcoming MI325X processors. These chips are designed specifically for data centers and can handle intensive AI workloads. According to a new report from Bloomberg, AMD CEO Lisa Su confidently claimed that these chips will outperform Nvidia's popular H100 processors.
One of the standout features of the MI325X is its 256GB of HBM3E memory, a significant jump from the 192GB of HBM3 on its predecessor, the MI300X. The HBM3E also delivers an impressive 6 terabytes per second (TB/s) of bandwidth, which is important for processing the large data sets and complex calculations involved in AI workloads.
Nvidia’s H100 processor was released in 2022, so it “only” uses HBM3 memory rather than the newer HBM3E, topping out at around 3TB/s of bandwidth. However, “Team Green’s” upcoming processors based on the Blackwell architecture are also expected to use 288GB of HBM3E memory, with a maximum bandwidth of around 13.8TB/s.

Demand for advanced SoCs has been on the rise in recent times, as major tech companies around the world look to train new and more advanced AI models. Last week, Nvidia and Foxconn announced a joint project to build the world's largest facility to produce the GB200 superchip based on the Blackwell architecture.
Nvidia has already started shipping samples of these chips to partners and expects them to generate billions of dollars in revenue by the end of the year. This is also why the company's stock has skyrocketed, making Nvidia the second most valuable company in the world, behind only Apple.
On the other side, AMD is also trying to capture a piece of this growing market. “Team Red” has not announced an availability date for the new MI325X processors, but mass production is expected to begin in late 2024. AMD says the AI chip market could reach $500 billion by 2028.