AMD Introduces ROCm 6.0 and Unveils the MI300X, MI300A

AMD has provided more details on its Instinct MI300 series at its AI event, which includes a data center APU and a CDNA3 discrete GPU accelerator. The company also announced ROCm 6.0, aimed at advancing its AI software capabilities.
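Much of ROCm's practical appeal is that mainstream frameworks target it directly; ROCm builds of PyTorch, for example, expose AMD GPUs through the familiar torch.cuda interface, so CUDA-targeted code typically runs unmodified. The snippet below is a minimal sketch of that workflow, assuming a ROCm-enabled PyTorch installation on a machine with at least one Instinct accelerator; it is illustrative and not specific to ROCm 6.0.

```python
# Minimal sketch: enumerating AMD GPUs through a ROCm-enabled PyTorch build.
# ROCm builds of PyTorch reuse the torch.cuda namespace, so code written for
# NVIDIA hardware generally works as-is on Instinct accelerators.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"Device {i}: {torch.cuda.get_device_name(i)}")

    # Run a small matrix multiply on the first accelerator as a smoke test.
    x = torch.randn(4096, 4096, device="cuda:0")
    y = x @ x
    print("Result resides on:", y.device)
else:
    print("No ROCm/CUDA-capable device detected.")
```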

The AMD Instinct MI300X accelerator is positioned as a strong alternative to NVIDIA for AI acceleration. It carries 192GB of HBM3, more than double the 80GB of the H100 SXM, and AMD's peak theoretical throughput figures put it ahead of NVIDIA's part. Total board power for the MI300X is 750 Watts, and up to eight MI300X accelerators can be combined into a single server, providing 1.5TB of HBM3 memory and, by AMD's numbers, roughly 1.3 times the compute of an NVIDIA H100 HGX system.
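The aggregate figure follows directly from the per-card capacity: eight accelerators at 192GB each is 1,536GB, or roughly 1.5TB. As a rough sketch, the same total can be read back at runtime from the driver, again assuming a ROCm-enabled PyTorch build on the node in question:

```python
# Sketch: summing reported device memory across all visible accelerators,
# e.g. to sanity-check the 8 x 192GB = ~1.5TB figure for an MI300X node.
# The per-device values are whatever the driver reports, which may be
# slightly below the headline capacity.
import torch

total_bytes = sum(
    torch.cuda.get_device_properties(i).total_memory
    for i in range(torch.cuda.device_count())
)
print(f"Aggregate device memory: {total_bytes / 1e12:.2f} TB")
```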

The figures AMD provided stack up well against NVIDIA's H100, although no independent testing had been conducted prior to the announcement.

The Instinct MI300A, an APU accelerator for AI and HPC, is also intriguing. It combines Zen 4 CPU cores, AMD CDNA3 graphics, and 128GB of unified HBM3 memory, a combination with significant potential in the data center as an alternative to Intel's Xeon Max and its planned Falcon Shores APU.

The MI300A is equipped with 24 Zen 4 CPU cores, 256MB of AMD Infinity Cache, eight HBM3 stacks delivering approximately 5.3TB/s of memory bandwidth, and 228 AMD CDNA3 compute units. AMD's own benchmarks show strong HPC and AI performance for ROCm workloads, leveraging the memory bandwidth and the pairing of CDNA3 compute units with Zen 4 CPU cores.
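The bandwidth number is consistent with a simple back-of-the-envelope calculation. Assuming each HBM3 stack presents a 1024-bit interface running at 5.2 Gbps per pin (figures AMD has not broken out in this announcement, so they are an assumption here), eight stacks land at about 5.3TB/s:

```python
# Back-of-the-envelope check of the ~5.3 TB/s figure.
# Assumption: 1024-bit interface per HBM3 stack at 5.2 Gbps per pin;
# these are not AMD-confirmed specifications from the announcement.
stacks = 8
bus_width_bits = 1024          # per HBM3 stack
pin_speed_gbps = 5.2           # assumed data rate per pin

per_stack_gb_s = bus_width_bits * pin_speed_gbps / 8   # bits -> bytes
total_tb_s = stacks * per_stack_gb_s / 1000
print(f"Per stack: {per_stack_gb_s:.1f} GB/s, total: {total_tb_s:.2f} TB/s")
# -> Per stack: 665.6 GB/s, total: 5.32 TB/s
```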

Overall, the AMD Instinct MI300 series and ROCm 6.0 software represent notable advancements in AI acceleration and data center capabilities. With these specifications, AMD aims to provide strong competition to NVIDIA in the AI market.

Source: Phoronix.