SK Hynix has presented its HBM4 implementation to the public at TSMC's North America Technology Symposium, alongside several other memory products.
SK Hynix's HBM4 technology can now stack up to 16 layers; mass production is planned for H2 2025
When it comes to HBM manufacturers on the market, SK Hynix appears to be ahead of all the others, particularly with its HBM4 technology. The company has reportedly already prepared a commercial version of the process, while competitors such as Micron and Samsung are still in the sampling phase, which shows that, at least for now, SK Hynix is winning the race. At TSMC's North America Technology Symposium, the company showcased what it calls its "AI memory" leadership by revealing several new products that had been discussed previously.
First and foremost, SK Hynix gave the public an overview of its HBM4 process, along with a glimpse of the specifications. HBM4 offers capacities of up to 48 GB, 2.0 TB/s of bandwidth, and an I/O speed rated at 8.0 Gbps. SK Hynix announced that it is targeting mass production in H2 2025, which means the process could see integration into products by the end of this year, which is remarkable. It is worth noting that the Korean giant is the only company to have presented HBM4 to the public.
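As a quick sanity check, the quoted bandwidth lines up with the per-pin I/O speed multiplied by the stack's interface width. The sketch below assumes a 2048-bit per-stack interface for HBM4, a figure not stated here, so treat it as illustrative arithmetic rather than a confirmed spec.

```python
# Illustrative only: per-stack HBM4 bandwidth from per-pin speed and bus width.
# The 2048-bit interface width is an assumption, not a figure from SK Hynix.

io_speed_gbps = 8.0       # per-pin I/O speed quoted for HBM4
bus_width_bits = 2048     # assumed per-stack interface width

bandwidth_gbs = io_speed_gbps * bus_width_bits / 8  # bits -> bytes
print(f"Per-stack bandwidth: ~{bandwidth_gbs / 1000:.2f} TB/s")  # ~2.05 TB/s
```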

Alongside HBM4, we saw SK Hynix's 16-layer implementation of HBM3E, also a first of its kind, offering 1.2 TB/s of bandwidth and much more. This particular standard is expected to be integrated into NVIDIA's "Ultra" clusters, while NVIDIA plans to move to HBM4 with Vera Rubin. Interestingly, SK Hynix says it managed to connect this many layers through MR-MUF and advanced TSV processes, and the company is likely the pioneer of both technologies.
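For a sense of how stack height translates into capacity, the sketch below multiplies layer count by an assumed per-die density; the 24 Gb die figure is an assumption rather than something SK Hynix has confirmed here, but it shows how a 16-high stack lands in the 48 GB range quoted above.

```python
# Illustrative only: capacity of a 16-high HBM stack, assuming 24 Gb dies.
# Neither the die density nor the exact configuration is confirmed in the article.

layers = 16               # 16-Hi stack, as shown by SK Hynix
die_density_gbit = 24     # assumed per-layer DRAM die density (Gb)

capacity_gb = layers * die_density_gbit / 8   # gigabits -> gigabytes
print(f"Approximate stack capacity: {capacity_gb:.0f} GB")  # -> 48 GB
```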

In addition to HBM, SK Hynix also presented its range of server memory modules, including RDIMM and MRDIMM products. Its high-performance server modules are now built on the new 1c DRAM node, which pushes speeds up to 12.8 Gbps, which is simply astonishing.
In particular, SK Hynix presented a range of modules designed to improve AI and data-center performance while reducing energy consumption. These include the MRDIMM lineup with a speed of 12.8 gigabits per second (Gbps) and capacities of 64 GB, 96 GB, and 256 GB; RDIMM modules at 8 Gbps in 64 GB and 96 GB capacities; and a 256 GB 3DS RDIMM.
– SK Hynix
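To put those transfer rates in perspective, a 12.8 Gbps MRDIMM works out to roughly 100 GB/s of peak bandwidth per module, assuming a standard 64-bit DDR5 data path (an assumption, since the quote does not specify the module width). A minimal sketch:

```python
# Illustrative only: peak module bandwidth from the quoted MRDIMM data rate.
# The 64-bit (8-byte) data path is an assumed standard DDR5 width; ECC is ignored.

data_rate_gtps = 12.8     # MRDIMM transfer rate quoted by SK Hynix
data_width_bytes = 8      # assumed 64-bit data path per module

bandwidth_gbs = data_rate_gtps * data_width_bytes
print(f"Peak module bandwidth: ~{bandwidth_gbs:.1f} GB/s")  # ~102.4 GB/s
```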
There is no doubt that SK Hynix currently holds an advantage in the HBM and DRAM markets, outpacing longtime players like Samsung largely through sustained innovation and its partnership with NVIDIA.