SK hynix has introduced its most advanced high-performance memory technology to date — a 12-layer HBM4 (High Bandwidth Memory 4) stack boasting an unprecedented 2 terabytes per second (TB/s) of data throughput. The breakthrough was revealed at the Dell Technologies Forum 2025, marking a new milestone in memory innovation for artificial intelligence (AI) and high-performance computing (HPC).
Next-Level Memory for the AI Era
The newly unveiled HBM4 represents a significant leap in performance, efficiency, and data scalability, addressing one of the biggest bottlenecks in AI system design — memory bandwidth. With AI models growing exponentially in size and complexity, the need for faster, more energy-efficient data handling has become critical. SK hynix’s HBM4 architecture is purpose-built to meet that demand.
According to the company, the 12-layer design integrates ultra-dense DRAM modules connected through fine-pitched through-silicon vias (TSVs), enabling faster and more efficient data transfers. This architecture not only boosts speed but also reduces power consumption, a key consideration as AI workloads continue to consume massive energy resources.
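The quoted 2 TB/s figure can be sanity-checked with simple bandwidth arithmetic. The sketch below assumes a 2048-bit interface and an 8 Gb/s per-pin data rate, in line with the publicly discussed JEDEC HBM4 direction; neither number is confirmed in this article.

```python
# Illustrative only: peak HBM stack bandwidth from interface width and
# per-pin data rate. The 2048-bit width and 8 Gb/s pin rate are assumed
# values, not figures confirmed by SK hynix in this announcement.

def peak_bandwidth_tbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Return peak bandwidth in terabytes per second (decimal TB)."""
    bits_per_second = bus_width_bits * pin_rate_gbps * 1e9
    return bits_per_second / 8 / 1e12  # bits -> bytes -> TB

print(peak_bandwidth_tbps(2048, 8.0))  # 2.048
```

Under these assumptions, a single stack lands at roughly 2 TB/s, matching the headline figure.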
Engineered for AI, Data Centers, and HPC
The HBM4 memory targets next-generation AI accelerators, cloud data centers, and high-performance computing systems. It offers scalable configurations that can adapt to a range of applications — from large language models (LLMs) and real-time analytics to autonomous systems and high-fidelity simulations.
Industry analysts attending the event noted that the announcement solidifies SK hynix’s position as a global leader in memory innovation, directly competing with other major players such as Samsung Electronics and Micron Technology in the race to develop high-performance, AI-optimized memory solutions.
Performance Meets Energy Efficiency
Executives from SK hynix emphasized that the HBM4’s development focused on optimizing performance per watt — a key metric as the AI industry faces growing scrutiny over energy use. The company confirmed that its future roadmap includes further advancements in stacking technology and process miniaturization to push data rates even higher in upcoming generations.
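To see why performance per watt matters at these bandwidths, a back-of-envelope estimate helps: memory I/O power scales with bandwidth times energy per bit transferred. The 4 pJ/bit figure below is a hypothetical placeholder for illustration, not a number from SK hynix or this article.

```python
# Back-of-envelope sketch: memory I/O power from sustained bandwidth and
# energy per bit. The 4 pJ/bit value is a hypothetical placeholder, not
# a figure disclosed by SK hynix.

def io_power_watts(bandwidth_tbps: float, energy_pj_per_bit: float) -> float:
    """Return I/O power in watts for a given bandwidth and pJ/bit cost."""
    bits_per_second = bandwidth_tbps * 1e12 * 8   # TB/s -> bits/s
    return bits_per_second * energy_pj_per_bit * 1e-12  # pJ -> J per bit

print(io_power_watts(2.0, 4.0))  # 64.0
```

At 2 TB/s, even a single picojoule shaved off each bit saves about 16 W per stack, which is why per-bit transfer energy dominates the memory roadmap conversation.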
“We’re entering an era where computing power depends as much on memory innovation as on processing capability,” an SK hynix spokesperson said. “HBM4 will be a cornerstone for powering the next wave of AI-driven transformation.”
Shaping the Future of AI Infrastructure
The unveiling arrives at a time when global semiconductor demand is surging, driven by generative AI, autonomous systems, and data-intensive workloads. As chipmakers such as Intel, AMD, and Nvidia prepare next-generation processors, SK hynix’s HBM4 memory is expected to serve as a crucial enabler of performance breakthroughs across industries.
With its industry-leading bandwidth and energy efficiency, SK hynix’s 12-layer HBM4 stands as a testament to the accelerating pace of memory innovation — one that will define the future of AI and high-performance computing for years to come.