February 2026 — SK hynix has announced a $15 billion investment aimed at expanding its next-generation memory manufacturing capacity, with a strong focus on High-Bandwidth Memory (HBM) and advanced DRAM technologies to support AI and high-performance computing demand.
The investment reinforces SK hynix’s position as a leading supplier of HBM used in AI accelerators and data center GPUs.
A significant portion of the $15B is expected to go toward HBM capacity expansion and advanced DRAM process development.
HBM demand is currently driven by AI accelerators from companies such as NVIDIA and AMD.
Each AI GPU can integrate 6–12 HBM stacks, dramatically increasing memory content per system compared to traditional server DRAM.
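The memory-content claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes 24 GB per HBM3E stack, a common shipping configuration; the per-stack capacity is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope: total HBM capacity per AI accelerator.
# Assumption (illustrative, not from the article): 24 GB per HBM3E stack.
GB_PER_STACK = 24

def memory_per_gpu(stacks: int, gb_per_stack: int = GB_PER_STACK) -> int:
    """Total HBM capacity (GB) for a given number of stacks."""
    return stacks * gb_per_stack

low = memory_per_gpu(6)    # low end of the 6-12 stack range
high = memory_per_gpu(12)  # high end
print(f"6 stacks:  {low} GB")   # 144 GB
print(f"12 stacks: {high} GB")  # 288 GB
```

Even at the low end, that is far more memory per processor than a conventional server DRAM configuration attaches to a single CPU socket.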
The investment also supports research and development for next-generation memory technologies.
AI workloads are pushing DRAM bandwidth and density to new limits. HBM3E currently delivers roughly 1.2 TB/s of bandwidth per stack.
Future HBM4 is expected to exceed 2 TB/s per stack, requiring tighter integration with advanced packaging ecosystems.
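The per-stack figures translate into large aggregate numbers at the accelerator level. The quick sketch below assumes 8 stacks per GPU and the per-stack bandwidths cited above; the stack count is an illustrative assumption, not from the article:

```python
# Rough aggregate memory bandwidth per accelerator.
# Assumptions (illustrative): 8 HBM stacks per GPU,
# ~1.2 TB/s per HBM3E stack, ~2.0 TB/s per HBM4 stack.
STACKS = 8

def aggregate_tbps(per_stack_tbps: float, stacks: int = STACKS) -> float:
    """Total bandwidth (TB/s) across all stacks on one package."""
    return per_stack_tbps * stacks

hbm3e = aggregate_tbps(1.2)
hbm4 = aggregate_tbps(2.0)
print(f"HBM3E: {hbm3e:.1f} TB/s -> HBM4: {hbm4:.1f} TB/s")
```

Under these assumptions, a single HBM4-equipped accelerator would move on the order of 16 TB/s, which is why packaging and interposer design become the binding constraints.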
HBM production is tightly linked with advanced packaging technologies such as through-silicon via (TSV) stacking and 2.5D interposer integration.
Packaging bottlenecks remain one of the primary constraints in AI hardware scaling. By expanding HBM output, the investment also helps stabilize the supply chain around advanced packaging demand.
Unlike the commodity DRAM cycles of the past, HBM carries higher margins, customer-specific qualification, and contract-based demand.
SK hynix has reportedly secured multi-year HBM supply agreements with major AI GPU vendors, strengthening its revenue visibility.
Key competitors include Samsung Electronics and Micron Technology.
However, SK hynix currently holds a leading share in HBM supply for AI accelerators, giving it a structural advantage during this AI infrastructure expansion cycle.
SK hynix’s $15B investment underscores a structural shift in the memory industry: AI is redefining demand patterns, margins, and technology roadmaps.
HBM is no longer a niche product; it is now the backbone of AI accelerators. Memory suppliers that secure early hyperscaler alignment stand to capture disproportionate value in the AI compute cycle.