Everyone’s talking about Nvidia’s new chips. But there’s a quieter, arguably more important war happening underneath all the GTC 2026 keynote spectacle — and it’s being fought by two Korean companies most people can’t tell apart.
Samsung Electronics and SK hynix are locked in an increasingly fierce battle to supply the memory chips that make AI possible. Without high-bandwidth memory (HBM), Nvidia’s fancy GPUs are just expensive paperweights. And at GTC 2026 this week, both companies showed up swinging.
The Fuel Line Problem Nobody Talks About
High-bandwidth memory is the unsung hero of the AI revolution. While GPUs get all the headlines, HBM sits right next to the processor, feeding it data at extraordinary speeds. The GPU is the engine, but HBM is the fuel line. A bigger engine doesn’t help if you can’t pump fuel fast enough.
Every time you use ChatGPT, generate an image, or interact with any AI system, HBM chips are shuttling massive amounts of data to the processors doing the actual computation. As models get larger and inference workloads explode — Nvidia just projected a $1 trillion AI chip market through 2027 — demand for faster, higher-capacity HBM is skyrocketing.
The HBM market was valued at roughly $5.6 billion in 2024. Analysts now project it could hit anywhere from $70 billion to over $200 billion within the next decade. That’s not incremental growth. That’s an industry being completely remade.
Samsung Swings for the Fences With HBM4E
The centerpiece of Samsung’s GTC showing was the first-ever physical display of its seventh-generation HBM4E chip. The numbers are genuinely impressive: 16 gigabits per second per pin and 4.0 terabytes per second of total bandwidth. That’s a significant jump from current HBM4, which tops out around 13 Gbps per pin and 3.3 TB/s.
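Those headline numbers aren’t independent: total bandwidth falls out of per-pin speed times interface width. A quick back-of-the-envelope check, assuming the 2,048-bit stack interface defined by the HBM4 standard (a figure the article itself doesn’t state):

```python
# Sanity-check the quoted HBM bandwidth figures.
# Assumes a 2,048-bit (i.e., 2,048-pin) stack interface per the HBM4 standard.

def hbm_bandwidth_tbps(gbps_per_pin: float, interface_bits: int = 2048) -> float:
    """Total stack bandwidth in TB/s: per-pin Gb/s times pin count,
    divided by 8 bits per byte and 1,000 GB per TB."""
    return gbps_per_pin * interface_bits / 8 / 1000

print(hbm_bandwidth_tbps(16.0))   # 4.096  -> Samsung's "4.0 TB/s" HBM4E figure
print(hbm_bandwidth_tbps(13.0))   # 3.328  -> the "~3.3 TB/s" current-HBM4 ceiling
print(hbm_bandwidth_tbps(11.7))   # 2.9952 -> Samsung's shipping HBM4
```

The arithmetic lines up with all three figures quoted in this piece, which is a useful reminder that per-pin speed is the real lever each vendor is competing on.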
Samsung is already mass-producing sixth-generation HBM4, designed specifically for Nvidia’s upcoming Vera Rubin platform. Its HBM4 delivers 11.7 Gbps per pin, well above the 8 Gbps industry standard, and is built on a cutting-edge 10-nanometer-class DRAM process.
The most interesting technical detail might be Samsung’s hybrid copper bonding technology. This method enables stacking 16 or more HBM layers while reducing thermal resistance by over 20% compared with traditional thermocompression bonding. Heat management is one of the biggest engineering challenges in AI data centers, so this could prove to be a genuine differentiator.
Samsung also scored a win on the foundry side. Jensen Huang personally thanked Samsung during his keynote for manufacturing the Groq 3 LPU — the inference chip Nvidia acquired with its $17 billion Groq purchase in December. “I want to thank Samsung, who manufactures the Groq 3 LPU chip for us, and they’re cranking as hard as they can,” Huang said from the stage.
That’s not just politeness. It’s a signal that Samsung’s foundry business is becoming more tightly woven into Nvidia’s ecosystem.
SK hynix: The Incumbent With Two-Thirds of the Pie
Here’s what Samsung doesn’t want you to focus on: SK hynix currently holds roughly two-thirds of Nvidia’s 2026 HBM4 allocation for the Vera Rubin platform. That’s a commanding lead.
SK hynix brought serious executive firepower to GTC — both SK Group Chairman Chey Tae-won and SK hynix CEO Kwak Noh-jung attended. They showcased HBM4 and fifth-generation HBM3E products already shipping in Nvidia’s current GPU lineup, plus an enterprise SSD with liquid-cooling technology developed in collaboration with Nvidia.
The company also displayed SOCAMM2 memory modules and LPDDR5X memory integrated into Nvidia’s DGX Spark AI supercomputers. The message was clear: we’re not just a component supplier, we’re an architecture partner.
What makes SK hynix’s position particularly strong is the depth of its Nvidia relationship. Years of close engineering collaboration have created institutional knowledge and trust that’s hard to replicate, no matter how good your specs look on paper.
The Qualification Game
Neither company has completed Nvidia’s final qualification process for HBM4 products ahead of full Vera Rubin production. SK hynix has delivered final samples for verification. Samsung began shipping its first HBM4 for AI systems roughly four weeks before GTC.
Nvidia qualification is notoriously rigorous. It’s not enough to hit bandwidth numbers in a lab. Memory chips need to perform reliably across millions of hours, at specific power envelopes, at precise temperatures, with consistent yields. A chip that’s 5% faster but fails 1% more often is useless in a data center running 24/7.
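To make the scale of that reliability bar concrete, here’s an illustrative sketch. The fleet size and failure rates below are assumptions chosen for the arithmetic, not figures from any vendor:

```python
# Illustrative only: fleet size and failure rates are assumed, not reported.
FLEET = 100_000              # accelerators in a hypothetical large AI data center
HOURS_PER_YEAR = 24 * 365    # running 24/7

# Lab qualification covers thousands of hours per sample; a deployed fleet
# accumulates orders of magnitude more, surfacing rare failure modes.
device_hours_per_year = FLEET * HOURS_PER_YEAR   # 876,000,000

# "Fails 1% more often": at fleet scale, that is a thousand extra dead units a year,
# each one a node pulled offline for repair and job recovery.
extra_annual_failure_rate = 0.01
extra_failures_per_year = int(FLEET * extra_annual_failure_rate)

print(f"{device_hours_per_year:,} device-hours/yr; "
      f"{extra_failures_per_year:,} extra failures/yr")
```

That fleet-scale exposure is why hitting a bandwidth number in a lab demo and surviving Nvidia’s qualification process are two very different achievements.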
Samsung has pledged to triple its HBM production capacity — an aggressive bet that signals confidence but also reveals the scale of the gap it needs to close.
Micron: The American Dark Horse
While the Korean giants dominate headlines, Micron Technology is quietly building its own AI memory position. Micron’s stock is up 325% year-over-year as of March 2026, driven by sold-out HBM3E production.
Micron doesn’t have the market share of its Korean rivals, but it benefits from something they don’t: U.S. government support through the CHIPS Act. As geopolitical tensions shape supply chain decisions, having a qualified American HBM supplier becomes strategically valuable.
SK Group’s chairman warned at GTC that wafer shortages will persist through 2030 as AI demand overwhelms supply. All three companies are reallocating cleanroom capacity from conventional memory to AI-specific products, with 2026 DRAM and NAND supply growth projected at just 16-17% year-on-year — well below historical norms.
Why This War Matters to Everyone
The ripple effects extend far beyond semiconductor earnings calls.
AI progress could slow down. If HBM production can’t keep pace with GPU production, it doesn’t matter how many Vera Rubin chips Nvidia designs; it can’t ship complete systems without memory. The wafer shortage projected through 2030 means this isn’t a short-term hiccup.
AI service prices are at stake. Memory accounts for a growing share of total AI server costs. A competitive, well-supplied HBM market keeps prices in check. A constrained one means higher costs passed to everyone paying for cloud AI services.
Geopolitics is a live wire. South Korea’s dominance in HBM manufacturing makes it a critical node in global AI supply chains — right alongside Taiwan’s TSMC for chip fabrication. Any disruption to Korean manufacturing would have immediate consequences for AI infrastructure worldwide.
From Components to Architecture
The most significant shift at GTC 2026 isn’t any single chip announcement — it’s the blurring line between memory companies and system architects.
Samsung pitched itself as “the industry’s only semiconductor company offering a total AI solution spanning memory, logic, foundry, and advanced packaging.” SK hynix positioned memory as a “core element that determines the architecture and performance of the entire AI infrastructure.”
The next generation of AI systems won’t be designed by GPU companies alone. They’ll be co-designed with memory companies, the entire stack optimized together. The winners of the HBM war won’t just be the companies that ship the most chips — they’ll be the ones most deeply embedded in how AI hardware gets designed from the ground up.
Jensen Huang sees a $1 trillion market ahead. Samsung and SK hynix are fighting to build the memory backbone of that future. The winner shapes not just semiconductor earnings reports, but the speed at which AI transforms everything it touches.
Sources: Samsung Global Newsroom, WinBuzzer, The Korea Herald, Reuters, SK hynix Newsroom