
Meta and Broadcom's 2nm MTIA Deal: Zuckerberg Just Declared War on Nvidia's Inference Empire

If you’ve been waiting for the AI hardware story to stop being “Nvidia, Nvidia, and also Nvidia,” circle April 14, 2026. That’s when Meta and Broadcom went public with an expanded partnership to co-design multiple generations of Meta’s MTIA accelerators through 2029, anchored by a 1-gigawatt initial deployment and a path to multiple gigawatts after that. The headline spec: the first AI silicon built on TSMC’s 2nm process. This isn’t a routine vendor press release. It’s the loudest signal yet that the era of hyperscalers handing blank checks to Jensen is ending. ...

April 15, 2026 · 5 min · DBBS Tech

Japan Just Bet $16 Billion on a Chip Startup Nobody Thought Could Win

There’s a factory rising in the snow-covered plains of Hokkaido, Japan, and the government just bet another $4 billion that it can change the future of AI. On April 11, 2026, Japan’s Ministry of Economy, Trade and Industry (METI) approved ¥631.5 billion ($4 billion) in fresh subsidies for Rapidus — a semiconductor startup that most of the industry has politely called “ambitious” and privately called “impossible.” The new infusion brings total government backing to a staggering ¥2.6 trillion ($16.3 billion), making Rapidus one of the most heavily state-funded chip ventures in history. ...

April 12, 2026 · 6 min · DBBS Tech

DeepSeek V4 Will Run Entirely on Huawei Chips — And That Changes Everything

The Nvidia Era Is Over — At Least in China

DeepSeek’s next flagship model, V4, will run exclusively on Huawei Ascend chips. Not as a backup. Not as a proof of concept. As the entire inference stack. That’s not a press release talking point. That’s a tectonic shift. For years, China’s AI labs quietly depended on Nvidia silicon — H100s, A100s, whatever they could get their hands on through official channels or creative workarounds. That dependency is now ending, and it’s ending fast. ...

April 4, 2026 · 3 min · DBBS Tech

Huawei's Ascend 950PR Cracks Nvidia's CUDA Moat — and China's Tech Giants Are Lining Up

Nvidia’s deepest moat was never the silicon. It was CUDA — the software ecosystem that made every AI developer on Earth, including China’s, completely dependent on Nvidia’s way of doing things. You could build a faster chip, but if developers had to rewrite their entire codebase to use it? Dead on arrival. Huawei just found the side door. The Ascend 950PR, paired with Huawei’s overhauled CANN Next software stack, has reportedly won over ByteDance and Alibaba — two of China’s largest AI consumers. After years of Beijing practically begging its tech giants to go domestic, Huawei may have finally built a chip they actually want to use. ...

March 28, 2026 · 5 min · DBBS Tech

Alibaba's XuanTie C950: A RISC-V Chip Built for the AI Agent Era

Everyone’s fighting over GPUs. Alibaba just changed the question. On Tuesday, Alibaba’s DAMO Academy unveiled the XuanTie C950 — a 5-nanometer server processor built on open-source RISC-V architecture. It’s the highest-performing RISC-V CPU ever made. But the interesting part isn’t the benchmarks. It’s the thesis behind the chip: that AI agents need fundamentally different silicon than AI chatbots. While Nvidia, AMD, and Intel wage war over who can build the biggest parallel processor for training models, Alibaba is making a deliberate bet on what comes after training. And the logic is harder to dismiss than you’d think. ...

March 25, 2026 · 5 min · DBBS Tech

Arm Just Made Its Own Chip — And It's Coming for Intel, AMD, and the Entire Data Center

After 35 years as the Switzerland of semiconductors — licensing chip designs to anyone with a checkbook — Arm Holdings just crossed the Rubicon. It built its own chip. Not a demo. Not a reference design you’ll never see in production. A 136-core data center processor called the AGI CPU, fabricated on TSMC’s 3nm process, with Meta signed on as the debut customer. This isn’t incremental. This is tectonic.

The Hardware: 136 Cores of Pure Intent

The specs read like Arm had something to prove. ...

March 25, 2026 · 6 min · DBBS Tech

Musk's $25 Billion Terafab: The Most Ambitious AI Chip Factory Ever — Or the Next Dojo

The lights shooting into the Austin sky on Saturday night weren’t aliens. They were Elon Musk doing what Elon Musk does best — staging an event so audacious that you can’t look away, even if you’re not sure you believe a word of it. Inside the defunct Seaholm Power Plant in downtown Austin on March 21, Musk officially launched Terafab — a joint venture between Tesla, SpaceX, and xAI to build what he calls “the most epic chip-building exercise in history by far.” The price tag: an estimated $20–25 billion. The goal: producing one terawatt of computing power per year, with 80% of it destined for space. ...

March 23, 2026 · 6 min · DBBS Tech

AWS and Cerebras Are Ripping AI Inference Apart — On Purpose

The biggest bottleneck in AI isn’t training anymore. It’s inference — the moment a model actually does something useful. And AWS just partnered with Cerebras Systems to attack that bottleneck with an approach nobody has tried at this scale. The deal: Cerebras’ massive wafer-scale CS-3 chips will sit inside AWS data centers, accessible through Amazon Bedrock. The promise: 5x faster inference. The method: tearing the inference pipeline in half.

Splitting the Brain

Traditional AI inference runs both stages on the same GPU. You send a prompt, the chip processes it (prefill), then generates a response token by token (decode). One chip, both jobs. ...
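The prefill/decode split can be sketched in a few lines. This is a toy conceptual model of disaggregated inference — not the actual AWS/Cerebras implementation — where the two stages are separate functions that could run on separate accelerator pools, with only the KV cache crossing the boundary:

```python
# Toy sketch of disaggregated inference (hypothetical model, for intuition only).
# prefill() is one big parallel pass over the prompt (compute-bound);
# decode() generates one token at a time (memory-bandwidth-bound).

def prefill(prompt_tokens):
    """Process the whole prompt at once; return a toy 'KV cache'
    with one state entry per prompt token."""
    return [hash(("kv", t)) % 1000 for t in prompt_tokens]

def decode(kv_cache, max_new_tokens):
    """Generate tokens sequentially, reading and extending the
    KV cache at every step."""
    out = []
    for _ in range(max_new_tokens):
        next_tok = sum(kv_cache) % 50257  # stand-in for a model forward pass
        out.append(next_tok)
        kv_cache.append(hash(("kv", next_tok)) % 1000)
    return out

# In a disaggregated setup, prefill() runs on one chip pool and decode()
# on another; the KV cache is the only thing handed between them.
cache = prefill([101, 2023, 2003, 102])
tokens = decode(cache, max_new_tokens=4)
print(len(cache), len(tokens))
```

The point of the split is that the two stages have opposite hardware profiles: prefill wants raw parallel compute, decode wants memory bandwidth, so serving them on different silicon lets each chip do the job it's actually good at.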

March 22, 2026 · 4 min · DBBS Tech

Hair Dryers and Dummy Servers: Inside the $2.5 Billion Nvidia Chip Smuggling Bust

Federal agents arrested Super Micro Computer co-founder Wally Liaw on Thursday for allegedly running a $2.5 billion scheme to smuggle Nvidia-powered AI servers to China. The playbook included dummy servers staged in warehouses, hair dryers used to peel off serial numbers, and an auditor bribed with paid entertainment to skip inspections. This is the biggest AI export control enforcement action in U.S. history. And it reads like a heist movie. ...

March 20, 2026 · 4 min · DBBS Tech

Nvidia GTC 2026: Vera Rubin, a $1 Trillion Bet, and the Dawn of AI's Inference Era

Jensen Huang stood in front of 18,000 people at San Jose’s SAP Center on Monday, wearing his signature black leather jacket, and casually dropped a number that would make most Fortune 500 CEOs choke on their coffee: $1 trillion. That’s the revenue opportunity Nvidia now sees for its AI chips through 2027 — doubled from the $500 billion estimate it gave investors just last month. And after a nearly three-hour keynote that covered everything from space-based data centers to Disney robots to the future of gaming graphics, one thing is crystal clear: Nvidia isn’t just riding the AI wave anymore. It’s building the ocean. ...

March 18, 2026 · 5 min · DBBS Tech