Key Takeaways
- Musk announced TERAFAB — a joint Tesla/SpaceX/xAI chip plant in Austin, Texas — in March 2026, with Intel Foundry Services (14A process) confirmed as manufacturing partner in April.
- Tesla's 2026 capital expenditure budget jumped to $25 billion, three times its 2025 spend, with chip design and a new semiconductor research fab in Austin as explicit line items.
- Tesla quietly acquired an unnamed AI hardware company for up to $2 billion in stock — buried in Note 14 of its Q1 2026 10-Q, never mentioned on the earnings call.
- If TERAFAB delivers, it changes the economics of Optimus robots and FSD inference at a scale where buying Nvidia chips would otherwise become prohibitive.
- This is Part 1 of a seven-part series on the integrated system behind Musk's ambitions. The chip layer is where it starts.
Elon's Interplanetary Stack — Series Overview
This is Part 1 of seven. Each part covers one layer of the integrated system Musk is building — from chips to robots to compute to launch to energy to Mars. The parts build on each other, but each stands alone.
- Part 1 (You are here): TERAFAB — the silicon foundation
- Part 2: Tesla Beyond Cars — Optimus robots and the FSD 2026 roadmap
- Part 3: xAI Colossus 2.0 — the world's largest AI supercomputer
- Part 4: Starship 2026 — SpaceX's 10-million-ton-per-year launch revolution
- Part 5: Why AI, robots, and SpaceX will overwhelm America's power grid
- Part 6: Elon's Mars vision — how robots, AI, and Starship make it real
- Part 7: Bold vision or impossible dream? A 2026 reality check
Tesla buried a $2 billion acquisition in a footnote. The company never mentioned it on its Q1 2026 earnings call. It didn't appear in the shareholders' letter. According to Electrek's Fred Lambert, the disclosure occupied exactly one sentence in Note 14 — Subsequent Events — the very last section of the financial statements. An unnamed AI hardware company. Up to $2 billion in stock and equity awards. No fanfare.
That kind of disclosure tells you something. When a company is excited about what it bought, it leads the call with it. When it buries a $2 billion acquisition in a legal footnote and moves on, it means the acquisition is a piece of a much larger plan, one the company isn't ready to explain yet. That plan has a name: TERAFAB.
What is TERAFAB and what is Tesla actually building in Austin?
TERAFAB is a joint chip factory in Austin, co-run by Tesla, SpaceX, and xAI, with Intel Foundry Services (14A process) as manufacturing partner. Musk announced it in March 2026.
On March 23, 2026, Elon Musk announced that Tesla, SpaceX, and xAI would jointly operate TERAFAB, a chip plant in Austin. The Verge's Andrew J. Hawkins reported two weeks later that Intel Foundry Services is the manufacturing partner, using its 14A process node. The production structure is sequential: Tesla runs the pilot line first, then SpaceX takes over for high-volume scaling once the process is qualified. The partnership separates chip design (handled by the three Musk companies) from manufacturing execution (Intel's fabs), while the sequenced handoff preserves flexibility to scale the fab's mission as demand grows.
The TERAFAB name follows Musk's established pattern of "tera"-scale branding — terawatt, teraflop, terabyte. At its core, it signals ambition at a scale that makes existing infrastructure look inadequate. Whether the fab itself reaches terawatt compute density or the name is aspirational framing, the goal is clear: Tesla wants to control its own silicon supply chain, end-to-end.
This isn't Tesla's first attempt at custom chips. The company has designed its own inference hardware since the HW3 computer in 2019 and has iterated steadily since. But designing a chip and manufacturing a chip are different businesses. Until now, Tesla designed internally and outsourced manufacturing to foundries. TERAFAB is the move to bring manufacturing decision-making in-house — or at least to operate as a principal in the fab relationship, not just a customer.
Tesla's Q1 2026 earnings call confirmed the investment is real and growing. Tesla's 2026 capital expenditure plan is $25 billion — up from $8.5 billion in 2025, $11.3 billion in 2024, and $8.9 billion in 2023, according to TechCrunch's Kirsten Korosec. That is three times the historical budget in a single year. The capex plan explicitly includes chip design and a new semiconductor research fab in Austin. This is not a side project. It's a strategic priority receiving a significant portion of the largest capital budget Tesla has ever deployed.
Why does Tesla want to build its own AI chips when Nvidia exists?
Custom silicon cuts inference costs, removes supply dependency, and enables chip designs optimized for robotics — none of which Nvidia can offer an automaker at Tesla's scale and specificity.
The short answer is unit economics. Tesla expects to produce Optimus robots at scale — production begins at Fremont in late July or August 2026, per Electrek. If each robot requires continuous AI inference, and if Tesla is targeting a $20,000-or-below retail price at volume, the chip cost in each unit matters enormously. Nvidia's data center GPUs run tens of thousands of dollars per unit at current pricing, and even its embedded inference modules cost hundreds to thousands of dollars each. At the robot volumes Musk is describing, buying merchant silicon makes the economics unworkable.
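To make the unit-economics argument concrete, here is a back-of-envelope sketch in Python. Every number in it (merchant chip price, fab capex, marginal cost, unit volumes) is an illustrative assumption, not a reported Tesla or Nvidia figure; the point is only that amortized in-house silicon undercuts merchant pricing past a volume threshold.

```python
# Back-of-envelope comparison of per-robot chip cost.
# All figures below are illustrative assumptions, not reported numbers.

FAB_CAPEX = 10_000_000_000   # assumed fab investment, USD
MARGINAL_COST = 300          # assumed per-chip manufacturing cost, USD
MERCHANT_PRICE = 2_500       # assumed merchant inference-chip price, USD

def in_house_cost(units: int) -> float:
    """Per-chip cost with fab capex amortized over total unit volume."""
    return FAB_CAPEX / units + MARGINAL_COST

for units in (1_000_000, 5_000_000, 20_000_000):
    cost = in_house_cost(units)
    verdict = "cheaper" if cost < MERCHANT_PRICE else "more expensive"
    print(f"{units:>12,} units: ${cost:>8,.0f}/chip "
          f"({verdict} than ${MERCHANT_PRICE:,} merchant)")
```

Under these assumptions the in-house chip only wins past a few million units, which is why the fab bet is coupled so tightly to robot volume targets.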
The longer answer involves control. Tesla announced HW4 Plus on its Q1 2026 earnings call — an "AI4.1" upgrade that doubles the RAM in each FSD chip from 16 gigabytes to 32 gigabytes, bringing the total system memory to 64 gigabytes. On that same call, Musk confirmed that HW3 — the hardware installed in millions of consumer vehicles — "simply does not have the capability" for unsupervised Full Self-Driving. That's an admission that Tesla's own chip requirements are escalating faster than it can update its vehicle fleet. If Tesla is designing chips on a roughly two-year iteration cycle, and if the compute requirements keep climbing, the only way to stay ahead of the curve is to control the design-manufacturing pipeline entirely.
| Factor | Buy from Nvidia (status quo) | TERAFAB (Tesla custom silicon) |
|---|---|---|
| Unit cost at 1M+ robot scale | High — merchant chip pricing | Lower — amortized fab investment |
| Supply chain control | Dependent on Nvidia allocations | Internal or preferential access |
| Chip spec alignment | General-purpose; not optimized for robots or FSD | Purpose-built for inference + robotics workloads |
| Design iteration speed | Constrained by Nvidia roadmap | Internal roadmap; can accelerate on Tesla's schedule |
| Capital requirement | Low upfront; ongoing per-unit cost | High upfront ($25B+ capex); lower long-run unit cost |
| Timeline risk | None — chips available today | High — fab build and yield ramp take years |
Apple made this transition with its A-series processors starting in 2010 and completed it with the M-series migration in 2020. Google made it with Tensor Processing Units starting in 2016. Both companies achieved significant performance-per-watt gains and competitive insulation by owning their silicon. The trade-off is capital, time, and engineering talent — all of which Tesla now says it is willing to commit. With $44.7 billion in cash and equivalents at the end of Q1 2026, Tesla has the balance sheet to absorb the risk.
What is terawatt compute and why does the scale matter?
Terawatt compute means AI processing power at the scale where training large models and running inference across millions of robots becomes physically feasible.
The TERAFAB name, like xAI's Colossus supercomputer in Memphis, reflects the scale of compute Musk believes his companies will need. Colossus is one of the largest AI training clusters in the world — running workloads for Grok and the models that will eventually run in Optimus. TERAFAB is the supply-side answer to the same question: if xAI needs this much compute to train models, and those models need to run in robots and vehicles at scale, where does the inference silicon come from?
The connection between Tesla's chip factory and SpaceX is not accidental. SpaceX's Starlink constellation and future space-based infrastructure will require radiation-hardened, power-efficient custom chips that no merchant foundry is prioritizing. By building a joint fab, all three companies can share development costs, split production capacity, and design for workloads that span robot inference, FSD compute, and orbital hardware. The vertical integration reasoning applies to space as much as it does to cars and robots.
How does Tesla's approach compare to what Apple and Google did?
Apple and Google both built custom chips to cut dependency on third-party silicon. Tesla is making the same move — at larger scale, with manufacturing included.
Apple's transition is the cleaner analogy. When Apple introduced the M1 chip in November 2020, it was the result of more than a decade of A-series chip design experience. The company didn't try to build a fab. It contracted TSMC for manufacturing while controlling the design architecture entirely. Tesla appears to be making a similar design-first move, but with Intel Foundry Services replacing TSMC as the fab partner, and with the long-term goal of owning more of the stack.
The differences matter, too. Apple was designing chips for a single product category — consumer devices — with stable, predictable workloads. Tesla is designing chips for at least three distinct workload categories: vehicle FSD inference, humanoid robot compute, and space-grade hardware. Each has different thermal, power, and radiation requirements. Building a single fab that serves all three is a harder problem than Apple faced. It may also be a more defensible moat if Tesla solves it.
Google's TPU program offers a slightly different lesson. Google never publicly shared TPU unit costs, but the performance gap between TPU-based inference and equivalent Nvidia hardware became the template for custom silicon strategy across the industry. Tesla's FSD inference requirements are a close analogue to the large-scale inference workloads Google optimized TPUs for. If Tesla achieves similar efficiency gains, the implications for autonomous driving economics are significant — especially as robotaxi unit economics start to matter.
What are the real risks — and what could go wrong with TERAFAB?
Building a chip fab takes years and demands process engineering that has tripped up companies far larger than Tesla. Intel's own foundry transition is still unfinished.
The most important risk is the one the TERAFAB name obscures: semiconductor fabrication is extraordinarily hard. TSMC spent decades building the process engineering expertise that allows it to produce chips at the leading edge. Intel, despite having operated its own fabs for nearly six decades, has struggled to execute its process roadmap consistently — which is part of why Tesla is working with Intel Foundry Services rather than, say, TSMC or Samsung. The Intel partnership may be a pragmatic choice given capacity availability and geopolitical considerations, or it may reflect that better foundry options aren't available to Tesla at this stage.
There is also Musk's track record on timelines. Optimus's volume production timeline has shifted multiple times. FSD unsupervised has been "coming soon" for four years. Starship's first fully successful flight is still pending. TERAFAB will almost certainly experience delays. The question is whether the strategic direction is correct — and whether Tesla can sustain the capex commitment through the multi-year development cycle without competitive alternatives eating into its position. Tesla's CFO has already warned that the company will have negative free cash flow for the rest of 2026.
The unnamed $2 billion AI hardware acquisition complicates the picture in an interesting way. We don't know what Tesla bought. It could be chip IP, a team, process technology, or tooling that accelerates TERAFAB's timeline. It could also be entirely unrelated to TERAFAB — a sensor company or a software play. The fact that Tesla didn't disclose the name or purpose suggests the asset is either too early-stage to discuss publicly or too strategically sensitive to telegraph to competitors.
TERAFAB Scale Visualized
| Metric | TERAFAB Target | Context |
|---|---|---|
| Compute output* | 1 terawatt/year | Approximately 50× current global AI chip output estimates |
| Facility footprint* | ~100 million sq ft | Larger than TSMC's largest fab complex in Taiwan |
| 2026 Tesla capex (confirmed) | $25 billion | 3× Tesla's 2025 spend of $8.5 billion |
| Intel process node (confirmed) | 14A | Intel's most advanced node; currently in production ramp |
* Musk-stated targets, not independently verified.
Tesla's Austin Giga Texas expansion is expected to anchor TERAFAB's pilot production line, blending automotive-scale manufacturing with semiconductor process technology.
Nexairi Analysis: Why the Silicon Layer Comes First
The series framing matters here. I chose to start with chips, not robots or rockets, because chips are where economic leverage in AI concentrates. Whoever controls the inference silicon in a robot-and-self-driving world holds the same position Intel held in PCs in the 1990s — not the most visible player, but the one extracting value from every unit shipped.
Tesla's bet is that it can escape that dependency before the robotics market reaches volume. The $25 billion capex commitment in a single year — while the company is still generating most of its revenue from selling cars — is a signal of how seriously Musk takes this window. If you believe Optimus will ship at scale within two to three years, then owning the chip supply chain has to start now. A fab takes two to four years to build and qualify. The math works, barely, if everything goes right.
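The "works, barely" claim can be sanity-checked with trivial arithmetic. The sketch below uses the durations stated in this paragraph plus one assumed robot-volume date; nothing in it is a confirmed schedule.

```python
# Toy schedule check: does a fab started in 2026 qualify before
# robot volume production needs its chips? Durations are assumptions
# drawn from the ranges stated in the text, not a confirmed plan.

FAB_START = 2026
FAB_BUILD_YEARS = (2, 4)        # "two to four years to build and qualify"
ROBOT_SCALE_YEAR = 2026 + 3     # assumed: robots "at scale within two to three years"

earliest_ready = FAB_START + FAB_BUILD_YEARS[0]   # 2028
latest_ready = FAB_START + FAB_BUILD_YEARS[1]     # 2030

print(earliest_ready <= ROBOT_SCALE_YEAR)  # True: the fast end makes the window
print(latest_ready <= ROBOT_SCALE_YEAR)    # False: the slow end misses it
```

Only the optimistic end of the fab-build range lands inside the robot-volume window, which is exactly what "barely, if everything goes right" means.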
The Intel partnership is the most interesting open variable. Intel Foundry Services is betting its corporate survival on winning customers like Tesla — and the TERAFAB contract, using Intel's 14A process node, is exactly the kind of anchor customer Intel needs to justify its foundry investment. That creates alignment — but it also means Tesla is riding a partner that is itself in a high-stakes transition. If Intel's execution on 14A improves significantly over the next 18 months, TERAFAB becomes more viable. If Intel continues to struggle with yield, Tesla may need to find alternatives or face delays its internal chip roadmap can't absorb.
My read: the direction is correct. The timeline is optimistic. The execution is the only question that matters.
What does TERAFAB mean for the rest of Elon's stack?
Every layer above TERAFAB — robots, AI compute, launch infrastructure, and eventually Mars — depends on silicon that hits the right cost and supply targets at scale.
Think of TERAFAB as the foundation layer of a much larger structure. In the weeks ahead, this series will cover the robots those chips will power (Part 2), the supercomputer where the AI running those robots is trained (Part 3), the launch system that eventually delivers hardware off-planet (Part 4), the energy infrastructure that powers all of it (Part 5), the Mars endgame (Part 6), and finally a full audit of whether any of this is actually achievable on Musk's timeline (Part 7).
Each layer depends on the one below it. Without cheap, capable, controlled silicon — the kind you can only get by owning or co-owning the fab — the economics of robots at scale don't work. Without robots at scale, the labor economics for building Mars infrastructure don't work. TERAFAB is not a chip factory story. It's the first sentence of a much longer argument.
In January 2026, Tesla invested $2 billion in xAI specifically to enable AI deployment "in the physical world at scale," per The Verge. In Q1 2026, Tesla also acquired $2 billion in SpaceX equity — two separate cross-company moves made within the same quarter, documented in Tesla's own filings. SpaceX bought more than 18% of all Cybertrucks registered in the US in Q4 2025, per Bloomberg data cited by The Verge. These are companies acting as divisions of a larger system, not independent entities making arm's-length decisions. TERAFAB is the clearest architectural expression of that integration to date: one chip factory, three companies, one long-term goal.
Whether TERAFAB delivers on its promise is genuinely unknown. What's not unknown is the intent. Musk buried a $2 billion acquisition in a footnote and moved on without explaining it. That's what a man building something too large to fit in a single earnings call looks like.
Next week: The robots TERAFAB was built to power — Optimus and the FSD 2026 roadmap. Part 2 publishes Thursday, April 30.
Sources
- The Verge — Elon Musk says he's planning to open a "Terafab" chip plant in Austin, Texas, jointly run by Tesla and SpaceX (March 23, 2026)
- The Verge — Intel will help build Elon Musk's Terafab AI chip factory (April 7, 2026)
- TechCrunch — Tesla just increased its spending plan to $25B — here's where the money is going (April 22, 2026)
- TechCrunch — Tesla Q1 revenue rises, driven by EV sales and FSD subscriptions (April 22, 2026)
- Electrek — Tesla (TSLA) quietly discloses $2 billion AI hardware company acquisition buried in filing (April 23, 2026)
- Electrek — Tesla announces HW4 Plus with doubled memory (April 23, 2026)
- The Verge — Tesla invested $2 billion in xAI to enable AI in the physical world (January 28, 2026)
- SEC EDGAR — Tesla Q1 2026 10-Q (Note 14: Subsequent Events)
- The Verge — SpaceX bought 18% of all Cybertrucks registered in Q4 2025 (April 16, 2026)
Fact-checked by Jim Smart
