Artificial intelligence is a small but important part of TSMC’s business

Given the exorbitant demand for compute and networking to run AI workloads, and the dominance of Taiwan Semiconductor Manufacturing Co in etching the compute engine chips and supplying the complex packaging for them, you would think the world’s largest foundry would have made big bucks in the second quarter.

But sadly, even with AI systems booming, that market is still small enough, and the money TSMC gets for chip etching and packaging is small enough relative to the market prices of the devices it makes, that when the smartphone, PC, and general-purpose server markets all take a plunge at the same time, AI compute engine sales aren’t enough to fill the gap.

Thus, in fact:

Remember: When TSMC says HPC, it means etching high-performance chips used in PCs, servers, storage, and switches, not just chips for HPC simulation and modeling. AI is in that black line above, and you know it is because that line has grown like crazy over the past few years and hasn’t fallen off a cliff the way smartphones, which have clearly struggled over the past couple of quarters, have. Over the past five years, smartphones have driven about half of TSMC’s revenues, but the HPC segment (as defined above) has become dominant over the past two quarters and is expected to remain so for the foreseeable future.

In a conference call with Wall Street analysts, CC Wei, the foundry’s chief executive officer, said manufacturing GPU, CPU, and custom ASIC compute engines for AI training and inference accounts for just 6 percent of its total revenue, which came to $15.68 billion, meaning AI products generated just $941 million in revenue for the quarter. You can probably attribute some of the production of InfiniBand and Ethernet ASICs to AI on top of this number, but you get the idea. Companies like Nvidia are raking in far more AI money than TSMC, which shows that while manufacturing and packaging are important, design and sales generate far more revenue and, presumably, far more profit.
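For those who want to check that arithmetic, here is a minimal sketch in Python; the 6 percent share and the $15.68 billion quarterly revenue come from the figures above, and the variable names are our own:

q2_revenue_billion = 15.68   # TSMC revenue for Q2 2023, in billions of dollars
ai_share = 0.06              # share of revenue from AI compute engines, per Wei

ai_revenue_billion = q2_revenue_billion * ai_share
print(f"AI-related revenue: ${ai_revenue_billion:.3f} billion")  # roughly $0.941 billion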

Translation: It’s great to be ASML, it’s great to be TSMC, but it’s absolutely great to be Nvidia.

The recent surge in AI-related demand is directionally positive for TSMC, Wei said on the call. Generative AI requires higher computing power and interconnect bandwidth, which drive up semiconductor content. Whether you use CPUs, GPUs, or AI accelerators and related ASICs for AI and machine learning, the commonality is that they all require cutting-edge process technology and a robust foundry design ecosystem. These are all strengths of TSMC.

Looking ahead, Wei said TSMC expects artificial intelligence in its many forms to boost its revenues over the next five years, with AI-related revenue growing at a compound annual growth rate close to 50 percent and reaching a low-teens percentage of total revenue. Depending on where you think TSMC’s revenues will land, AI will generate perhaps $3.7 billion in revenue in 2023 (assuming a pretty good second half for AI and a slightly increasing share), and a CAGR of 48.5 percent (which we chose somewhat arbitrarily) puts it at around $18 billion in 2027. If that represents low teens of overall revenue, call it 13 percent for fun, TSMC will have somewhere in excess of $135 billion in overall revenue in 2027 according to this model.
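Here is that back-of-the-envelope model spelled out; the $3.7 billion starting point, the 48.5 percent CAGR, and the 13 percent share are our assumptions from the paragraph above, not TSMC guidance:

ai_2023_billion = 3.7     # estimated AI compute engine revenue in 2023, in billions
cagr = 0.485              # assumed compound annual growth rate
years = 4                 # 2023 through 2027

ai_2027_billion = ai_2023_billion * (1 + cagr) ** years   # about $18 billion
total_2027_billion = ai_2027_billion / 0.13               # if AI is 13 percent of revenue

print(f"Projected AI revenue in 2027:   ${ai_2027_billion:.1f} billion")
print(f"Implied total revenue in 2027:  ${total_2027_billion:.1f} billion")  # north of $135 billion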

TSMC therefore expects to roughly double revenues between now and 2027, with AI revenues increasing nearly 5X. But artificial intelligence is not the only, or even the biggest, engine of revenue growth. The desire to put more chips into more things, using more advanced processes that deliver the best performance per watt, in PCs, smartphones, tablets, IoT devices, automotive chips and systems, and a host of edge and embedded devices is what’s driving this growth, and AI is just one example of it.

Isn’t math fun?

In the quarter ended June, TSMC’s net income fell 26.5 percent to $5.93 billion against $15.68 billion in sales, which were down 13.7 percent. The slump in the PC and smartphone markets is lasting longer than expected, and the economic recovery in China that TSMC and others were banking on has not materialized. Hence the decline in sales. Costs to develop and ramp the 3-nanometer and 2-nanometer processes, the former of which is now ramping for shipments later this year, are rising, which also puts pressure on profits. And continued investment in capital equipment, including a very expensive fab TSMC is building in Arizona, forced the company to burn through $4.38 billion of its treasury to run the business and make $8.17 billion in capital expenditures for new foundries and equipment. That said, capital spending was up 10.4 percent from the year-ago period, and the company is still on track to spend about $34 billion on capital this year, which is an extraordinary amount of money.

As Wei likes to remind everyone, TSMC places bets on technology trends that take many years to play out, and those bets keep getting bigger as semiconductor demand, and the total addressable market for semiconductors, grows. What TSMC is doing is extremely difficult, and it is really the only foundry delivering advanced processes and packaging at the volumes and technology nodes customers need. Intel still has a long way to go, and Samsung is very picky about the kind of work it runs through its fabs, which are primarily memory-oriented. Those are pretty much the only options out there right now, unless you’re a Chinese company, in which case you have to rely increasingly on Semiconductor Manufacturing International Corp, the indigenous foundry that can’t get its hands on the extreme ultraviolet (EUV) lithography needed to go down to 5 nanometers and below.

Right now, TSMC is cautious about what the second half of 2023 will look like, given that it misjudged how long the inventory burndown and the recovery in China would take, but it expects sales to fall about 10 percent this year, which would bring them to around $68.3 billion. Given the costs of building foundries in the US, Japan, and Europe and the 3-nanometer ramp, profits will most likely come under similar pressure, and TSMC could see the kind of decline in net income for the full year (about 2X the rate of decline in revenue) that it saw in the second quarter.
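A quick sanity check of that full-year outlook, with the 10 percent decline and the $68.3 billion projection taken from the paragraph above and the implied 2022 baseline back-calculated from them:

projected_2023_billion = 68.3   # projected 2023 revenue, in billions of dollars
revenue_decline = 0.10          # expected year-over-year decline

implied_2022_billion = projected_2023_billion / (1 - revenue_decline)
print(f"Implied 2022 revenue: ${implied_2022_billion:.1f} billion")  # about $75.9 billion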

No matter how much pressure TSMC is under, it’s nothing like the Jovian gravity Intel is under. And TSMC is still making money and currently has $47.86 billion in cash on hand.

What TSMC doesn’t have right now is foundries churning out chips at full speed, and running at full capacity is the only way a foundry makes money. One reason sales in its HPC business aren’t higher is that they are gated by the Chip on Wafer on Substrate (CoWoS) back-end packaging that many compute engine makers use to build their devices; some use chiplets, others use monolithic chips, and either way they are often combined with HBM stacked DRAM memory. Wei was vague, but said TSMC is at least doubling its CoWoS capacity this year, and reminded everyone that this takes time.

The chip etching itself is not the bottleneck; the packaging is. AMD has been limited by substrate availability for its Epyc server CPUs for the past couple of years, and CoWoS is a limiter for GPUs and FPGAs. Going forward, all chip makers will need to be very careful to keep wafer etching capacity and packaging capacity aligned.

What’s immediately obvious is that TSMC has ample capacity for churning out 7-nanometer chips and plenty of capacity for 5-nanometer chips. And these will be long-lived nodes, just as the 3-nanometer processes that are currently ramping will be. We don’t think traditional CPU and GPU companies are stalling their 5-nanometer designs to wait for 3-nanometer processes, but given the state of the global economy, perhaps they are stretching out their roadmaps a bit before investing heavily in 3-nanometer technologies. More than a few AI ASIC makers are sitting on machine generations that are a few years old and not talking about compute engine upgrades, for example.

Significantly, the jump from 3 nanometers to 2 nanometers does not bring as large an increase in performance or density as the jump from 7 nanometers to 5 nanometers did. But importantly, it brings a much greater increase in energy efficiency, which everyone is looking for in devices large and small.

Wei provided an update on the 3-nanometer and 2-nanometer ramps, as he has done for the past few quarters. The N3 and N3E processes, he boasted, will lead the industry in power, performance, area, and the underlying transistor architecture.

N3 is already in volume production with good yield, Wei said. We are seeing strong demand for N3 and expect a strong ramp of N3 in the second half of this year, supported by both HPC and smartphone applications. N3 is expected to contribute a mid-single-digit percentage of our total wafer revenue in 2023. N3E further extends our N3 family with improved performance, power, and yield, and provides comprehensive platform support for both HPC and smartphone applications. N3E has passed qualification and met performance and yield targets, and it will begin volume production in the fourth quarter of this year.

Wei added that the N2 process, which uses nanosheet transistors comparable to those in Intel’s 18A process, is on track for mass production in 2025 and will provide a full-node performance boost and power reduction, as well as a new feature called the backside power rail, which will deliver a 10 percent to 12 percent speed boost and an additional 10 percent to 15 percent gain over and above the node shrink for chips etched with the N2 process. The backside power rail feature will be available in the second half of 2025 and will be in production for customers in 2026.

And despite what Intel claims about transistor supremacy in 2025 with its 18A process, Wei said N2 will be the best, whether in the West or the East.

What the industry needs are two foundries around the 2-nanometer node, and hopefully they’re neck-and-neck and producing well.
