How Nvidia, TSMC, Broadcom and Qualcomm will lead a trillion-dollar silicon boom – SiliconANGLE News
BREAKING ANALYSIS by
We believe the artificial intelligence wave will bring profound changes, not only to the technology industry but to society as a whole. These changes will perhaps be as significant to the world as the agricultural and industrial revolutions, both of which had drastic economic consequences.
Although the exact progression and timing of these changes are unpredictable, one thing is clear: The AI wave will not be possible without advancements in – and a stable supply of – hardware and software generally, and silicon specifically. The complexity of semiconductor design and manufacturing combined with rapid innovation and the vulnerability of the supply chain creates unique and challenging dynamics that in our view are reshaping leadership in the semiconductor industry.
Our forecast shows that the combined revenues of: 1) companies that supply manufacturing equipment, components and software to build fabrication facilities; 2) chip manufacturers; and 3) chip and AI software designers will approach $1 trillion this decade. Our research suggests that four companies – Nvidia Corp., Taiwan Semiconductor Manufacturing Co. Ltd., Broadcom Inc. and Qualcomm Inc. – will account for almost half of that trillion-dollar opportunity.
In this Breaking Analysis, we bring in theCUBE Research analyst emeritus David Floyer to quantify and forecast the dynamic semiconductor ecosystem. We compare market shares from 2010 with those of 2023 and provide a five-year outlook for more than a dozen of the top players in the industry. We also provide a view of where we see the overall market headed, our assumptions for the market and the top players, which firms we see winning and losing and why, with a bit of survey data from Enterprise Technology Research.
We’ll also address the following five items:
Let’s start with the macro impact that the generative AI awakening has had on information technology spending in the last two years. The data below from ETR shows the 19 sectors the company tracks each quarter. The vertical axis is spending velocity or Net Score and the horizontal plane is the Pervasion, or penetration of the sector within the survey.
We’ve shown this many times before, but note where AI was one month prior to ChatGPT’s launch in the October 2022 survey. It dropped just below the 40% magic line that month and since then has been up and to the right. Consequently, other sectors have been suppressed. As we’ve reported, 42% of customers indicate they’re funding AI by stealing from other budgets. And we know that generally enterprise AI return on investment is coming in small productivity wins at this time and for most organizations is not yet self-funding.
The point is that AI is consuming not only the conversation but also the spending momentum.
We believe there are three significant impacts for all organizations.
As such, our guidance to clients is that a combination of all three is ideal. If you are not planning for a tenfold productivity improvement over the next five to 10 years, there are startups and competitors that will, and they will put your business at risk.
OK, let’s cut right to the chase. Nvidia momentum is simply remarkable and has caught the attention of everyone in the industry. The pace of innovation coming out of the AI ecosystem generally and Nvidia specifically is astounding. Here’s a diagram that underscores the new era in computing that we’re in, catalyzed by large language models and the AI breakthroughs.
This chart shows the teraflop progression Nvidia has made since 2016. We’ve overlaid a depiction of the Moore’s Law progression. The comparison is remarkable with Nvidia demonstrating a 1000-times improvement in parallel/matrix computing (what Nvidia calls accelerated computing) in eight years versus a 100-times improvement from Moore’s Law in a decade.
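The comparison becomes even starker when the cumulative gains are converted into implied annual improvement factors. A quick calculation, using the 1,000-times-in-eight-years and 100-times-in-a-decade figures cited above:

```python
# Convert the cumulative gains cited above into implied annual
# improvement factors: gain ** (1 / years).
nvidia_annual = 1000 ** (1 / 8)    # 1,000x over 8 years
moore_annual = 100 ** (1 / 10)     # 100x over 10 years

print(f"Nvidia accelerated computing: ~{nvidia_annual:.2f}x per year")
print(f"Moore's Law:                  ~{moore_annual:.2f}x per year")
```

In other words, accelerated computing is compounding at roughly 2.4 times per year, versus roughly 1.6 times per year for classic Moore's Law scaling.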
It’s important to understand that in this episode, where we’re forecasting the semiconductor industry ecosystem, we’re taking liberties with the scope. By that we mean we’re modeling Nvidia as a full platform solution – a company that is building end-to-end AI data centers, what it calls the AI Factory – and it’s selling that capability through partners.
One of the key aspects of Nvidia’s moat is that it builds entire AI systems and then disaggregates and sells them in pieces. As such, when it sells AI offerings – be they chips, networking, software and the like – it knows where the bottlenecks are and can assist customers in fine-tuning their architectures.
Nvidia’s moat is deep and wide in our view. It has an ecosystem and is driving innovation hard. Chief Executive Jensen Huang has announced that there’s another big chip behind Blackwell – no surprise – and Nvidia is on a one-year cadence for systems and networking, and the new systems will run CUDA. Nvidia claims to be “all-in” on Ethernet, the company will continue to extend NVLink for homogeneous AI Factories, and the InfiniBand roadmap continues.
Huang’s claim and bet is the more you spend with Nvidia, the more you save and the more revenue you can drive.
In addition:
In addition, the goal is to crank it up and introduce a new system every year. In our opinion, the value to the users, to hyperscalers and to anyone using these technologies is so high that, combined with the cost of creating alternatives, it means to us that for at least the next five years, Nvidia will be the dominant supplier in the AI data center.
Let’s get to the meat of this research and our five-year outlook for the ecosystem. The table below lays out how we see the semiconductor industry evolving. In the first column we show the players in the ecosystem comprising the chip designers such as Qualcomm, the chip manufacturers such as TSMC, three leading firms that do both — Intel, Samsung and Micron Technology Inc., equipment manufacturers such as ASML Holding NV and Applied Materials Inc., and software providers such as Cadence Design Systems Inc., which is in the “other” category.
Of course we’re also including Nvidia, which we believe has become and will continue to be the most important player in the market. Again we’ve pushed the envelope a bit in terms of the forecast and are forecasting Nvidia’s entire revenue stream beyond just chips.
For each company we’re showing their related revenue in 2010, 2023 and our forecast for each firm in 2028 with a CAGR for the relevant time period.
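For readers who want to reproduce the growth rates in the table, CAGR is computed as (end ÷ start) raised to 1/years, minus one. A minimal sketch, using the Micron figures cited later in this analysis ($16 billion in 2023 growing to $31 billion in 2028) as the worked example:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two revenue points."""
    return (end / start) ** (1 / years) - 1

# Micron: $16B (2023) -> $31B (2028), per the forecast in this analysis.
print(f"{cagr(16, 31, 5):.0%}")  # ~14%
```

The same function reproduces the other growth rates in the table from their start- and end-point revenues.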
We ingested a series of relevant financial data points for each firm and combined them with our fundamental assumptions to create a top-down model of the industry, as we describe here. We tested this data against two external data points and added a third dimension: 1) company strategic forecasts based on their long-term financial frameworks; 2) inputs from various financial analysts that have made long-term projections for these companies; and 3) our own assumptions about how we see the market playing out.
We’ll share that our assumptions and the resulting forecasts deviate quite widely from generally accepted market narratives. In particular, the broad consensus, when you take into account the publicly available data, essentially says that everyone wins and the disruption to existing firms will be modest. We don’t see it this way. Rather, we forecast a dramatic shift to matrix computing, or so-called accelerated computing, and we see meaningful spending shifts causing market dislocation, particularly in traditional x86 markets.
The high-level findings in our market assessment are as follows:
Let’s look at the data in more detail by company, sorted by our 2028 projections in descending order. We’ll show the company, our projected CAGR and our revenue forecast for 2028.
In our view, Nvidia essentially has a monopoly, somewhat similar to Wintel’s duopoly of the 1990s, with the core GPU dominance and the AI operating system all within the same company. We believe Nvidia’s growth rate will actually accelerate as it penetrates new markets, and it will surpass $160 billion in revenue by 2028.
Importantly, we’re including more than just chips in this forecast. Specifically, we’re assuming Nvidia’s full platform and portfolio revenue, and the assumption is that Nvidia continues to execute across its portfolio on a rapid cadence.
Nvidia has executed brilliantly. It has bet on very large chips and invested in GPUs, CPUs, networking and software, offering a complete solution and a complete data center that can be disaggregated. Our assumption and belief is Nvidia will sustain this cadence for at least the next five years.
TSMC has become the go-to manufacturer for advanced chips. We have TSMC almost doubling in size over the next five years. Our core assumption is that volume economics will confer major strategic advantage to TSMC and it will remain the world’s No. 1 foundry by far.
It’s important to note how TSMC is investing. The company just announced its A16, 1.6nm process targeted for 2026. We believe this will be a significant milestone in its manufacturing, with nanosheet transistors and backside power delivery, which TSMC calls Super Power Rail. These innovations in our view are industry-leading, and the company’s track record of executing and delivering volume at high yields will allow it to maintain leadership.
Next on the list is Broadcom, for which we’re only including semiconductor revenue. Although we show the company’s CAGR slowing to 10%, that deceleration is largely an artifact of its very small 2010 base, which inflated its historical growth rate. Much of Broadcom’s 2023 revenue is dispersed in our model under the “other” category.
Broadcom has done a remarkable job through acquisitions and engineering. It doesn’t compete head-on with Nvidia in GPUs, although it is a major provider of silicon and AI chip IP for Google LLC, Meta Platforms Inc. and, we believe, ByteDance Ltd. via its custom silicon group. We see Broadcom’s semiconductor business growing at a CAGR of 10% over the next five years, taking the division to 1.6 times its current size. It solves really difficult problems connecting all the GPUs, CPUs, NPUs, accelerators and high-bandwidth memories together. It is uniquely positioned to continue to win in the market. Broadcom plays in virtually all sectors: consumer, enterprise, mobile, cloud and edge.
Broadcom in our view is a very well-positioned and well-run company. Its focus particularly on networking is vital. High-speed networking of all types is going to be absolutely essential for AI processing and it’s entrenched in this market. In particular, it’s very well set up with major internet players that will be AI leaders. As such, Broadcom has early visibility on the most critical market trends.
Qualcomm is very well-positioned both in mobile and now in AI PCs. We see it getting a huge tailwind from the recently introduced Windows AI PC stack from Microsoft. We have Qualcomm on a similar trajectory as Broadcom in terms of its growth. Essentially, Microsoft, with Copilot in Windows 11, is following Apple Inc.’s moves from several years ago, and that will be a big benefit for Qualcomm, which provides core silicon for AI PCs. This is more bad news for x86-based PCs.
Microsoft announced full support for Arm-based PCs built on Qualcomm silicon. Now Dell, Lenovo and others have announced Arm-based PCs, and suddenly you’ve got a whole plethora of these initiatives. They’re selling on the basis of improved performance and 24-hour battery life, going directly after Intel’s PC installed base.
So you can see our forecast indicates that the four companies at the top, Nvidia, TSMC, Broadcom and Qualcomm, comprise around 45% of a $900 billion-plus market by 2028.
We’re forecasting Intel’s Foundry revenue to comprise about $22 billion of a $54 billion business in 2028. So unlike many, we’re forecasting no growth for Intel over the time period. We see the rise in foundry revenue unable to offset the decline in x86. Combined with our assumption that AMD continues to gain share in x86 markets, we have Intel data center and client revenue dropping from $45 billion in 2023 to $26 billion in 2028.
Intel is late with support for AI PCs, and we’re projecting a 12- to 18-month delay in its 14A process, which is its big bet. It combines gate-all-around technology, which it calls RibbonFET, and backside power delivery, which Intel refers to as PowerVia. The company hopes to be the first to use High NA EUV technology, which combined with these other innovations is extremely bold, but also likely to be delayed. Hence our assumption that 14A gets pushed.
Intel needs all three of these innovations to be successful and differentiate from the rest of the industry. We believe Intel has a very good chance of executing on two of the three simultaneously, but even that is risky. Our assumption is Intel’s 14A gets to volume production and high yields in 2028 (best case) or 2029 (likely case) but perhaps even 2030 (worst case).
The key to understanding Intel, in our view, is that it has lost the volume lead. Apple and TSMC have taken that lead, and Apple’s Arm-based phones and PCs have given TSMC a significant learning-curve moat, to Intel’s detriment.
If, however, Intel is able to succeed and deliver 14A in volume production as it plans in 2026, and can follow with its 10A 1nm node in late 2027, then our forecasts will prove incorrect and Intel will be in a much better position than we project.
ASML has unique differentiation that is going to remain unmatched. Essentially we see ASML as a monopoly that’s going to continue and it will be able to command whatever pricing it wants.
High-bandwidth memory has become a new enabler for AI. It’s in high demand and short supply and that is going to propel SK Hynix. We have SK Hynix growth actually accelerating with revenue growing from $25 billion today to $40 billion by 2028. High-speed memory is incredibly important and the company has multiple options in this space.
We think Samsung is going to struggle to get its advanced process working. We think it’s going to continue to face challenges that constrict volumes and put it in a cost dilemma. We’ve got Samsung basically flat from its $40 billion today.
Intel has said that it intends to be the No. 2 foundry by 2030. Given Samsung’s struggles, going after Samsung is the right target for Intel; it’s just a matter of whether Intel can get there.
CEO Lisa Su has done an amazing job with this company. A key turning point was when AMD shed its fabs, despite co-founder Jerry Sanders once famously remarking, “Real men have fabs.” That didn’t really prove out for AMD in the long run. It took several years for the company to get back on track, but its persistence has paid off.
AMD is still very much tied to x86. By 2028, the end of our forecast period, we still have 45% of AMD’s revenue coming from x86, which puts downward pressure on a big part of the company’s total available market. The good news is our assumptions call for AMD to continue to steal share from Intel and at the same time make progress in AI hardware.
Of course, Intel’s going to fight like crazy for its x86 data center share, but we’re more sanguine on AMD’s outlook as a chip designer. It’s not saddled by foundry, and though that x86 pressure is a negative, we believe AMD will continue to take share. It is simply faster to market and has a quality product. For example, Oracle Corp. has just gone all-in on AMD-based chips for its new Exadata systems, a big win for AMD.
We think Applied Materials continues to execute. It’s in a really good position. It has more competition than does ASML, but we’ve got it doing pretty well here, growing from $27 billion in 2023 to $35 billion with a 6% CAGR. We’re basically forecasting ASML, SK Hynix, Samsung, AMD and AMAT all around that $35 billion to $40 billion range.
Essentially what we’ve done here is model the value contribution within Apple’s hardware to the silicon and made some assumptions around its value contribution in the chain. We saw that Apple, based on our assumptions, grew at a 15% CAGR from 2010 to 2023 and we’ve got it at 12% from 2023 to 2028. We’re assuming a $33 billion contribution from silicon.
There have been ongoing reports that Apple is going to sell silicon as a merchant supplier. We do not make that assumption in our figures. Nonetheless, Apple getting into the business of designing its own chips was profound. It started with the A series in smartphones and continues with the M series in its newest laptops and iPads. It was the first to ship neural processing units in both the iPhone and PCs. Now it has to make a major step-up as the AI PC competition heats up.
Apple quietly led the AI PC wave. It introduced large chips many years ago on iPhones and integrated the CPUs, NPUs and GPUs on the same chip. It has a large shared SRAM, an architecture that is a leading example of a design well-positioned for AI. Apple has a proven track record in silicon, for example evolving its M series: M1, M2, M3 and now M4.
We believe Apple is a leader in designing silicon architectures required to go into AI and we assume it will quickly respond to the Qualcomm AI PC trend. In our view, Apple is a main reason why Microsoft is pushing support for Arm-based designs, because it was under pressure from Apple.
We believe Micron can accelerate its growth rate, propelled by high-bandwidth memory. Similar to SK Hynix, demand is way outstripping supply for Micron’s HBM. Micron has executed very well. We see its CAGR accelerating to 14%, with revenue nearly doubling from $16 billion in 2023 to $31 billion by 2028. Micron not only designs chips, it has been a successful manufacturer for years.
We grouped hyperscaler cloud providers into a single category. Hyperscalers design their own silicon and partner with merchant suppliers such as Broadcom and others. Our forecast for hyperscalers excludes Broadcom’s contribution of custom chips, for example. We’re not double counting here.
We think hyperscaler general-purpose, training and inference chips will be used for cost-sensitive applications such as inferencing at the edge. We assume they’re not going to keep pace with Nvidia at the high end, but they will get their fair share. We assume AWS Graviton accounted for about 20% of AWS workloads in 2023. Inferentia and Trainium were a smaller portion of AI workloads in 2023, as were the counterparts at Google and Microsoft. We assume a healthy contribution from hyperscalers, but they will not be a dominant factor in terms of disrupting Nvidia in our view.
Hyperscalers are also deploying Nvidia IP. They really have to take Nvidia because they can’t build a comparable platform themselves. We assume buying from Nvidia will remain cheaper than going it alone for the next five years, and as such hyperscalers will continue to be large customers of Nvidia.
Other includes a long tail of suppliers across the value chain. You have Texas Instruments, GlobalFoundries, Chinese players such as Yangtze, CXMT, startups such as Cerebras, and many more.
We assume in our forecast that China doesn’t invade Taiwan and that hot wars don’t completely disrupt the market. Our forecast also assumes that the AI PC market generally follows Apple’s shift from x86 to Arm. We show x86 at about 13% of market revenue in 2010, dropping to 11% in 2023 and a projected 5% in 2028.
Here’s a visual of what we just went through. In the interest of time, we’ll just say that the two companies bucking the trend among the leaders are Intel and Samsung. Micron is in a different business and has uniquely figured out the combined model. And AI is bringing new investment to a market that was always considered risky by investors. News flash: It still is.
Here’s our forecast for PCs going back to 2009. When PC volumes peaked in 2011, that was the beginning of Intel’s descent from the mountaintop, even though most people didn’t see it. David Floyer made the call in 2013. And the key points here are:
While we forecast a surge in PC volumes, it is important to understand that this does not signal a return to the dominance once enjoyed by PC chip manufacturers like Intel. The landscape is now heavily influenced by Arm-based chips, whose wafer volumes are ten times those of x86. Companies such as Nvidia, Apple and Tesla recognized this shift early and have leveraged Wright’s Law to gain significant cost and time-to-market advantages in Arm-based chip design and manufacturing.
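Wright’s Law holds that unit cost falls by a fixed percentage with every doubling of cumulative volume. A hedged sketch of why a tenfold wafer-volume advantage matters (the 15% learning rate here is an illustrative assumption, not a figure from this analysis):

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate=0.15):
    """Unit cost after `cumulative_units` have been produced; cost falls
    by `learning_rate` with each doubling of cumulative volume."""
    b = math.log(1 - learning_rate) / math.log(2)  # learning exponent
    return first_unit_cost * cumulative_units ** b

# Relative unit cost at 10x the cumulative volume of a competitor:
ratio = wrights_law_cost(1.0, 10) / wrights_law_cost(1.0, 1)
print(f"~{1 - ratio:.0%} lower unit cost")  # ~42% lower at a 15% learning rate
```

Under that assumed learning rate, a chipmaker riding ten times the cumulative wafer volume would enjoy roughly 40%-plus lower unit costs, which is the structural advantage the Arm ecosystem now holds over x86.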
This shift underscores the increasing value of Arm technology in reducing design costs and highlights the challenges faced by x86. The market dynamics have fundamentally changed, and Arm’s advancements have made it a dominant force, fundamentally altering the competitive landscape.
Let’s close on some of the key issues we haven’t hit.
The future of AI and its market dynamics are evolving rapidly, with significant implications for key players and emerging technologies. Our analysis highlights the pivotal trends and forecasts that will shape the AI landscape over the next decade, focusing on AI inference at the edge, energy needs, geopolitical risks and the potential shifts in semiconductor manufacturing.
The AI market is set for significant transformation, with AI inference at the edge poised to become the dominant workload. Energy innovations and geopolitical stability are crucial for sustaining this growth. Though Nvidia currently leads, the competitive landscape remains fluid, with potential shifts driven by technological advancements and market disruptors. Our analysis underscores the need to monitor these developments closely as they will shape the future of AI.
As always, we’ll be watching.
How do you see the market playing out over the next five years? What do you think of our assumptions and forecasts? Let us know your thoughts and thanks for being part of the community.
Thanks to Alex Myerson and Ken Shifman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.
Remember we publish each week on theCUBE Research and SiliconANGLE. These episodes are all available as podcasts wherever you listen.
Email [email protected], DM @dvellante on Twitter and comment on our LinkedIn posts.
Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail.
Note: ETR is a separate company from theCUBE Research and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at [email protected] or [email protected].
All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of theCUBE Research. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.