Nvidia Vs. AMD Vs. Intel: Which AI Stock Is Best As Competition Heats Up? – Forbes

In the artificial intelligence stock universe, this is clearly no three-way battle. Nvidia is the undisputed AI leader, commanding more than 90% market share in data-center GPUs and more than 80% market share in AI processors. At best, Advanced Micro Devices (AMD) and Intel (INTC) are competing with their own AI chips to position themselves as viable alternatives to Nvidia’s H100 graphics processing unit (GPU). But Nvidia is well ahead in the AI game, having already evolved to the more sophisticated H200, with the new Blackwell platform due later this year. AMD and Intel have plenty of catching up to do. The more critical threat to Nvidia is the attack on the monopoly of CUDA, its proprietary software stack that allows developers to leverage the parallel processing capabilities of Nvidia GPUs to accelerate machine learning workloads.
From a stock price perspective, Nvidia is knocking it out of the park. Shares of Nvidia have run up more than 200% in the past year, positioning the AI chip giant as the third-largest company in the U.S. with a $2.97 trillion market cap, trailing only Apple’s $3.17 trillion and Microsoft’s $3.21 trillion. AMD stock, while not in Nvidia’s league, has nevertheless fared well with a nearly 25% rally in the past year, reaching a market cap of $257 billion, roughly twice Intel’s. The laggard among the three is Intel, with shares down 6% on the year and down 40% from their December highs, mainly due to the company’s weaker-than-expected second-quarter outlook. This article endeavors to provide insights on how the three compare as AI investments.

Intel shares are down more than 30% in the past five years, versus AMD’s nearly 400% climb and Nvidia’s whopping 3,000-plus percent rally. Intel stock is facing challenges from the “technology gap that was created by over a decade of underinvestment,” to quote Intel CEO Pat Gelsinger. AMD has been a primary beneficiary of Intel’s past manufacturing missteps.
Intel’s problems started with missing the boat on the 10nm and 7nm processes in chip manufacturing. Processors made using smaller, more advanced nanometer (nm) processes are typically faster, perform better and are more power efficient.
Two companies that flourished from Intel’s manufacturing fumbles are Taiwan Semiconductor Manufacturing Company (TSMC) and AMD. While TSMC cruised through the 10nm and 7nm processes, AMD, a fabless semiconductor company, grew its share of the x86 server CPU market from almost zero to 23.9% through the first quarter of 2024.
Intel missed out on the mobile revolution as well. The iPhone could have had an Intel chip, but today about 99% of premium smartphones are powered by Arm-based chips. That was a costly mistake, because Apple later stopped using Intel chips in its computers, too: starting in 2020, it transitioned to its own Arm-based chips, breaking a 15-year partnership with Intel. For reference, Apple Macs represent roughly 10% of the global PC market. Intel’s loss was Arm’s gain.
Arm captured 9% of the overall server CPU market in 2023, even as Intel continues to dominate with a 61% share. Arm uses a RISC architecture, versus the Intel x86 instruction set used by most PCs. Arm-based chips consume less power than x86-based chips, and lately Arm chips have seen a significant rise in adoption. Arm architecture is at the core of both Amazon Web Services’ custom Graviton server chips and Qualcomm’s flagship Snapdragon chips.
Is Arm a serious threat to Intel? It appears so. Here’s why. Nvidia is cutting Intel out entirely from its latest “Blackwell” GPU platform, pairing two Nvidia B100 GPUs with one Arm-based processor. For reference, AI-oriented GPU-based servers often pair multiple Nvidia GPUs, sometimes eight or more, with an Intel CPU to facilitate the parallel processing essential for AI tasks such as deep learning and neural network training. Nvidia’s Grace Hopper Superchip likewise combines its own GPUs with Arm’s high-performance Neoverse cores.
Arm-based chips also power Microsoft’s Surface laptops shipping on June 18. These laptops are equipped with Qualcomm’s Snapdragon X Elite or Plus chips to compete more effectively against Apple’s MacBook laptops.
Google’s first custom Arm-based CPUs, the Axion processors, designed for the data center, will be available to Google Cloud customers later this year. Google says Axion will deliver 30% better performance than the fastest general-purpose Arm-based processors available in the cloud, and up to 50% better performance and up to 60% better energy efficiency than comparable current-generation x86-based CPUs.
It should be noted that Intel’s market cap is now lower than even Arm’s.
Last week, Intel began shipping the first of its next-generation Xeon server processors: a Xeon 6 “efficiency” model (E-core) designed for public and private clouds where power efficiency and performance are critical. The more powerful “performance” version (P-core) of the Xeon 6, designed to run computationally intensive AI models, is slated to arrive in the third quarter.
A lot hinges on the Xeon 6 chips for Intel in its attempts to reclaim data center market share for x86 chips from AMD. A Reuters report citing data from Mercury Research states that “Intel’s share of the data center market for x86 chips has declined 5.6 percentage points over the past year to 76.4%, with AMD now holding 23.6%.”
Intel says its Xeon 6 P-core processors will perform AI inferencing 3.7 times better than AMD EPYC processors, while Xeon 6 E-core processors will provide 1.3 times better performance per watt than AMD EPYC chips on media transcoding workloads.
The Xeon 6 “efficiency” model has a 144-core count, giving it a lead over AMD’s fourth-generation EPYC processors, which top out at 128 cores. A higher core count generally means superior performance, as more cores enable greater parallel processing.
However, AMD is not resting on its laurels. Its fifth-generation EPYC processors, code-named Turin, will feature up to 192 cores and arrive in the second half of this year. In turn, Intel plans to release a 288-E-core version of the Xeon 6 early next year.
Intel is pricing its Gaudi 2 and Gaudi 3 AI chips well below Nvidia’s H100 chips. Intel claims the new Gaudi 3 accelerator delivers “50% on average better inference and 40% on average better power efficiency” than Nvidia’s H100 at “a fraction of the cost.” The Gaudi 3 will be widely available in the third quarter.
A Gaudi 3 accelerator kit, which includes eight AI chips, is priced at $125,000, while the previous-generation Gaudi 2 kit costs $65,000. That pricing appears comparable with AMD’s flagship Instinct MI300 lineup, also pitted directly against Nvidia’s H100 GPUs; an Instinct MI300X GPU reportedly sells for approximately $15,000.
AMD’s MI300X GPU, which has been on the market longer, has not dented demand for Nvidia’s H100 AI GPUs, which reportedly cost between $30,000 and $40,000, at least twice the MI300X’s price. So it is uncertain whether the low prices of Gaudi 3 will make any sizable impact on H100 demand, but it should be noted that Gaudi 3 has gained support from major players like Dell, HPE, Lenovo, Supermicro, Asus, Gigabyte and QCT. AMD expects to launch its MI350 series of chips next year. The MI350 is based on an entirely new architecture and is expected to deliver a 35x improvement in inference performance.
Probably in a bid to neutralize the cost advantage touted by competitors, Nvidia has signaled solid returns on investment (ROI) of 5x to 7x for customers spending on Nvidia infrastructure. In fact, Nvidia is its own best competition, as it shifts to a new “one-year rhythm” for releasing new chip architectures, a significant acceleration from its prior two-year cycle.
Asked how Nvidia customers who have spent billions of dollars on existing products would respond to newer offerings that quickly surpass them, and that outpace the rate at which the existing products depreciate, CEO Jensen Huang suggested that performance-averaging across generations will be the smart way for businesses to deal with “a whole bunch of chips coming at them” when making and saving money are immediate priorities and time is of the essence.
Nvidia has deflated concerns of any demand slowdown as it transitions from its current Hopper AI platform to the more advanced next-generation Blackwell system. Blackwell delivers up to 30x Hopper’s inference performance at up to 25x lower cost and energy consumption. Analysts had worried that customers would hold off on Hopper orders because of the upcoming Blackwell launch. However, Nvidia said it saw increasing demand for Hopper through the quarter (after it announced Blackwell) and expects demand to outstrip supply for some time as the transition happens. Blackwell systems are also designed to be backward compatible, making the transition easy for customers. Demand for both the Hopper and Blackwell platforms is well ahead of supply and is expected to remain so well into next year.
Intel expects about $500 million in Gaudi 3 sales this year, AMD sees about $3.5 billion in annual AI chip sales, and Nvidia’s data center business, with its AI GPUs, is estimated to generate a whopping $57 billion in sales in the second half of the year.
If Nvidia’s GPUs continue to be in overwhelming demand, it is because of CUDA, the software stack that allows developers to leverage the parallel processing capabilities of Nvidia GPUs to accelerate machine learning workloads.
Nvidia was ready with a battle-seasoned CUDA years before the boom in deep learning, giving it the first-mover advantage. The expansive libraries and tool sets built on CUDA, plus integrated native support for CUDA GPU acceleration in major deep learning frameworks such as TensorFlow, PyTorch, Caffe, Theano and MXNet, set the ball rolling. CUDA became the gold standard for GPU acceleration and grew deeply ingrained in all aspects of the AI ecosystem.
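To make that lock-in concrete, here is a minimal sketch (in Python, assuming a machine with an Nvidia GPU and the PyTorch library installed) of how little code it takes to tap CUDA acceleration from a mainstream framework:

```python
import torch

# Pick the Nvidia GPU when present; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy matrix multiplication. On an Nvidia GPU, PyTorch dispatches this
# to CUDA-accelerated kernels behind the scenes; the developer writes no
# GPU-specific code at all.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b
print(c.device)  # prints "cuda:0" when an Nvidia GPU is in use
```

That one-line device switch, multiplied across years of libraries, tutorials and production codebases, is the moat.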
CUDA alternatives like AMD’s MIOpen, Intel’s oneAPI and even vendor-agnostic frameworks like OpenCL have stumbled due to limited user adoption, stemming from inadequate tooling and support compared with CUDA. Migrating sophisticated neural network codebases from CUDA to alternative programming paradigms remains a formidable challenge.
Even with the attempts to unseat CUDA faltering, Nvidia has never been complacent about competition. On the contrary, it protects its market dominance by constantly evolving CUDA’s capabilities and high-performance libraries to accelerate every aspect of deep learning workflows on Nvidia GPUs. Nvidia’s partnerships with the likes of UC Berkeley and Facebook help optimize popular deep learning models on CUDA. Besides, Nvidia is the darling of a risk-averse enterprise clientele that prefers a proven technology like CUDA.
The efforts to reduce reliance on Nvidia and democratize access to non-CUDA-centric acceleration are gaining momentum.
AMD is taking aim at Nvidia’s dominance by leveraging its open-source ROCm framework, which competes directly with the de facto CUDA standard. ROCm is supported by Google’s open-source machine learning framework, TensorFlow, while PyTorch, another major framework, has introduced initial native AMD GPU integration on an experimental basis to reduce the CUDA lock-in.
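One illustration of how that lock-in is loosening: on PyTorch’s ROCm builds, AMD GPUs are exposed through the same torch.cuda API, so code written for Nvidia hardware can often run unchanged. A minimal sketch, assuming a ROCm build of PyTorch and a supported AMD GPU:

```python
import torch

# On a ROCm build of PyTorch, torch.version.hip is set and the familiar
# "cuda" device name transparently targets the AMD GPU.
print(torch.version.hip)          # non-None on ROCm builds
print(torch.cuda.is_available())  # True with a supported AMD GPU

x = torch.ones(1024, device="cuda")  # lands on the AMD GPU under ROCm
print(x.device)
```

That API compatibility chips away at the switching cost that has long protected CUDA.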
Other initiatives from PyTorch include Layer-wise Adaptive Rate Scaling (LARS) to aid in scaling deep learning tasks across diverse hardware platforms. A unified memory allocator in PyTorch 1.11 brings performance improvements to AMD GPUs and Apple M1 chips with unified memory architectures, while the graph-mode execution backend introduced in PyTorch 1.5 extends support to workflows on non-Nvidia hardware like Intel integrated GPUs and budget AMD cards with typically smaller memory capacities.
OpenAI’s heavy investment in Triton, its open-source GPU programming language that can target hardware beyond CUDA, also aims to reduce reliance on Nvidia.
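For a flavor of what that looks like, here is a minimal sketch of a Triton kernel, adapted from Triton’s canonical vector-addition example (assuming the triton and torch packages and a supported GPU). The kernel is written in Python rather than CUDA C++:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(8192, device="cuda")
y = torch.randn(8192, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)  # number of program instances
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```

The same kernel source can, in principle, be compiled for non-Nvidia backends as those mature, which is exactly the kind of portability that erodes a proprietary moat.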
Intel, for its part, is investing heavily to make oneAPI a reliable alternative to CUDA. While CUDA is not disappearing overnight, the momentum behind these alternatives underscores the reality that the era of proprietary AI hardware stacks may not last forever.
Sovereign AI: Nvidia expects its Sovereign AI revenue to approach the high single-digit billions this year, from nothing last year, by helping jumpstart the AI ambitions of nations across the world.
Automotive vertical: Automotive is expected to be Nvidia’s largest enterprise vertical within the data center segment this year, driving a multibillion-dollar revenue opportunity across on-prem and cloud consumption.
Blackwell platform: The next-generation Blackwell platform, which enables real-time generative AI on trillion-parameter large language models, is in full production, with shipments slated to begin in the second quarter, ramp in the third quarter, and stand up in customer data centers in the fourth quarter. Nvidia expects significant Blackwell revenue this year.
Spectrum-X: In the first quarter, Nvidia started shipping its new Spectrum-X Ethernet networking solution, which enables Ethernet-only data centers to accommodate large-scale AI. Spectrum-X is ramping in volume with multiple customers and should grow into a multibillion-dollar product line within a year.
Intel hopes 2024 will be the trough for operating losses in its struggling Foundry business, which deepened its operating loss to $7 billion in 2023 from $5.2 billion in 2022 on a 31% year-over-year revenue drop to $18.9 billion. Intel expects the Foundry business to break even roughly midway between now and the end of 2030, and to drive considerable earnings growth over time.
In March, Intel was awarded up to $8.5 billion in direct funding, plus the option to receive federal loans of up to $11 billion, under the CHIPS Act, which aims to build semiconductor fabs on U.S. soil to protect against a supply crunch if China ever invaded Taiwan. The funding will help Intel advance its commercial semiconductor projects in Arizona, New Mexico, Ohio and Oregon, while supporting its plans to invest more than $100 billion in the U.S. over five years to expand chipmaking capacity and capabilities and accelerate AI technologies.
It was almost a cinch that Intel would be a key beneficiary of the CHIPS Act, because it operates factories, or fabs, that manufacture chips, in addition to designing processors. AMD and Nvidia are fabless: they only design chips, which TSMC manufactures. Intel hopes to win a solid piece of the contract manufacturing business as it positions its fabs to make AI chips for rival semiconductor companies as well as its own.
For the first quarter, Intel reported a 10% drop in revenue for its foundry business and an operating loss of $2.5 billion. But Intel expects the foundry business to improve quarter over quarter through 2030.
Intel sees the foundry segment achieving 40% non-GAAP gross margins and 30% operating margins by the end of 2030, while it plans to steer the Intel Products business towards a 60% gross margin and 40% operating margin.
After lagging TSMC for many years, Intel expects to finally return to process technology leadership by 2025 with Intel 18A, with its five-nodes-in-four-years (5N4Y) process roadmap on track. The 18A is Intel’s next-generation technology for manufacturing 1.8nm-class chips, roughly equivalent to TSMC’s proposed 2nm node in the same timeframe.
However, TSMC disputes Intel’s claim, saying its N3P process will maintain technical superiority over Intel’s sub-2nm 18A. N3P is on track to be production-ready in the second half of this year, reaching the market earlier. TSMC says the N3P process will match Intel’s 18A node in power, performance and density despite the nominal difference in size (3nm versus 1.8nm), and that N3P’s earlier arrival, lower cost and proven manufacturing record will give it the edge over 18A.
TSMC’s arguments about early-to-market advantages somewhat resonate with comments from Nvidia founder and CEO Jensen Huang about the strategic imperative of staying ahead in the AI race rather than aiming for merely incremental improvements, when he posed the question: “do you want to be repeatedly the company delivering groundbreaking AI or the company delivering 0.3% better?”
Intel has mostly been a laggard and is now playing catch-up rather than enjoying a first-mover advantage. However, that may not signify much when Microsoft, the largest U.S. company by market value, is betting on Intel’s 18A process to manufacture a forthcoming in-house-designed chip. As of May 30, Intel noted that it had six external foundry customers for Intel 18A and a lifetime deal value greater than $15 billion.
Yet the same Microsoft that embraced Intel’s 18A said the new “Copilot+ PC” AI features for Windows 11 will require at least 40 TOPS (trillion operations per second), implying they cannot run on Intel’s Meteor Lake hardware, which delivers 11.5 TOPS. Microsoft instead chose Qualcomm’s Snapdragon X Elite, whose neural processing unit (NPU) offers 45 TOPS.
Windows Copilot+ PCs help users be more productive and creative. For reference, users can retrieve information by typing in cues; find, create, summarize and analyze information without opening multiple apps and files; and convert their words into a PowerPoint presentation with visuals.
With the next-generation Lunar Lake, Intel is moving past its Meteor Lake limitations. The chip giant is promising 50% faster graphics performance on Lunar Lake compared with Meteor Lake, and Lunar Lake will have an NPU that delivers 48 TOPS, implying it could outperform Qualcomm’s Snapdragon X Elite.
However, Qualcomm’s Snapdragon X Elite-powered Copilot+ PCs arrive on June 18, giving Qualcomm a head start over Lunar Lake, which is expected to launch in the third quarter. Meanwhile, AMD’s Ryzen AI 300 mobile SoC sets a new benchmark with 50 TOPS (above Microsoft’s Copilot+ requirement), and notebooks based on it will debut in July. AMD says it is working with Microsoft to meet the new Copilot+ standards.
After years of being a laggard, Intel stock offers a contrarian opportunity with its new products and processes, support from the U.S. government and major tech companies, and a projected turnaround for its Foundry business. The former chip legend has strived to make up for a lost decade via investments and strategic partnerships in its foundry services. Intel’s willingness to open its fabs to third-party customers will not only generate new revenues but also position it on the good side of a U.S. government that wants its silicon chips homegrown; the benefits could show up as further tax incentives, funding and federal loans. However, strong execution of strategic priorities will be key to any potential upside. With a multi-year execution cycle still ahead, risks include delays in launch timelines and Intel continuing to cede first-mover advantage to rivals. Key watchpoints will include quarterly earnings reports and product launches going as planned.
Intel stock valuation: INTC trades at a one-year forward price/earnings multiple of 16x (based on its 2025 EPS estimate of $1.98). The year 2025 is taken as the reference because Intel expects to return to process technology leadership next year. Assuming a conservative multiple rerating to 19x (below INTC’s five-year average P/E of 23x), we arrive at a stock price target of around $37, roughly 20% upside from current levels.
Despite the stunning 3,000+% rally in the past five years, Nvidia stock has more steam left. Dethroning Nvidia will be a herculean challenge for competitors, who at best can likely position themselves as viable alternatives and collectively claim roughly 20% to 25% of the AI hardware market. As long as Nvidia keeps evolving and enhancing its CUDA moat, it has little to be concerned about. CUDA is both its biggest strength and its biggest vulnerability: if the CUDA monopoly is broken, the competitive edge could unravel quickly, and efforts by its customers-cum-rivals to reduce the CUDA lock-in are already underway. The consolation is that Jensen Huang is well aware of the situation and is not going to sit back and watch his life’s work sink into oblivion. In any case, displacing CUDA will not happen overnight. Expansion beyond cloud service platforms into multiple multibillion-dollar verticals, including consumer internet companies, enterprise, Sovereign AI, automotive and healthcare customers, should inspire the next wave of growth for Nvidia. That said, no stock moves straight up; pullbacks along the way will provide buying opportunities, and that is true of Nvidia as well.
Nvidia stock valuation: NVDA trades at a one-year forward price/earnings multiple of 34x (based on its 2025 EPS estimate of $3.55). Assuming a conservative multiple rerating to 40x (below the five-year average P/E of 47x), we arrive at a stock price target of $142, roughly 17% upside from current levels.
AMD stock is a key AI bet as tech giants and AI frameworks strive to break the CUDA dominance. AMD’s open-source ROCm, pitted against the de facto CUDA standard, is supported by Google, PyTorch, OpenAI and more. AMD has perfected the art of being a runner-up through years of vying with Intel for x86 server CPU market share, even as it continues to evolve and compete against Intel, Nvidia and Qualcomm in several aspects of AI. AMD stock is 30% off its 52-week high, reached in March this year. The selloff creates a buying opportunity.
AMD stock valuation: AMD trades at a one-year forward price/earnings multiple of 29x (based on its 2025 EPS estimate of $5.54). Assuming a conservative multiple rerating to 35x (below the five-year average P/E of 43x), we arrive at a stock price target of nearly $194, more than 20% upside from current levels.
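All three price targets above come from the same simple arithmetic: multiply the 2025 EPS estimate by the assumed forward P/E multiple. A minimal sketch in Python, using only the figures cited in this article:

```python
def price_target(eps_estimate: float, assumed_pe: float) -> float:
    """Forward price target = estimated EPS x assumed P/E multiple."""
    return eps_estimate * assumed_pe

# (ticker, 2025 EPS estimate, conservatively rerated P/E) from the text above
for ticker, eps, pe in [("INTC", 1.98, 19), ("NVDA", 3.55, 40), ("AMD", 5.54, 35)]:
    print(f"{ticker}: ${price_target(eps, pe):.2f}")

# INTC: $37.62  (the article rounds to around $37)
# NVDA: $142.00
# AMD:  $193.90 (rounded to nearly $194)
```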
Two stocks outside the trio of Nvidia, AMD and Intel are worth noting: TSMC (TSM) and Arm (ARM). Both appear well positioned to benefit from the heating AI chip battles, especially TSMC.
Shares of Nvidia, AMD and Intel likely offer 17% to 20% upside potential from current price levels. TSMC and Arm also appear well positioned to benefit from the heating AI chip battles.
Please note that I am not a registered investment advisor and readers should do their own due diligence before investing in this or any other stock. I am not responsible for the investment decisions made by individuals after reading this article. Readers are asked not to rely on the opinions and analysis expressed in the article and encouraged to do their own research before investing.
Contract chip manufacturer Taiwan Semiconductor Manufacturing Company Limited (TSM) appears well positioned to benefit from the ongoing AI boom, as it manufactures AI chips for major technology companies including the AI bellwether Nvidia. 
Nvidia is the undisputed AI leader, commanding more than 90% market share in data-center GPUs and more than 80% market share in AI processors. AMD, although not in Nvidia’s league, is upping its game with new AI products.
After years of being a laggard, Intel stock offers a contrarian opportunity with its new products and processes, support from the U.S. government and major tech companies, and a projected turnaround for its Foundry business.
Investing in any stock is risky, especially technology stocks, which carry a high risk/reward profile. Nvidia, AMD and Intel are no exception. However, these stocks seem positioned to benefit from strong AI tailwinds.
