The FAANGs and the Foundries

Hyperscale companies rule the electronics world. The so-called FAANGs — Facebook, Amazon, Apple, Netflix, and Google, along with Alibaba, Tencent, Baidu, and Microsoft — far outweigh the world’s biggest chipmakers by whatever metric you choose. During the next 10 years, we can be sure that these giants will shape the industry in a way that few of us can imagine.
That’s evident in the relationship between Apple and TSMC. Apple accounts for about one-fifth of the sales at the world’s largest foundry and more than half of its 7-nm capacity. As more FAANGs design chips customized for artificial intelligence and high-performance computing, this trend is likely to transform the semiconductor industry and put more pressure on its traditional participants.
Two years ago, Google announced that its Tensor Processing Unit (TPU) had beaten Intel’s Xeon and Nvidia’s GPU in machine-learning tests by more than an order of magnitude. Google made the TPU at TSMC using the foundry’s 28-nm technology.
About a year ago, Amazon announced its Graviton chips — also made at TSMC, on a 16-nm process — to power customers’ websites and other services. Off-the-shelf chips from Intel or AMD are designed for general applications; the Graviton is tuned to Amazon’s own environment, where it runs very efficiently and helps cut costs. Customers with access to Graviton-powered servers have pared expenses for some services by half. Now, Facebook is also designing its own chips.
Companies such as Intel, AMD, and Nvidia must be alarmed that the FAANGs are stepping on their turf. To be sure, the FAANGs aren’t likely to go on an acquisition spree and trigger another wave of buyouts in the semiconductor industry; the internet giants are already under antitrust scrutiny in Washington, D.C. But they might try other under-the-radar tactics, such as poaching talent from the chipmakers. They certainly can afford it.
The FAANGs have been working with chip design houses such as Global Unichip, a Taiwanese design-services firm dedicated to TSMC. They could easily use those ties, for example, to recruit key people.
So far, most of the AI chips designed by the FAANGs are for data centers. Earlier this year, Qualcomm also announced plans to enter this business, which is forecast to be worth US$17 billion by 2025.
Intel is expected to unveil its Nervana NNP-L1000 later this year, but the AI chip’s performance is likely to lag that of the latest Nvidia data center GPUs.
Image and speech recognition and big-data processing are stretching chip functionality beyond personal computing. AI chips help reduce data center power consumption, which until now has been doubling annually — an unsustainable rate.
While more than 40 companies worldwide are developing AI-specific accelerators, most are designing chips for inference, not model training, where Nvidia dominates the multibillion-dollar market. But what happens when more companies make AI silicon for edge devices? Not much has happened yet, but the time is coming. AI on the edge will be a much larger market than AI in data centers, according to most experts.
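The training/inference split matters at the silicon level: training requires a backward pass and repeated weight updates, while inference is a single forward pass that is far cheaper to accelerate. A minimal Python sketch of the two workloads, using a toy one-layer model in plain NumPy (illustrative only, not any vendor’s stack):

    import numpy as np

    # Toy one-layer model: y = x @ W
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 1))           # weights
    x = rng.normal(size=(8, 4))           # a batch of 8 inputs
    y_true = rng.normal(size=(8, 1))      # target outputs

    # Inference: one forward pass -- the workload most new AI accelerators target
    y_pred = x @ W

    # Training: a forward pass plus gradient computation and a weight update --
    # the multibillion-dollar market where Nvidia dominates today
    loss = np.mean((y_pred - y_true) ** 2)
    grad = 2 * x.T @ (y_pred - y_true) / len(x)   # d(loss)/dW
    W -= 0.01 * grad                              # gradient-descent step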
There are still many unanswered questions about how the compute capability necessary to run edge AI applications will actually be deployed in the real world, according to Steve Roddy, vice president of Arm’s Machine Learning Group. In an automobile, there may be multiple distributed systems or one centralized system, he said. In a factory, there may be wireless connections from every smart device back to a centralized CPU or distributed computing.
Roddy sees the same issue with cities: How dense will the compute be in 5G networks versus how centralized? “One of the goals that Arm has is to create the systems in the middleware layers that will allow applications to move seamlessly from different compute models as those models evolve over time,” he said.
Clearly, Arm plans to be one of the key players in distributed AI. So does Google. “Sensors that are able to do smart things, like voice interfaces or accelerometers, are going to become so low-power and so cheap that they’re going to be everywhere,” said Pete Warden, leader of Google’s TensorFlow Mobile/Embedded Team.
“Computational devices powered by AI will touch our lives in almost every conceivable way,” said Byron Reese, author of the whitepaper “AI at the Edge: A GigaOm Research Byte.” “The power, security, and speed requirements of these devices necessitate that inference be performed at the edge, where the data is collected. This will enable an ever more common way of scaling the digital devices that will come to play a role in our lives.”
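In concrete terms, inference at the edge means loading a trained model onto the device and running it where the data originates. A minimal sketch using the TensorFlow Lite interpreter — the embedded runtime associated with Warden’s team — assuming a model has already been converted to a .tflite file (the file name here is a placeholder):

    import numpy as np
    import tensorflow as tf

    # Load a pre-converted model; "model.tflite" is a placeholder name
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Build one input of the shape and type the model expects,
    # e.g. a frame of audio or a window of accelerometer samples
    sample = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], sample)

    # Run inference on-device -- no round trip to a data center
    interpreter.invoke()
    result = interpreter.get_tensor(output_details[0]["index"])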
ST, Xilinx, and a number of other traditional chipmakers aim to address the market for inference at the edge, but they are still at the threshold of this business, which promises to far exceed the market for AI in data centers.
The traditional players are standing at the starting line with a group of much larger newcomers. Bet on the newcomers to win the race. ■
Alan Patterson is a contributing editor at EE Times Europe and a technology journalist who has worked in Asia for most of his career.
