Alphabet has developed a custom chip for running machine learning algorithms – SiliconANGLE News

UPDATED 11:37 EDT / MAY 19 2016
by Maria Deutscher
Amazon.com Inc. is apparently not the only web giant that designs its own processors. At its annual developer conference this week, Alphabet Inc. revealed the existence of a homegrown server chip that can supposedly run machine learning algorithms ten times faster than publicly available alternatives.
This performance advantage is the result of a secretive development effort that began about two years ago, shortly before Amazon's own entry into the semiconductor world with its $350 million acquisition of Annapurna Labs Ltd. Alphabet tried to keep the project under wraps, but the world found out in October when the company was caught recruiting chip designers for a then-unspecified internal initiative. In a new blog post, the search giant reveals that it has since started mass-producing the homegrown chip and now has thousands of units deployed throughout its data centers.
The Tensor Processing Unit, as the chip is called, is a custom ASIC built to fit inside the existing external storage slots of Alphabet's likewise internally designed server racks. It's sourced from a third-party supplier and programmed with special instructions that achieve high performance by sacrificing some precision when carrying out calculations. As a result, the company claims, the processor can not only run machine learning workloads faster but also do so while consuming less power than a conventional server accelerator.
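The precision trade-off described above is broadly similar to quantization, where full-precision floating-point values are mapped onto a narrow integer range before computation. The sketch below is an illustrative example of 8-bit quantization, not Google's actual scheme; the function names and the symmetric scaling choice are assumptions for demonstration.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 values onto the signed 8-bit range [-127, 127].

    Uses a single symmetric scale factor derived from the largest
    absolute value, so -max maps to -127 and +max maps to +127.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

weights = np.array([0.51, -1.2, 0.003, 0.97], dtype=np.float32)
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# The int8 form needs a quarter of the memory of float32, and integer
# arithmetic is far cheaper in silicon -- at the cost of a small rounding
# error, bounded here by half of one scale step.
print(np.abs(weights - approx).max())
```

The speed and power gains come from the narrower datapath: an 8-bit multiply-accumulate unit is much smaller and cheaper than a 32-bit floating-point one, which is the general idea behind trading precision for throughput.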
Alphabet estimates that the semiconductor industry would have to go through three more generations of Moore's law to organically match the speed of the Tensor Processing Unit, which amounts to about seven years. But given how chip makers are finding it increasingly difficult to improve the density of their processors as they move down the nanometer scale, it will likely take much longer than that in practice. Thankfully, however, the search giant doesn't plan on keeping the machine learning ecosystem waiting another decade.
Alphabet's head of infrastructure, Urs Hölzle, revealed at the developer event that the company will publish a paper detailing the innovations in the Tensor Processing Unit this fall. The goal is presumably to let chip makers incorporate the technology into their designs and thus make it available to the broader machine learning community. Of course, there's a chance the document will contain only partial specifications, given the importance of machine learning to the search giant's business and the massive investment required to develop a custom processor. But the move should nonetheless be a major boon for artificial intelligence projects in the years to come.