Weebit Nano at the 2024 Design Automation Conference

Innovative Memory Architectures for AI – Don’t Miss this DAC Session!

We all know that AI applications are proliferating at an unprecedented rate, while at the same time memories aren’t scaling along with logic. This is one of many reasons the industry is exploring new memory technologies and architectures.

When it comes to embedded non-volatile memory (NVM), the incumbent technology, flash, has reached its limits in terms of power consumption, speed, endurance and cost. It is also not scalable below 28nm, so it’s not possible to integrate flash and an AI inference engine together in a single SoC at 28nm and below for edge AI applications.

Embedded ReRAM is the logical alternative. Embedding ReRAM into an AI SoC would replace off-chip flash devices, and it can also be used to replace the large on-chip SRAM to store the AI weights and CPU firmware. Because the technology is non-volatile, there is no need to wait at boot time to load the AI model from external NVM.

ReRAM is also much denser than SRAM, which makes it less expensive than SRAM per bit, so more memory can be integrated on-chip to support larger neural networks for the same die size and cost. While on-chip SRAM will still be needed for data storage, the array will be smaller and the total solution more cost-effective. With ReRAM, designers can implement advanced AI in a single IC while saving die size and cost.

ReRAM will also be a building block for the future of edge AI: neuromorphic computing. In this paradigm, also called in-memory analog processing, compute resources and memory reside in the same location, so there is no need to ever move the weights. The neural network matrices become arrays of ReRAM cells, and the synaptic weights become the conductance of the NVM cells that drive the multiply operations.
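The analog multiply-accumulate described above can be sketched numerically. In the model below (a minimal illustration, not Weebit’s implementation; the function name and values are hypothetical), weights are encoded as cell conductances G, inputs are applied as read voltages V, each cell contributes a current I = G·V by Ohm’s law, and the currents summing on each bitline perform the accumulation of a matrix-vector multiply:

```python
import numpy as np

def reram_matvec(conductances, voltages):
    """Simulate one analog matrix-vector multiply in a ReRAM crossbar.

    Each cell passes I = G * V (Ohm's law); currents summing on a
    bitline (Kirchhoff's current law) yield the per-column dot product.
    """
    return conductances.T @ voltages

# 3 input rows (wordlines) x 2 output columns (bitlines)
G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6],
              [5e-6, 6e-6]])   # conductances in siemens encode the weight matrix
V = np.array([0.1, 0.2, 0.3])  # input activations applied as read voltages

I = reram_matvec(G, V)         # per-bitline output currents, in amperes
```

Because every cell multiplies and every bitline sums in parallel, the whole matrix-vector product completes in effectively one step, without moving the weights out of the array.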

Because ReRAM cells have physical and functional similarities to the synapses in the human brain, it will be possible to emulate the behavior of the human brain with ReRAM for fast real-time processing on massive amounts of data. Such a solution will be orders of magnitude more power-efficient than today’s neural network simulations on traditional processors.

At the Design Automation Conference (DAC) 2024, Gideon Intrater from Weebit Nano will go in depth on this topic during his presentation, ‘ReRAM: Enabling New Low-power AI Architectures in Advanced Nodes.’

Gideon’s presentation will be part of the session, ‘Cherished Memories: Exploring the Power of Innovative Memory Architectures for AI Applications,’ which will explore cutting-edge technologies transforming the landscape of memory design. The session is organized by Moshe Zalcberg of Veriest Solutions and moderated by Raul Camposano of Silicon Catalyst; other presenters include experts from RAAM Technologies and Veevx Inc.

Don’t miss this DAC session!

  • Cherished Memories: Exploring the Power of Innovative Memory Architectures for AI Applications
  • Time: 10:30 AM – 12:00 PM
  • Location: IP Room: 2012, 2nd Floor
Also Read:

Weebit Nano Brings ReRAM Benefits to the Automotive Market

2024 Outlook with Coby Hanoch of Weebit Nano

ReRAM Integration in BCD Process Revolutionizes Power Management Semiconductor Design
