PLACEMENT OF COMPUTE AND MEMORY FOR ACCELERATED DEEP LEARNING


United States of America

APP PUB NO: 20250110808A1
SERIAL NO: 18978383


Abstract


Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element that executes programmed instructions using the data and a router that routes the wavelets. The routing is in accordance with virtual channel specifiers (colors) of the wavelets and is controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers, including usage of the respective colors, and to configure the compute elements, including the respective programmed instructions each compute element is configured to execute.
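As a rough illustration of the placement idea summarized above, the Python sketch below is a hypothetical toy placer, not the software stack described in the patent. It assumes an invented neural-network description (LayerDesc), a 2-D grid of processing elements, and per-PE router configuration keyed by a color (virtual channel); all names, sizes, and the greedy column-wise strategy are assumptions made for illustration only.

```python
# Illustrative sketch only (not the patented software stack): map layers of a
# neural-network description onto a 2-D array of processing elements (PEs) and
# record, per PE, the router "color" (virtual channel) that carries wavelets
# toward the next layer. All data structures here are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class LayerDesc:
    name: str
    weight_bytes: int  # memory needed for the layer's parameters

@dataclass
class PEConfig:
    program: str = "idle"                                          # layer kernel this PE runs
    router_colors: Dict[int, str] = field(default_factory=dict)    # color -> destination layer

def place(layers: List[LayerDesc], rows: int, cols: int,
          pe_mem_bytes: int = 48 * 1024) -> Dict[Tuple[int, int], PEConfig]:
    """Greedy column-wise placement: each layer gets enough contiguous PE columns
    to hold its weights; a distinct color per layer boundary forwards wavelets
    from one layer's PEs to the next layer's PEs."""
    fabric = {(r, c): PEConfig() for r in range(rows) for c in range(cols)}
    col = 0
    for color, layer in enumerate(layers):
        # ceiling division: columns needed so the layer's weights fit in PE memory
        cols_needed = max(1, -(-layer.weight_bytes // (pe_mem_bytes * rows)))
        if col + cols_needed > cols:
            raise ValueError(f"not enough PEs to place {layer.name}")
        next_hop = layers[color + 1].name if color + 1 < len(layers) else "output"
        for c in range(col, col + cols_needed):
            for r in range(rows):
                pe = fabric[(r, c)]
                pe.program = layer.name
                pe.router_colors[color] = next_hop  # wavelets leave on this color
        col += cols_needed
    return fabric

if __name__ == "__main__":
    net = [LayerDesc("conv1", 200_000), LayerDesc("conv2", 400_000), LayerDesc("fc", 100_000)]
    placement = place(net, rows=4, cols=8)
    print(placement[(0, 0)])  # PEConfig(program='conv1', router_colors={0: 'conv2'})
```

The determined placement plays both roles named in the abstract: it selects the program each compute element runs and the colors each router uses to forward wavelets.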



Patent Owner(s)

Patent Owner: CEREBRAS SYSTEMS INC
Address: 1237 E ARQUES AVE, SUNNYVALE, CA 94085


Inventor(s)

Inventor Name           Address          # of Filed Patents   Total Citations
Funiak, Stanislav       St. Lucia, AU    13                   156
James, Michael Edwin    San Carlos, US   36                   783
Kibardin, Vladimir      Palo Alto, US    4                    19
Lauterbach, Gary R      Los Altos, US    52                   1136
Lie, Sean               Los Altos, US    52                   1250
Morrison, Michael       Sunnyvale, US    84                   1679
