Announcement of the Cerebras Processor: the Cerebras Wafer Scale Engine (WSE)
The Cerebras WSE is fabricated by TSMC on a 16 nm FinFET process. The Taiwanese manufacturer deserves real credit for pulling off the release of Cerebras: producing a chip of this size demanded exceptional skill and the solving of numerous problems, but, the developers assure, it was worth it. The Cerebras chip is effectively a supercomputer on a single die, with enormous throughput, low power consumption per operation, and massive parallelism. At the moment it is positioned as an ideal solution for machine learning, one that should allow researchers to start tackling problems of extreme complexity.
Each Cerebras WSE die contains 1.2 trillion transistors organized into 400,000 AI-optimized compute cores and 18 GB of local distributed SRAM. All of this is connected by a mesh network with a total throughput of 100 petabits per second, while the aggregate memory bandwidth reaches 9 PB/s. The memory hierarchy is single-level: there is no cache and no off-chip swapping, and access latencies are minimal. It is an architecture well suited to accelerating AI workloads. The bare numbers: compared with the largest contemporary GPUs, the Cerebras chip provides 3,000 times more on-chip memory and 10,000 times greater memory bandwidth.
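Some back-of-envelope arithmetic helps put these headline figures in perspective. The snippet below is purely illustrative, using only the numbers quoted above (decimal units assumed); it is not an official Cerebras calculation:

```python
# Illustrative arithmetic from the figures quoted in the article.
cores = 400_000                  # AI-optimized compute cores
sram_bytes = 18 * 10**9          # 18 GB of on-chip SRAM (decimal GB assumed)

# Distributed evenly, each core would hold roughly 45 KB of local SRAM.
sram_per_core_kb = sram_bytes / cores / 1e3
print(f"SRAM per core: ~{sram_per_core_kb:.0f} KB")

# The mesh figure is quoted in petaBITS/s, the memory figure in petaBYTES/s;
# converting to a common unit makes them comparable.
fabric_pb_per_s = 100e15 / 8 / 1e15   # 100 Pbit/s -> petabytes/s
print(f"Fabric payload: ~{fabric_pb_per_s:.1f} PB/s vs. 9 PB/s memory bandwidth")
```

The unit conversion matters: interconnect bandwidth is customarily quoted in bits per second and memory bandwidth in bytes per second, so the two headline numbers are closer to each other than they first appear.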
The Cerebras compute cores, called SLAC (Sparse Linear Algebra Cores), are fully programmable and can be optimized to work with any neural network. Moreover, the core architecture filters out data elements that are zero at the input. This frees compute resources from performing useless multiplications by zero, which for sparse workloads means faster computation and maximum energy efficiency. Thus, the developers claim, the Cerebras processor is hundreds or even thousands of times more efficient for machine learning, per unit of chip area and power, than current AI and machine-learning solutions.
Making a chip of similar size
Source: 3dnews.ru