Cerebras is an AI processor of incredible size and power

The Cerebras processor, the Cerebras Wafer Scale Engine (WSE), was announced at the annual Hot Chips 31 conference. Looking at this silicon monster, the surprise is not so much that it could be shown in the flesh as the boldness of the idea and the work of the developers, who dared to build a die with an area of 46,225 square millimeters and sides of 21.5 cm. A whole 300-mm wafer is required to manufacture a single processor; the slightest mistake scraps the entire chip, and the cost of such a failure is hard to imagine.

Cerebras WSE is produced by TSMC on its 16-nm FinFET process. The Taiwanese manufacturer also deserves a monument for the release of Cerebras: producing such a chip required the highest skill and the solving of many problems, but it was worth it, the developers assure. The Cerebras chip is effectively a supercomputer on a single die, with enormous throughput, minimal power consumption, and massive parallelism. At the moment it is an ideal solution for machine learning, one that will let researchers start tackling problems of extreme complexity.

Each Cerebras WSE die contains 1.2 trillion transistors organized into 400,000 AI-optimized compute cores and 18 GB of local distributed SRAM, all connected by a mesh network with a total throughput of 100 petabits per second. Memory bandwidth reaches 9 PB/s. The memory hierarchy is single-level: there is no cache and no coherence overhead, and access latencies are minimal. It is an ideal architecture for accelerating AI workloads. The bare numbers: compared to the latest graphics processors, the Cerebras chip provides 3,000 times more on-chip memory and 10,000 times more memory bandwidth.

Cerebras compute cores, called SLAC (Sparse Linear Algebra Cores), are fully programmable and can be optimized to work with any neural network. Moreover, the core architecture filters out data represented by zeros at the input. This frees compute resources from performing idle multiplications by zero, which for sparse workloads means faster calculations and maximum energy efficiency. As a result, the developers claim, the Cerebras processor is hundreds or even thousands of times more efficient for machine learning, per unit of chip area and power, than current AI and machine-learning solutions.
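The idea of harvesting sparsity can be illustrated with a minimal sketch (this is not Cerebras code; SLAC cores filter zero operands in hardware, while the snippet below merely counts the multiplications a naive loop would waste on zeros):

```python
def sparse_dot(a, b):
    """Dot product that skips multiply-by-zero, counting useful work."""
    total, useful = 0.0, 0
    for x, y in zip(a, b):
        if x != 0 and y != 0:  # skip zero operands: no work is scheduled for them
            total += x * y
            useful += 1
    return total, useful

# Activation vectors after a ReLU are often mostly zeros
acts = [0.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.5]
weights = [0.1] * len(acts)

value, mults = sparse_dot(acts, weights)
print(value, f"({mults}/{len(acts)} multiplies performed)")
```

Here only 3 of the 8 element pairs trigger a multiplication, mirroring how a zero-filtering core spends cycles only on non-zero data.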

Making a chip of this size demanded many unique solutions. It even had to be packaged almost by hand. There were problems delivering power to the die and cooling it: heat removal proved possible only with liquid, organized as zoned delivery with vertical circulation. Nevertheless, all the problems were solved and the chip came out working. It will be interesting to learn about its practical applications.

Source: 3dnews.ru
