EnCharge’s Analog AI Chip Promises Low Power and Precision

Naveen Verma’s lab at Princeton University is like a museum of all the ways engineers have tried to make AI ultra-efficient using analog phenomena instead of digital computing. At one bench lies one of the most energy-efficient neural-network computers ever made. At another, you’ll find a resistive-memory chip that can compute the largest matrix of numbers of any analog AI system yet.
Neither has a commercial future, according to Verma. Less charitably, this part of his lab is a cemetery.
Analog AI has captured the imagination of chip architects for years. It combines two key concepts that should make machine learning vastly less energy intensive. First, it limits the costly movement of bits between memory chips and processors. Second, instead of the 1s and 0s of logic, it uses the physics of current flow to efficiently perform machine learning’s key computation.
As attractive as the idea has been, the many analog AI schemes have not delivered in a way that could take a bite out of AI’s energy appetite. Verma would know. He has tried them all.
But when IEEE Spectrum visited a year ago, there was a chip at the back of Verma’s lab that represents some hope for analog AI, and for the energy-efficient computing needed to make AI useful and ubiquitous. Instead of calculating with current, the chip sums up charge. That may seem like an inconsequential difference, but it could be the key to overcoming the noise that hinders every other analog AI scheme.
This week, Verma’s startup EnCharge AI unveiled the first chip based on this new architecture, the EN100. The startup claims the chip handles a variety of AI workloads with performance per watt up to 20 times better than competing chips. It is designed into a single-processor card that delivers 200 trillion operations per second at 8.25 watts, aimed at conserving battery life in AI-capable laptops. On top of that, a 4-chip card targeting AI workstations delivers 1,000 trillion operations per second.
Current and Coincidence
In machine learning, “it turns out, by dumb luck, the main operation we’re doing is matrix multiplication,” says Verma. That’s basically taking an array of numbers, multiplying it by another array, and adding up the results of all those multiplications. Early on, engineers noticed a coincidence: Two fundamental rules of electrical engineering can do exactly that operation. Ohm’s Law says that you get current by multiplying voltage and conductance. Kirchhoff’s Current Law says that if you have a bunch of currents coming into a point from a bunch of wires, the sum of those currents is what leaves that point. So, basically, each of a set of input voltages pushes current through a resistance (conductance is the inverse of resistance), multiplying the voltage value, and all those currents add up to produce a single value. Math, done.
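To see the coincidence concretely, here is a minimal Python sketch of one idealized analog multiply-accumulate: each input voltage is scaled by a stored conductance (Ohm’s Law), and the resulting currents sum on a shared wire (Kirchhoff’s Current Law). The function and variable names are illustrative only, not anything from EnCharge’s design.

```python
# Idealized analog multiply-accumulate on one output wire of a crossbar.
# Ohm's Law:                i = g * v       (current = conductance x voltage)
# Kirchhoff's Current Law:  i_out = sum(i)  (currents into a node add up)

def analog_dot_product(voltages, conductances):
    """Dot product computed 'for free' by circuit physics (idealized)."""
    currents = [g * v for g, v in zip(conductances, voltages)]  # per-cell Ohm's Law
    return sum(currents)  # the summation happens on the wire, per Kirchhoff

# Inputs encoded as volts, weights as conductances in siemens.
inputs = [0.2, 0.5, 0.1]
weights = [1.0, 0.5, 2.0]
print(analog_dot_product(inputs, weights))  # 0.65, the multiply-accumulate result
```

Real crossbars handle negative weights with differential pairs of cells, but the arithmetic is the same.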
Sound good? Well, it gets better. Much of the data that makes up a neural network consists of “weights,” the values by which inputs are multiplied. Moving that data from memory into a processor’s logic to do the work is responsible for a large fraction of the energy that graphics processing units expend. Instead, in most analog AI schemes, the weights are stored in one of several types of nonvolatile memory as a conductance value (the resistances above). Because the weight data is already where the computation happens, it doesn’t have to be moved as much, saving a pile of energy.
The combination of free math and stationary data promises calculations that need just thousandths of a trillionth of a joule of energy. Unfortunately, that’s nowhere near what analog AI efforts have been delivering.
The Trouble With Current
The fundamental problem with any kind of analog computing has always been the signal-to-noise ratio. And analog AI has it by the truckload. The signal, in this case the sum of all those multiplications, tends to be overwhelmed by the many possible sources of noise.
“The problem is that semiconductor devices are messy things,” says Verma. Say you’ve got an analog neural network where the weights are stored as conductances in individual RRAM cells. Those weight values are stored by setting a relatively high voltage across the RRAM cell for a defined period of time. The trouble is that you could set exactly the same voltage on two cells for the same amount of time, and those two cells would wind up with slightly different conductance values. Worse still, those conductance values may drift with temperature.
The differences may be small, but recall that the operation adds up many multiplications, so the noise gets magnified. Worse, the resulting current is then turned into a voltage that becomes the input to the next layer of the neural network, a step that adds still more noise.
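A toy simulation, a rough statistical model rather than real device physics, shows why the summation hurts: give every stored weight a small random programming error and watch the spread of the output error grow with the length of the dot product.

```python
import random
import statistics

def noisy_dot(values, weights, sigma=0.01):
    """Dot product in which each stored weight carries ~1% random error,
    standing in for cell-to-cell conductance variation (toy model only)."""
    return sum(v * w * (1 + random.gauss(0, sigma)) for v, w in zip(values, weights))

random.seed(0)
for n in (16, 256, 4096):
    errors = [noisy_dot([1.0] * n, [1.0] * n) - n for _ in range(200)]
    print(f"n={n:4d}  output-error spread: {statistics.stdev(errors):.3f}")
# The absolute error grows roughly as sqrt(n): the more terms summed on one
# analog output, the larger the accumulated noise.
```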
Researchers have attacked this problem from both the computer science perspective and the device physics one. In the hope of compensating for the noise, some have invented ways to bake knowledge of the devices’ physical quirks into their neural network models. Others have focused on making devices that behave as predictably as possible. IBM, which has done extensive research in this area, does both.
Such techniques are competitive, if not yet commercially successful, in smaller systems: chips meant to provide low-power machine learning to devices at the edges of Internet of Things networks. Early entrant Mythic AI has produced more than one generation of its analog AI chip, but it is competing in a field where low-power digital chips succeed.
The EN100 computer card is built on EnCharge’s new analog AI chip architecture. EnCharge AI
EnCharge’s solution strips out the noise by measuring the amount of charge, instead of the flow of charge, in machine learning’s multiply-and-accumulate mantra. In traditional analog AI, multiplication depends on the relationship among voltage, conductance, and current. In this new scheme, it depends on the relationship among voltage, capacitance, and charge, where charge is basically capacitance times voltage.
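Written as equations (this restates the physics in the article, not any EnCharge-specific formula), the two schemes compute the same weighted sum through different device laws:

```latex
\underbrace{I_{\mathrm{out}} = \sum_i G_i V_i}_{\text{current domain: conductances } G_i}
\qquad \text{versus} \qquad
\underbrace{Q_{\mathrm{out}} = \sum_i C_i V_i}_{\text{charge domain: capacitances } C_i}
```

The weighted sum is identical; what changes is whether each weight lives in a conductance, which is hard to program precisely, or in a capacitance, which is fixed by geometry.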
Why is that difference important? It comes down to the component doing the multiplication. Instead of using some finicky, vulnerable device like RRAM, EnCharge uses capacitors.
A capacitor is basically two conductors sandwiching an insulator. A voltage difference between the conductors causes charge to accumulate on one of them. The key thing about them for the purpose of machine learning is that their value, the capacitance, is determined by their size. (More conductor area, or less space between the conductors, means more capacitance.)
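That size dependence is just the textbook parallel-plate relation (standard physics, not specific to the article):

```latex
C = \varepsilon \, \frac{A}{d}
% capacitance C rises with conductor area A and falls with separation d;
% \varepsilon is the permittivity of the insulator between the conductors
```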
“The only thing they depend on is geometry, basically the space between wires,” says Verma. “And that’s the one thing you can control very, very well in CMOS technologies.” EnCharge builds an array of precisely valued capacitors in the layers of copper interconnect above the silicon of its processors.
The data that makes up most of a neural network model, the weights, is stored in an array of digital memory cells, each connected to a capacitor. The data the neural network is analyzing is then multiplied by the weight bits using simple logic built into each cell, and the results are stored as charge on the capacitors. Then the array switches into a mode where all the charges from the multiplication results accumulate, and the result is digitized.
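A minimal Python sketch of that sequence, with invented names and unit-sized components, might look like the following. It mimics the described steps (a simple logic multiply in each cell, results held as charge on capacitors, charge accumulation, then digitization) without modeling any real EnCharge circuit.

```python
# Toy model of a charge-domain multiply-accumulate, following the steps
# described above. Unit capacitors and 1-bit weights/inputs for simplicity.

C_UNIT = 1.0  # every capacitor has the same geometry-defined capacitance

def cell_multiply(weight_bit: int, input_bit: int, v_high: float = 1.0) -> float:
    """In-cell logic: an AND of weight and input decides whether the cell's
    capacitor is charged to v_high or left at 0. Returns stored charge Q = C*V."""
    v = v_high if (weight_bit & input_bit) else 0.0
    return C_UNIT * v

def accumulate_and_digitize(charges: list[float], levels: int = 256) -> int:
    """Charge sharing sums all cell charges onto one node; an ADC then
    quantizes the resulting voltage (a very coarse stand-in for the real thing)."""
    total_c = C_UNIT * len(charges)
    v_out = sum(charges) / total_c          # shared-node voltage
    return round(v_out * (levels - 1))      # digitized result

weights = [1, 0, 1, 1, 0, 1, 0, 1]
inputs  = [1, 1, 1, 0, 0, 1, 1, 1]
charges = [cell_multiply(w, x) for w, x in zip(weights, inputs)]
print(accumulate_and_digitize(charges))  # proportional to the count of matching 1s
```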
While the initial invention, which dates back to 2017, was a big moment for Verma’s lab, he says the core concept is quite old. “It’s called switched-capacitor operation; it turns out we’ve been doing it for decades,” he says. It is used, for example, in high-precision analog-to-digital converters. “Our innovation was figuring out how you can use it in an architecture that does in-memory computing.”
The Race
Verma’s lab and EnCharge spent years proving that the technology was programmable and scalable, and co-optimizing it with an architecture and software stack suited to AI needs that are vastly different from what they were in 2017. The resulting products are with early-access developers now, and the company, which recently raised $100 million from Samsung Venture, Foxconn, and others, is offering another round of early-access collaboration.
But EnCharge is entering a competitive field, and among the competitors is the big kahuna, Nvidia. At its big developer event in March, GTC, Nvidia announced plans for a PC product built around its GB10 CPU-GPU combination and a workstation built around the upcoming GB300.
And there will be plenty of competition in the low-power space. Some competitors even use a form of compute-in-memory. D-Matrix and Axelera, for example, took up part of analog AI’s promise, embedding memory in computing, but do everything digitally. Each developed custom SRAM memory cells that both store and multiply, and carry out the accumulation digitally as well. There is even at least one more traditional analog AI startup in the mix.
Verma is, unsurprisingly, optimistic. “We hope this will radically expand what you can do with AI,” he said in a statement.