Artificial Intelligence Can Design a Chip in About Six Hours

There is a preprint paper coauthored by Google artificial intelligence lead Jeff Dean, the Google chip implementation and infrastructure team, and Google Research. The paper describes a learning-based approach to chip design that improves over time, learning from experience as it goes.

As a result, it becomes better at generating placements for components it has not seen before. The authors claim the approach completes a design in under six hours on average, far faster than the weeks it takes human experts working in the loop.

While the work is not entirely novel, it advances the state of the art by suggesting that the placement of on-chip components can be largely automated. It builds on a technique proposed by Google engineers in a paper published in March. If it becomes publicly available, the researchers' technique could enable cash-strapped startups to develop their own chips for artificial intelligence and other specialized purposes. It could also shorten the chip design cycle, allowing hardware to adapt more efficiently to rapidly evolving research.

In an interview with VentureBeat late last year, Dean explained that existing design tools can help with some of the layout, but human placement and routing experts still work with those tools and iterate many, many times over. It is a multi-week process to go from the design you want to having it physically laid out on a chip with the right constraints in place, meeting wire length, power, and all the design rules for whatever fabrication process you are using. Dean said they can instead have a machine learning model that learns to place a chip itself.

How the Technique Works

The authors of the paper aim to place a netlist graph of logic gates, memory, and more onto a chip canvas, optimizing power, performance, and area (PPA) while adhering to constraints on placement density and routing congestion. The graphs range in size from millions to billions of nodes grouped in thousands of clusters, and evaluating the target metrics typically takes from hours to over a day.
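
To make the setup concrete, here is a minimal sketch of what such a placement problem can look like, not the paper's implementation: a handful of clustered netlist nodes placed on a grid canvas and scored with a proxy cost that combines wirelength (a common stand-in for power and performance) with a density penalty. All names, sizes, and thresholds below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    width: int
    height: int

# Hypothetical clustered netlist: three clusters plus the nets connecting them.
clusters = [Cluster("cpu", 4, 4), Cluster("sram", 3, 3), Cluster("io", 2, 2)]
nets = [("cpu", "sram"), ("cpu", "io")]

def proxy_cost(placement, clusters, nets, grid=(16, 16), max_density=0.6):
    """placement maps a cluster name to the (x, y) grid cell of its corner."""
    # Half-perimeter wirelength over all nets, a standard placement proxy.
    wirelength = 0.0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        wirelength += (max(xs) - min(xs)) + (max(ys) - min(ys))
    # Density penalty: total cluster area must stay below a fraction of the canvas.
    density = sum(c.width * c.height for c in clusters) / (grid[0] * grid[1])
    penalty = max(0.0, density - max_density) * 100.0
    return wirelength + penalty

print(proxy_cost({"cpu": (0, 0), "sram": (5, 0), "io": (0, 5)}, clusters, nets))
```

In a real flow the cost would come from commercial evaluation tools rather than a toy formula, which is why a single evaluation can take hours to over a day.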

The researchers built a framework that directs an agent, trained through reinforcement learning, to optimize chip placements. A policy model outputs a probability distribution over the available placement locations, while a value model estimates the expected reward for the current placement, given the netlist and the ID of the node currently being placed.
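
The sketch below illustrates that policy/value idea in PyTorch. It is not the paper's actual architecture: the feature vector is a placeholder standing in for the netlist, the current node ID, and the canvas state, and the layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class PlacementPolicyValue(nn.Module):
    def __init__(self, feature_dim=64, grid=(16, 16)):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU())
        self.policy_head = nn.Linear(128, grid[0] * grid[1])  # one logit per canvas cell
        self.value_head = nn.Linear(128, 1)                   # expected reward estimate

    def forward(self, features, mask):
        """mask is 1 for legal (empty) cells and 0 for occupied or forbidden ones."""
        h = self.backbone(features)
        logits = self.policy_head(h)
        logits = logits.masked_fill(mask == 0, float("-inf"))  # rule out illegal cells
        return torch.softmax(logits, dim=-1), self.value_head(h)

# Example: sample a placement cell for one node from the policy distribution.
model = PlacementPolicyValue()
features = torch.randn(1, 64)              # stand-in for netlist/node embedding
mask = torch.ones(1, 256)                  # here, all 256 grid cells are legal
probs, value = model(features, mask)
cell = torch.multinomial(probs, 1).item()  # index of the chosen grid cell
print(cell, value.item())
```

Masking occupied cells before the softmax is one simple way to keep the distribution restricted to locations that are actually available.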

The agent places components sequentially until it has completed the netlist, and it receives a reward only at the end. Components are sorted by descending size to guide the order in which the agent selects them: larger components are placed first, which reduces the chance that no valid placement remains for them later.
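
Here is an illustrative sketch of that sequential loop, with hypothetical helpers standing in for the learned policy and the end-of-episode reward; it is meant only to show the ordering and the delayed reward, not the paper's code.

```python
import random

def choose_cell(component, occupied, grid=(16, 16)):
    """Stand-in for the learned policy: pick any free grid cell."""
    free = [(x, y) for x in range(grid[0]) for y in range(grid[1])
            if (x, y) not in occupied]
    return random.choice(free)

def proxy_reward(placement):
    """Stand-in for the PPA-based reward computed once placement is complete."""
    return -len(placement)  # placeholder value

def place_netlist(components):
    # Larger components first, so big blocks are less likely to run out of room.
    ordered = sorted(components, key=lambda c: c["width"] * c["height"], reverse=True)
    placement, occupied = {}, set()
    for comp in ordered:
        cell = choose_cell(comp, occupied)
        placement[comp["name"]] = cell
        occupied.add(cell)
    return placement, proxy_reward(placement)  # reward arrives only at the end

components = [{"name": "cpu", "width": 4, "height": 4},
              {"name": "io", "width": 2, "height": 2},
              {"name": "sram", "width": 3, "height": 3}]
print(place_netlist(components))
```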

Training the agent required a data set of 10,000 chip placements, which the researchers built by first picking five different chip netlists.
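
Building on the place_netlist sketch above, the following hypothetical snippet shows how such a training set could be assembled: generate many placements for each of five netlists, so that five netlists times 2,000 placements each gives the 10,000 total. The netlist names and the per-netlist count are illustrative assumptions, not the paper's actual benchmarks.

```python
def build_dataset(netlists, placements_per_netlist=2000):
    dataset = []
    for name, comps in netlists.items():
        for seed in range(placements_per_netlist):
            random.seed(seed)  # vary the seed so each run yields a distinct placement
            placement, reward = place_netlist(comps)
            dataset.append({"netlist": name, "placement": placement, "reward": reward})
    return dataset

# Five hypothetical netlists, each reusing the example components from above.
netlists = {f"chip_{i}": components for i in range(5)}
dataset = build_dataset(netlists)
print(len(dataset))  # 5 x 2,000 = 10,000 placements
```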