The semiconductor, and in particular the CPU, is the foundational technology of the digital age. It gave Silicon Valley its name, and it sits at the heart of the computing revolution that has transformed every facet of society over the past half-century.
Intel introduced the world's first microprocessor in 1971. Since then, the pace of improvement in computing capabilities has been relentless and breathtaking: in line with Moore's Law, computer chips today are many millions of times more powerful than they were fifty years ago.
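The "many millions of times" figure follows from simple arithmetic. As a rough illustration (the exact doubling period is an assumption here; Moore's Law is often stated as a doubling roughly every two years):

```python
# Rough sanity check of the "many millions of times" claim.
# Assumption: transistor counts double roughly every two years (Moore's Law).
years = 50
doubling_period = 2
doublings = years // doubling_period          # ~25 doublings over fifty years
improvement = 2 ** doublings
print(f"~{improvement:,}x improvement")        # ~33,554,432x, i.e. tens of millions
```

Twenty-five doublings yields roughly a 33-million-fold improvement, consistent with "many millions of times."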
Yet while processing power has skyrocketed over the decades, the underlying architecture of the computer chip had, until recently, remained remarkably static. Innovation in silicon mostly meant miniaturizing transistors in order to squeeze more of them onto integrated circuits. Companies like Intel and AMD thrived for decades by reliably improving CPU capabilities, in a process Clayton Christensen famously identified as sustaining innovation.
Today, that is changing dramatically. Artificial intelligence has ushered in a new golden age of semiconductor innovation. The unique demands and seemingly limitless opportunities of machine learning have spurred entrepreneurs, for the first time in decades, to revisit and rethink the most fundamental tenets of chip architecture.
Their goal is to design a new type of chip, purpose-built for artificial intelligence, that will power the next generation of computing. It is one of the largest market opportunities in all of hardware today.
For most of the history of computing, the prevailing chip architecture has been the CPU, or central processing unit. CPUs are ubiquitous today: they power your laptop, your mobile device, and most data centers.
The CPU's underlying architecture was conceived by the legendary John von Neumann back in 1945, and its basic design has remained virtually unchanged since then: most computers produced today are still von Neumann machines.
The CPU's dominance across so many use cases is a result of its flexibility: CPUs are general-purpose machines, capable of effectively carrying out any computation that software requires. But while versatility is the CPU's key advantage, today's leading artificial intelligence techniques demand a very particular, and very intensive, set of calculations.
Deep learning entails the iterative execution of millions or billions of relatively simple multiplication and addition steps. Grounded in linear algebra, deep learning is fundamentally trial-and-error based: matrices are multiplied, parameters are tweaked, and figures are summed over and over across the neural network as the model gradually optimizes itself.
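The loop described above can be made concrete. The following is a minimal sketch, not any particular framework's implementation: a toy two-layer network (all sizes, the learning rate, and the ReLU/mean-squared-error choices are illustrative assumptions) whose every training step is dominated by matrix multiplies, additions, and small parameter tweaks.

```python
import numpy as np

# Toy illustration: one "training step" is just matrix multiplies, sums,
# and small parameter adjustments, repeated over and over.
rng = np.random.default_rng(0)

X = rng.standard_normal((64, 100))         # a batch of 64 inputs, 100 features each
y = rng.standard_normal((64, 1))           # target values
W1 = rng.standard_normal((100, 32)) * 0.1  # layer-1 parameters
W2 = rng.standard_normal((32, 1)) * 0.1    # layer-2 parameters
lr = 0.01                                  # learning rate

for step in range(1000):                   # real models run millions of such steps
    # Forward pass: matrices are multiplied and summed.
    h = np.maximum(X @ W1, 0)              # hidden layer with ReLU
    pred = h @ W2
    err = pred - y

    # Backward pass: more matrix multiplies, then the parameters are tweaked.
    grad_W2 = h.T @ err / len(X)
    grad_h = (err @ W2.T) * (h > 0)
    grad_W1 = X.T @ grad_h / len(X)
    W1 -= lr * grad_W1                     # gradual trial-and-error optimization
    W2 -= lr * grad_W2

loss = float(np.mean(err ** 2))
print(f"final mean squared error: {loss:.4f}")
```

Every iteration is structurally identical: the same simple arithmetic applied to large arrays, which is exactly the workload profile the next paragraph draws hardware conclusions from.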
This computationally intensive, repetitive workflow has a few important implications for hardware architecture. Parallelization, the ability of a processor to carry out many calculations simultaneously rather than one at a time, becomes critical. Relatedly, because deep learning involves the continuous transformation of vast volumes of data, locating the chip's memory and computational cores as close together as possible yields massive speed and efficiency gains by reducing data movement.
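The contrast between one-at-a-time and bulk execution can be sketched in a few lines. This is an illustration only (the matrix sizes are arbitrary): both versions compute the same matrix product, but the first performs one scalar multiply-add at a time, while the second expresses the work as a single bulk operation that vectorized or parallel hardware can execute far more efficiently.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))

# Sequential: one scalar multiply-add at a time, the way a naive CPU loop works.
C_seq = np.zeros((64, 64))
for i in range(64):
    for j in range(64):
        for k in range(64):
            C_seq[i, j] += A[i, k] * B[k, j]

# Parallel-friendly: the same product as one bulk operation over data kept
# together in memory, which hardware can execute with wide parallelism.
C_par = A @ B

print(np.allclose(C_seq, C_par))  # same answer, very different execution profile
```

The mathematical result is identical; what differs is how much of the arithmetic can happen at once, and how often data has to be fetched and moved.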
CPUs are ill-equipped to support machine learning's distinctive demands: they process computations sequentially, one at a time, rather than in parallel.