Today, the University of Florida (UF) and Nvidia announced plans to build the fastest artificial intelligence supercomputer in academia. The partners will enhance UF's existing HiPerGator supercomputer with Nvidia's DGX SuperPod architecture. Nvidia expects the system to be up and running by early 2021, delivering 700 petaflops of AI performance (one petaflop is one quadrillion floating-point operations per second).
Some researchers in the artificial intelligence community believe that, in conjunction with reinforcement learning and other techniques, sufficiently capable computers will enable paradigm-shifting advances. Researchers from the Massachusetts Institute of Technology, Underwood International College, the University of Brasilia, and the MIT-IBM Watson AI Lab recently published a paper finding that deep learning improvements have been strongly reliant on increases in computing power. Similarly, OpenAI researchers released an analysis showing that from 2012 to 2018, the amount of computing power used in the largest artificial intelligence training runs grew more than 300,000 times, with a roughly 3.4-month doubling time, far exceeding the pace of Moore's law.
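To see how far this outpaces Moore's law, the cited figures can be checked with some back-of-the-envelope arithmetic. The growth factor and date range come from the OpenAI analysis mentioned above; the script itself is purely illustrative, and the exact fitted doubling time depends on the endpoint dates chosen.

```python
import math

# Figures from OpenAI's analysis cited above: compute used in the
# largest AI training runs grew more than 300,000x from 2012 to 2018.
growth_factor = 300_000
period_months = 6 * 12  # the 2012-2018 window, taken as ~72 months

# Doublings implied by the overall growth factor.
doublings = math.log2(growth_factor)       # ~18.2 doublings

# Implied doubling time; the exact fitted value depends on the
# endpoint dates (OpenAI's own regression gives ~3.4 months).
doubling_time = period_months / doublings  # ~4 months

# Moore's law doubles transistor counts roughly every 24 months,
# which over the same window yields only 2^(72/24) = 8x.
moores_law_factor = 2 ** (period_months / 24)

print(f"~{doublings:.1f} doublings, ~{doubling_time:.1f}-month doubling time")
print(f"Moore's law over the same period: ~{moores_law_factor:.0f}x")
```

A 300,000x increase over six years versus Moore's law's roughly 8x makes the gap concrete: compute for frontier training runs grew tens of thousands of times faster than transistor density.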
Nvidia and the University of Florida claim that the revamped HiPerGator will give students and faculty the tools to apply artificial intelligence across focused areas, including food insecurity, urban transportation, personalized medicine, data security, and the aging population. At University of Florida Health, artificial intelligence models are already being deployed to monitor, collect, and organize patient conditions in real time through a system known as DeepSOFA.
HiPerGator will be roughly 18 times as powerful as the University of Texas at Austin's Frontera, and it will be among the first systems to receive Nvidia's DGX A100. Each DGX A100 packs eight 7-nanometer Ampere-based A100 Tensor Core GPUs, 320 gigabytes of GPU memory, and the latest high-speed Mellanox HDR 200Gbps interconnects. Each A100 GPU contains 54 billion transistors, and according to Nvidia, a single DGX A100 system can deliver five petaflops of AI performance.
As part of the upgrade, HiPerGator will gain 140 DGX A100 systems powered by 1,120 Nvidia A100 Tensor Core GPUs, coupled with 15 kilometers of optical cable and four petabytes of storage from DDN. It will also benefit from Nvidia's suite of artificial intelligence application frameworks, which cover data analytics, recommendation systems, inference acceleration, and AI training.
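The headline figures are internally consistent, which a quick sanity check confirms. All numbers below come from Nvidia's announcement; the script is simple arithmetic.

```python
# Figures from Nvidia's announcement.
dgx_systems = 140
gpus_per_dgx = 8       # eight A100 Tensor Core GPUs per DGX A100
petaflops_per_dgx = 5  # AI performance per DGX A100 system

total_gpus = dgx_systems * gpus_per_dgx            # 1,120 A100 GPUs
total_petaflops = dgx_systems * petaflops_per_dgx  # 700 petaflops

print(f"{total_gpus} GPUs, {total_petaflops} petaflops")  # prints "1120 GPUs, 700 petaflops"
```

The totals line up with the rest of the announcement: 140 systems at eight GPUs each gives the stated 1,120 GPUs, and 140 systems at five petaflops each gives the 700-petaflop headline figure.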
Beyond HiPerGator, Nvidia said its partnership with the University of Florida will extend to ongoing collaboration and support in three principal artificial intelligence areas. First, the Nvidia Deep Learning Institute will work with the University of Florida to develop programming, curriculum, and coursework tailored to address the needs of, and encourage the interest of, teens and young adults in artificial intelligence and STEM (science, technology, engineering, and mathematics). Second, the company will establish an Nvidia Artificial Intelligence Technology Center at the University of Florida, where Nvidia employees and graduate fellows will partner to advance the field of artificial intelligence. Third, Nvidia product engineers and solution architects will team up with the University of Florida on the installation, operation, and optimization of the supercomputing resources on campus, including HiPerGator.
The University of Florida says this will lay the groundwork for the integration of artificial intelligence across all of its disciplines.