OpenAI’s analysis highlights the major factors driving innovation in artificial intelligence today: data, algorithmic advances and computing power. The non-profit AI research company’s latest analysis shows that the amount of computing power used to train the largest AI models has grown dramatically, doubling every 3.5 months, compared with Moore’s Law’s 18-month doubling period. The analysis reveals that since 2012 this metric has grown by a factor of 300,000, driving key progress in applications such as speech recognition, computer vision and pattern recognition, among others.
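As a quick sanity check (not part of the original analysis), the two headline figures can be reconciled with a few lines of arithmetic: a 300,000-fold increase corresponds to roughly 18 doublings, which at one doubling every 3.5 months spans a little over five years, matching the 2012 starting point of the trend.

```python
import math

# OpenAI's headline figures: ~300,000x growth in training compute
# since 2012, with a ~3.5-month doubling time.
growth_factor = 300_000
doubling_time_months = 3.5

# Number of doublings implied by the overall growth factor.
doublings = math.log2(growth_factor)

# Time span those doublings imply at the stated doubling period.
span_years = doublings * doubling_time_months / 12

print(f"{doublings:.1f} doublings over {span_years:.1f} years")
```

The result, roughly 18 doublings over about 5.3 years, is consistent with the period the analysis covers.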
Another key insight from the analysis is that even though algorithmic innovation and data are difficult to track, computing power can be quantified, and therefore offers a useful measure of AI progress attributable to compute. The trend represents an increase by roughly a factor of 10 each year. This growth has largely been driven by custom hardware, GPUs and TPUs, which allows more operations to be performed per second for a given price.
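The “roughly a factor of 10 each year” figure follows directly from the 3.5-month doubling time quoted earlier; a minimal check (my own arithmetic, not from the analysis):

```python
# A 3.5-month doubling time implies 12 / 3.5 ≈ 3.4 doublings per year,
# i.e. an annual growth factor of about 2 ** 3.4 ≈ 10.8 — consistent
# with the "roughly a factor of 10 each year" figure in the analysis.
doublings_per_year = 12 / 3.5
annual_factor = 2 ** doublings_per_year

print(f"annual growth factor ≈ {annual_factor:.1f}")
```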
AI Computing Boom Will Turn Into Three Way Race
This computing boom has also led to many deep learning innovations, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). The next wave of progress will come from Generative Adversarial Networks (GANs) and Reinforcement Learning, with some help from Question Answering Machines (QAMs) like IBM Watson. As Nitin Srivastava mentioned in an earlier interview with Analytics India Magazine, the AI computing race will gradually turn into a race of three:
- High Performance Computing (HPC)
- Neuromorphic Computing (NC)
- Quantum Computing (QC)
Meanwhile, chip maker Intel is betting big on probabilistic computing as a major component of AI, one that would allow future systems to comprehend and compute with the uncertainties inherent in natural data, and would let researchers build computers capable of understanding, prediction and decision-making. As part of this research, the leading chipmaker established the Intel Strategic Research Alliance for Probabilistic Computing to foster research partnerships with academia and the startup community and to bring innovations from the lab to the real world. The core areas the company wants to address are benchmark applications, adversarial attack mitigation, probabilistic frameworks, and software and hardware optimisation.
OpenAI’s analysis notes that cost will eventually limit the parallelism side of the trend, while physics will limit the chip-efficiency side. The AI research organisation further emphasises that the hardware used for today’s large training runs costs in the single-digit millions of dollars to purchase. However, the bulk of computing is still spent on inference (deployment) rather than training, meaning organisations can afford to purchase larger fleets of chips for training.
AI Chip Explosion
Today, every AI hardware startup and chip company is working on optimising high performance computing; the path is usually to stick to deep neural net architectures and make them faster and easier to access. There may also be benefits from simply reconfiguring hardware to perform the same number of operations at a reduced cost.
While Intel, Nvidia, and other traditional chip makers are working to capitalise on the new demand for GPUs, others like Google and Microsoft are busy developing proprietary chips of their own to make their deep learning platforms a little faster. Google’s TensorFlow platform has emerged as the most powerful general-purpose solution, backed by its proprietary chip, the TPU. Meanwhile, Microsoft is touting non-proprietary FPGAs, while AI-focused hardware startups are working to make AI operations smoother. A case in point is California-based SambaNova Systems, which aims to power a new generation of computing by creating a new platform.
According to reports, this startup believes there is still room for disruption even though NVIDIA’s GPUs have become the de facto standard for deep learning applications in the industry. The company, which raised $56 million in Series A funding, wants to build a new generation of hardware that can work in any AI-focused device, from a chip powering self-driving technology to a server, news reports indicate. Other startups operating in a similar area are Graphcore and China’s Horizon Robotics, which are also pouring investment into hardware and giving stiff competition to GPUs, the backbone of all compute-intensive applications for AI-related technologies.
Practically every large company, from Facebook to Baidu, has invested in GPUs to fast-track work on deep learning applications and train complex models. In terms of power consumption, GPUs are pegged to be 10 times more efficient than CPUs, and NVIDIA claims GPUs are also driving energy efficiency across the computing industry.
The post Here’s How Increasing Computing Power Has Kick-Started AI Innovation Boom appeared first on Analytics India Magazine.