By the end of 2020, 48% of global CIOs will have deployed artificial intelligence, according to a Gartner survey. Yet businesses waiting for a major disruption in the artificial intelligence/machine learning landscape risk missing the smaller developments happening along the way.
The days when on-premises versus cloud was a hot topic of debate for enterprises are gone. Even conservative organizations are talking about open source and the cloud today. It is no wonder that cloud platforms are revamping their offerings to include artificial intelligence/machine learning services.
The amount of RAM and the number of CPUs are no longer the only ways to scale or speed up, as machine learning solutions become more demanding. More algorithms than ever are being optimized for specific hardware, whether wafer-scale engines, TPUs, or GPUs. This shift toward specialized hardware for artificial intelligence/machine learning problems will only accelerate. Organizations will limit CPUs to the most basic workloads, and generic compute infrastructure for machine learning will risk becoming obsolete. That alone is reason enough for organizations to switch to cloud platforms.
The rise of specialized hardware and chips will in turn drive incremental algorithm improvements that exploit that hardware. New hardware may make machine learning solutions practical that were previously considered too slow or outright impossible. Much of the open-source tooling that currently targets generic hardware will need to be rewritten to benefit from the newer chips. Recent examples of algorithmic improvements include Reformer, which optimizes the use of compute and memory, and Sideways, which speeds up deep learning training by parallelizing the training steps.
Many forecast a gradual shift in focus from data privacy in general toward the privacy implications of machine learning models. Much emphasis has been placed on what data we gather and how we use it. But machine learning models are not perfect black boxes: over time it can become possible to infer a model's inputs from its outputs, leading to privacy leakage. Challenges in model and data privacy will push organizations to embrace federated learning. Google released TensorFlow Privacy last year, a framework built on the principle of differential privacy, which adds noise to obscure individual inputs. With federated learning, a user's data never leaves their device: machine learning models with a small enough memory footprint run on the smartphone itself and learn from the data locally.
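The core idea behind differential privacy is easy to see with a toy example. Below is a minimal sketch, not TensorFlow Privacy itself: a count query over a dataset is perturbed with Laplace noise so that no single record's presence can be confidently inferred from the answer. The `private_count` helper and the sample records are illustrative assumptions.

```python
import numpy as np

def private_count(records, predicate, epsilon):
    """Return a differentially private count of records matching predicate."""
    true_count = sum(1 for r in records if predicate(r))
    # A count query has sensitivity 1: adding or removing one record changes
    # the answer by at most 1. Laplace noise with scale sensitivity / epsilon
    # obscures any individual's contribution; smaller epsilon = more privacy.
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

records = [{"age": a} for a in (23, 35, 41, 52, 67)]
noisy = private_count(records, lambda r: r["age"] > 40, epsilon=0.5)
```

Averaged over many queries the noisy answer is close to the true count (3 here), but any single answer is deniable, which is the trade-off differential privacy formalizes.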
In most cases, the reason for asking for a user's data was to personalize that user's experience. For instance, Gmail uses an individual user's typing behavior to provide autosuggest.
Today, organizations struggle to productionize models reliably and at scale. The people writing the models are not necessarily experts in deploying them with model security, safety, and performance in mind. At some point machine learning models will become an integral part of critical, mainstream applications, and that will inevitably attract attacks on the models themselves, similar to the denial-of-service attacks mainstream applications currently face.
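One basic mitigation against denial-of-service-style abuse of a model endpoint is per-client throttling. The sketch below is a hypothetical wrapper, not any particular serving framework's API: it caps how many predictions each client may request within a sliding time window before calling the underlying model.

```python
import time

class RateLimitedModel:
    """Illustrative wrapper that throttles prediction requests per client."""

    def __init__(self, model, max_requests, window_seconds):
        self.model = model            # any callable: features -> prediction
        self.max_requests = max_requests
        self.window = window_seconds
        self._log = {}                # client_id -> recent request timestamps

    def predict(self, client_id, features):
        now = time.monotonic()
        # Keep only timestamps still inside the sliding window.
        recent = [t for t in self._log.get(client_id, []) if now - t < self.window]
        if len(recent) >= self.max_requests:
            raise RuntimeError("rate limit exceeded")
        recent.append(now)
        self._log[client_id] = recent
        return self.model(features)
```

In practice this logic usually lives in an API gateway rather than the model server, but the principle is the same: treat the model as an attack surface and budget access to it.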