To make speedy local weather predictions, Google hopes to tap machine learning and artificial intelligence. In a paper and accompanying blog post, the tech giant described an AI system that uses satellite images to produce high-resolution, "nearly instantaneous" forecasts, with a resolution of roughly one kilometer and a latency of only 5-10 minutes. The researchers behind it say that "even at these early stages of development," it outperforms traditional models.
The system takes a physics-free, data-driven approach to weather modeling, meaning it learns to approximate atmospheric physics from examples alone rather than by incorporating prior knowledge. Underpinning it is a convolutional neural network that takes images of weather patterns as input and transforms them into new output images.
The Google researchers explained that a convolutional network is a sequence of layers, where each layer is a set of mathematical operations. In a U-Net, the layers are arranged in two phases: an encoding phase that progressively decreases the resolution of the images passing through it, followed by a decoding phase that expands the low-dimensional image representations back to high resolution.
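To make the encode/decode shape changes concrete, here is a minimal toy sketch (not the paper's actual architecture): a real U-Net uses learned convolutions and skip connections, but fixed average pooling and nearest-neighbour upsampling are enough to show how the encoder shrinks an image and the decoder expands it back.

```python
import numpy as np

def encode_step(x):
    """One toy encoder layer: halve height and width with 2x2 average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decode_step(x):
    """One toy decoder layer: double height and width by repeating pixels."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.rand(64, 64)          # toy single-channel weather image
z = encode_step(encode_step(x))     # encoding phase: 64x64 -> 32x32 -> 16x16
y = decode_step(decode_step(z))     # decoding phase: 16x16 -> 32x32 -> 64x64
print(z.shape, y.shape)             # (16, 16) (64, 64)
```

In the real network, each step would also apply learned filters and nonlinearities, but the resolution bookkeeping is the same.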
Inputs to the U-Net contain one channel per multispectral satellite image. In other words, if 10 satellite images were collected in an hour, and each of those images was taken at 10 wavelengths, the input would have 100 channels.
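The channel arithmetic above can be sketched as a simple reshape; the grid size (64x64) is an arbitrary assumption for illustration:

```python
import numpy as np

# 10 satellite snapshots over an hour, each measured at 10 wavelengths.
snapshots, wavelengths, h, w = 10, 10, 64, 64
images = np.zeros((snapshots, wavelengths, h, w))

# Flatten time and wavelength into one channel axis: 10 x 10 = 100 channels.
unet_input = images.reshape(snapshots * wavelengths, h, w)
print(unet_input.shape[0])  # 100
```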
Artificial Intelligence vs. Three Baselines
The engineering team trained a model and compared its performance against three baselines: an optical flow algorithm, which attempts to track moving objects through a sequence of images; a persistence model, in which each location is assumed to keep raining in the future at the same rate it is raining now; and the High-Resolution Rapid Refresh (HRRR) numerical forecast from the National Oceanic and Atmospheric Administration (specifically, its 1-hour total accumulated surface precipitation prediction).
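The persistence baseline is simple enough to sketch directly; the function and grid below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def persistence_forecast(rain_now, horizon_steps):
    """Persistence baseline: every location keeps raining at its last
    observed rate, so the forecast is the current field at every step."""
    return np.stack([rain_now] * horizon_steps)

rain_now = np.array([[0.0, 2.5],
                     [1.0, 0.0]])              # toy 2x2 rain-rate grid (mm/h)
forecast = persistence_forecast(rain_now, 3)   # 3 future time steps
print(forecast.shape)                          # (3, 2, 2)
```

Despite its simplicity, persistence is a standard yardstick in nowcasting, since rainfall fields change slowly over very short horizons.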
The researchers report that the quality of their system was generally superior to all three baselines. However, once the prediction horizon reached about 5 to 6 hours, HRRR began to outperform it. Even so, they note that HRRR has a computational latency of 1-3 hours, significantly longer than that of their system.
The researchers explained that the numerical model in HRRR can make better long-range predictions in part because it uses a full 3D physical model: cloud formation is harder to observe from 2D images, and convective processes are harder for machine learning methods to learn. They suggested, however, that combining the two systems, HRRR for long-term forecasts and the machine learning model for rapid predictions, would most likely produce better results overall.
The researchers leave applying machine learning directly to 3D observations as future work.
Google is not the only company tapping artificial intelligence to predict the weather and natural disasters. Last year, IBM launched a new forecasting system developed by The Weather Company, capable of providing local, "high-precision" forecasts across the globe. The Weather Company is a weather forecasting and information technology firm that IBM acquired in 2016.
Let’s see how far machine learning goes.