Artificial Intelligence Models Might Learn Without Any Data


Typically, machine learning requires tons of examples. You need to show an artificial intelligence model thousands of images of horses before it can recognize a horse. That is part of why the technology is so computationally expensive, and it is quite different from human learning. A child often needs to see just a few examples of an object (sometimes only one) before being able to recognize it for life.

In fact, children sometimes do not need any examples at all to identify something. If they have seen photos of a rhino and a horse, and you tell them a unicorn is something in between, they can recognize the mythical creature the first time they see it in a picture book.

A new paper from the University of Waterloo in Ontario suggests that artificial intelligence models should be able to do this too, through a process the researchers call LO-shot, or 'less than one'-shot, learning. In LO-shot learning, the model accurately recognizes more objects than the number of examples it was trained on. That could be a big deal for a field that has become increasingly expensive and inaccessible as the data sets it relies on grow ever larger.

The researchers first demonstrated the idea using MNIST, a popular computer-vision data set that contains sixty thousand training images of handwritten digits from zero to nine and is commonly used to test new ideas in the field.


In a previous paper, MIT researchers had introduced a technique for 'distilling' giant data sets into tiny ones. As a proof of concept, they compressed MNIST down to only ten images. These were not pictures selected from the original data set; rather, they were carefully engineered and optimized to contain an amount of information equivalent to the full set. As a result, an artificial intelligence model trained exclusively on the ten images could achieve nearly the same accuracy as one trained on all of MNIST's images.
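The core idea, that a handful of well-chosen synthetic points can stand in for thousands of real ones, can be illustrated with a toy sketch. This is not the MIT paper's optimization-based distillation; it uses class means as a crude stand-in for the engineered images, on made-up Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2,000 points per class from two well-separated Gaussians
# (a hypothetical stand-in for a large image data set).
X0 = rng.normal(loc=-2.0, size=(2000, 2))
X1 = rng.normal(loc=+2.0, size=(2000, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 2000 + [1] * 2000)

# "Distill" each class down to a single point (its mean) -- a much
# simpler substitute for the paper's carefully optimized images.
distilled = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

# A 1-nearest-neighbor classifier over just the two distilled points
# recovers almost all of the accuracy of training on all 4,000 points.
pred = np.argmin(((X[:, None, :] - distilled[None]) ** 2).sum(-1), axis=1)
print(round((pred == y).mean(), 3))
```

On this easy toy problem, two distilled points already classify the full data set almost perfectly; the real contribution of distillation is making this work on hard data like MNIST.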

The Waterloo researchers wanted to push the distillation process further. If it was possible to shrink sixty thousand images down to ten, why not squeeze them into five? The trick, they realized, was to create images that blend multiple digits together and then feed them into an artificial intelligence model with 'soft', or hybrid, labels.
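The power of soft labels is that fewer examples than classes can still carve out a decision region for every class. Here is a minimal NumPy sketch of that effect with a distance-weighted soft-label nearest-neighbor rule (a simplified version of the kind of classifier the paper analyzes; the feature values and label mixtures are invented for illustration):

```python
import numpy as np

# Two prototype examples on a 1-D feature axis, each with a soft label
# spread over THREE classes (hypothetical values).
protos = np.array([0.0, 1.0])
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # prototype at x=0: mostly class 0, partly class 1
    [0.0, 0.4, 0.6],   # prototype at x=1: mostly class 2, partly class 1
])

def predict(x):
    # Weight each prototype's soft label by inverse distance, then pick
    # the class with the largest combined probability mass.
    d = np.abs(protos - x) + 1e-9        # small epsilon avoids div-by-zero
    w = 1.0 / d
    w = w / w.sum()
    return int(np.argmax(w @ soft_labels))

# Two training examples, three distinct predicted classes:
print(predict(0.0), predict(0.5), predict(1.0))  # → 0 1 2
```

Near either prototype its dominant class wins, but halfway between them the shared class 1 mass (0.4 + 0.4) outweighs both, so a third class emerges from only two examples.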

That is how an artificial intelligence model could learn more classes than it has examples. So far, however, the researchers have only analyzed the theoretical side of the idea; we will have to wait and see how successful the approach proves in practice.