The "neural revolution" and its results
Last summer, hundreds of thousands of people downloaded Prisma, an app that turns any picture into a Van Gogh masterpiece. A couple of months later, they could listen to an album by the nonexistent band "Neural Defence", with songs written by a neural network and stylized after Egor Letov's works. Yet despite this recent boom of public interest, the neural revolution happened a lot earlier.
The basics of neural networks as a machine learning method were formulated as early as 1943, even before the term Artificial Intelligence (AI) was coined. The first model of a neuron (a neural network's cell) was proposed by Warren McCulloch and Walter Pitts, and fifteen years later, Frank Rosenblatt presented the simplest neural network, one that could already discern objects in two-dimensional space.
Yet the turning point came later: in 2007, scientists from the University of Toronto created machine learning algorithms for neural networks, and in 2012, researchers from the same university applied deep neural networks that could recognize objects in photos and videos with minimal error.
Now, no one is surprised when Facebook recognizes them in their friends' photos, and applications that can style a photograph after a Monet painting have become commonplace. Even in Yandex image search, the network is already better at telling a Siberian husky from an ordinary one than most of us are.
Neural networks processing an image. Credit: livejournal.com
How does it work?
Figuratively speaking, if one wants to teach a network to recognize a sunset in a photo, they have to "feed" it millions of photos with and without sunsets, labeled accordingly. The network will then learn an algorithm that represents photographs as vectors, where the vectors of "sunsets" are similar to each other and the other vectors are very different from them. The same principle works for other objects as well.
"To make the network give us these vectors, we have to train it. For that, it requires numerous labeled images, like "this is a bird" or "this is a dog", and there have to be thousands of them. Then, we tell the network to find an algorithm that will allow it to represent these images as numbers. As a result, we get numbers that are similar within particular groups of images (birds, for instance) but different from the others (dogs). This lets the network discern these groups from then on. All in all, neural networks can be trained for many different purposes. They can be taught to tell different people apart, or to understand whether a bridge is lowered or not. It all depends on the data we give them," explains Ludmila Kornilova, a Bachelor's student at the Department of Engineering and Computer Graphics.
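The idea described in the quote can be sketched in a few lines of code: once a trained network represents images as vectors, images from the same group score high on a similarity measure such as cosine similarity, while images from different groups score low. The vectors below are made up purely for illustration; a real network would produce vectors with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: values near 1.0 mean
    # "pointing the same way" (similar), values near 0 mean unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "image vectors" a trained network might produce: the two birds
# cluster together, the dog sits elsewhere (values are invented).
bird_1 = [0.9, 0.1, 0.2]
bird_2 = [0.8, 0.2, 0.1]
dog_1  = [0.1, 0.9, 0.8]

print(cosine_similarity(bird_1, bird_2))  # high: same group
print(cosine_similarity(bird_1, dog_1))   # low: different groups
```

The training process the quote describes is exactly the search for a mapping from images to such vectors; the comparison step afterwards is as simple as shown here.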
She got the chance to practice creating her own algorithms during a hackathon on machine learning held by the Data Science Study Club. Its participants used the TensorFlow machine learning library; Ludmila's team, for instance, worked with Microsoft's ResNet neural network. They decided to test its abilities in a fun way: by teaching it to choose an appropriate Pokémon for images of different people.
"We did it for fun. I think this whole Pokémon thing is a thing of the past, yet it is still easy to remember, so we thought we would enjoy working on this project," shares Ludmila. "As training a neural network is a resource-intensive process that we couldn't complete during the hackathon, we decided to take an already trained network. ResNet specializes in discerning objects from each other. The objects can be anything: cups, tables, whatever. Yet the network was definitely not trained to discern Pokémon or define their likeness to people."
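The team's actual pipeline isn't published in the article, but the matching step can be sketched under one assumption: each photo (of a person or a Pokémon) has already been turned into a feature vector, for instance by running it through a pretrained ResNet and taking the activations of a late layer. The names and vectors below are hypothetical stand-ins for those features.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors; in the real project these would come
# from a pretrained network, not be written by hand.
pokemon_features = {
    "Pikachu":  [0.9, 0.2, 0.1],
    "Squirtle": [0.1, 0.8, 0.3],
    "Snorlax":  [0.2, 0.3, 0.9],
}

def closest_pokemon(person_vector):
    # Nearest neighbour by cosine similarity: pick the Pokémon whose
    # features point most nearly the same way as the person's.
    return max(pokemon_features,
               key=lambda name: cosine_similarity(person_vector,
                                                  pokemon_features[name]))

print(closest_pokemon([0.85, 0.25, 0.15]))  # → Pikachu
```

This also explains the "spiral" effect Ludmila mentions below: whatever visual traits dominate the feature vectors, spiral shapes included, end up driving the match.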
ITMO University. Ludmila Kornilova
How does the network understand similarities and differences? Surely, it works differently for neural networks than for humans. For them, things like the shape of a nose or the color of one's eyes are abstract notions, so they look for other features that help tell one object from another. For the network to judge portrait likeness, one would have to add algorithms for finding faces in a photo and train the network to discern facial features; yet Ludmila's team decided not to go into such detail and let the network think for itself.
"We didn't use algorithms for finding a face in a photo, yet the network still found similarities. Here's a fun example: if there was anything spiral in a photo, the program chose a Pokémon with something spiral on it. It also picked up on common shapes and, for instance, a specific turn of the head," adds the student.
You can actually try the program yourself and find your Pokémon at http://olimp-union.com/pokemon. Yet Ludmila does not plan on continuing the experiment: she is going to study computer science and machine learning further and participate in an exchange program. She is also working on several more projects, one of which is a service based on vector representations of words.
"As of now, I am taking a Stanford course on machine learning. I have a project in mind that will use vector representations of words. This is a different field of machine learning, one that deals with words rather than images. What is the project's idea? There are several algorithms that allow us to obtain vectors of statements and compare them with vectors of concepts like "right" or "wrong". Thus, we can see how well a neural network discerns these concepts and whether it will be able to tell that "to kill a kitten" is "worse" than "to buy an ice cream". Using this as a basis, I want to create a mock service that helps make decisions, where a neural network recommends what you should do," shares Ludmila Kornilova.
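One common way to get a "vector of a statement", which may or may not be what Ludmila has in mind, is to average the vectors of its words and then compare the result with concept vectors. The word vectors below are invented for illustration; real ones would come from a model such as word2vec or GloVe trained on a large corpus.

```python
import math

# Made-up two-dimensional word vectors; real embeddings have hundreds
# of dimensions and are learned from text, not written by hand.
word_vectors = {
    "kill":      [-0.9, 0.1],
    "kitten":    [ 0.2, 0.8],
    "buy":       [ 0.4, 0.3],
    "ice-cream": [ 0.6, 0.5],
    "right":     [ 0.9, 0.4],
    "wrong":     [-0.9, 0.2],
}

def statement_vector(words):
    # A simple statement vector: the average of its word vectors.
    dims = len(word_vectors["right"])
    return [sum(word_vectors[w][i] for w in words) / len(words)
            for i in range(dims)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def badness(words):
    # Positive when the statement sits closer to "wrong" than to "right".
    v = statement_vector(words)
    return (cosine_similarity(v, word_vectors["wrong"])
            - cosine_similarity(v, word_vectors["right"]))

print(badness(["kill", "kitten"]) > badness(["buy", "ice-cream"]))  # True
```

With these toy vectors the comparison comes out the way the quote hopes, but only because the numbers were chosen that way; whether real embeddings encode "worse" this cleanly is exactly what the project would have to test.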
Networks already know what you want
Then again, technologies for generating content (texts, images, music), smart news feeds, and technologies that offer you goods before you even realize you want them: all of this is already here. And entertainment apps and services are only the tip of the iceberg. Among the promising fields this technology can be applied in are medicine, robotics, transport, and others.
For example, Manuel Mazzara and Leonard Johard from Innopolis University are already participating in the BioDynaMo project. With support from Intel and CERN, they plan to create a prototype that can simulate the cerebral cortex on a large scale, which they intend to use to increase the efficiency of experiments that would otherwise require a living human brain. The Russian company Youth Laboratories is working on the RYNKL service, which helps define how aging affects skin and how well different medications fight this effect.
In addition, data and inventions by the giants of the machine learning industry are becoming more and more open: on February 15th, IBM opened access to the main machine learning component used by its Watson supercomputer, namely the IBM Machine Learning platform, created to reduce the complexity of developing and deploying specialized analytical models. This will allow other organizations to adapt the system's capabilities to their own needs.