What's a neural network like?
One can compare a neural network to a child: born with a completely blank memory, he learns gradually. When learning to walk, a child watches adults and imitates them. At first he falls a lot, but as long as he has a reference, he can understand what he is doing wrong, so he keeps trying. Next time he will probably try a different approach, and eventually he learns to walk without falling. His walk is still clumsy, though, so he keeps practicing. In the end, he works out the "ideal" way to walk and sticks to it.
A neural network does much the same, with the slight difference that it is a program based on algorithms. It also does not start in a completely blank state: it has some initial data to work with, which can be compared to the environment a child learns from. Each node of the network is like a box with many entrances and only one exit. Data comes in through the entrances, gets processed, and the result "exits". There can be a great many such boxes, and each result becomes new input for the next set of boxes, or layer, as they are called, until only one final result remains. In essence, that is how the data is arranged.
But what is it all for, and who defines the properties of the result? Each input value has a particular "weight": in essence, a programmer defines how important a particular piece of data is with regard to the intended result. Each box also has its own rule for turning its inputs into an output, which is essential for arriving at a single final result (see the sketch below). A neural network thus resembles a web, with information flowing from the edges to the center.
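As a rough illustration of these "boxes" and layers, here is a minimal sketch in Python. The particular weights, the bias, and the choice of activation rule are assumptions made for the example, not a description of any specific network.

```python
# A minimal sketch of one "box" (neuron): several weighted inputs, one output.
# The weights and the activation rule below are illustrative assumptions.

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs passed through a simple activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)          # ReLU: one common choice of "exit" rule

def layer(inputs, weight_rows):
    """Each row of weights defines one box; its output feeds the next layer."""
    return [neuron(inputs, row) for row in weight_rows]

# Two layers of "boxes": the outputs of the first become inputs of the second.
hidden = layer([0.5, -1.2, 3.0], [[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]])
result = layer(hidden, [[0.6, -0.9]])    # a single final result
print(result)
```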
So who decides whether the result is correct? The neural network has reference examples it tries to reproduce. Suppose the network has to recognize a letter in a digital image: it analyzes the letter "A" but outputs "L". When such errors occur, the network rolls back and makes mathematical corrections to every preceding layer so as to arrive at the correct answer; a rough sketch of this idea follows below. This is called backpropagation, and it can be repeated as many times as necessary. How quickly it works depends on the amount of initial data the network has: if the data is vast, the whole Internet, for instance, the network will learn faster and more effectively.
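To make the "roll back and correct" idea concrete, here is a toy sketch with a single weight and a single training example. It is not the full backpropagation algorithm, only the kind of gradient-based correction step that backpropagation repeats for every weight in every preceding layer.

```python
# A toy sketch of repeated corrections: one weight, one training example.
# All numbers here are arbitrary choices made for the illustration.

target = 1.0        # the reference answer the network should produce
x = 0.5             # the input
w = 0.1             # an arbitrary starting weight
learning_rate = 0.5

for step in range(50):
    prediction = w * x                 # forward pass
    error = prediction - target       # how far off the result is
    gradient = error * x               # d( error**2 / 2 ) / dw
    w -= learning_rate * gradient      # the "mathematical correction"

print(round(w * x, 3))   # after enough corrections, close to the target 1.0
```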
So how far can this learning process go? Does it necessarily need a human to point out the mistakes? For example, if the network copies a painting by some renowned master, it can pick up brightness, contours, and colors to produce a similar, but still different, painting. How would the system understand that it got something wrong?
"When talking about neural networks "learning", it's not learning with a human teacher. The network learns on multitudes of input and output patterns. To teach a neural network to paint, one has to input photographs for each object in the picture and then the result it has to get. The photo here is the input, and the painting itself will be the result. Surely, there are no photos of Rembrandt's models. This means one has to input fragments — arms, legs, ears, etc. And that will be the role of a human teacher for such network", explains Professor Igor Bessmertny from the Department of Computation Technologies.
Why would Google use a neural network?
As of now, neural networks are applied everywhere. One can upload economic data into such a network and it can predict a crisis. Neural networks can be used for diagnosing diseases, searching for vulnerabilities in information networks, forecasting the weather, and almost anything else that deals with digitized data. Neural networks can already recognize text and pick out objects in photographs.
Recently, Google presented its new Google Neural Machine Translation system. According to its developers, the system reduced the number of mistakes in computer-aided translation by 55-80%. To test the service, the developers took Wikipedia articles in a particular language (English, for instance), ran them through the system to get a result in Spanish, and then compared it with the corresponding Wikipedia article. The same was done for news websites.
Formerly, Google Translate split a sentence into words and phrases and translated them separately, comparing them with translations of thousands of official documents from the UN, the EU, and the like. The new system perceives the sentence as a whole, first splitting it into "segments" that are then processed by the layers of the network; a toy illustration of such segmentation is sketched below. According to experts writing in the journal Nature, this model is closer to the one used for image recognition, where the network first identifies areas of similar brightness, then contours, then colors, and so on, until it arrives at a result.
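Purely as an illustration of what splitting a sentence into "segments" might look like, here is a toy sketch. The vocabulary and the greedy longest-match rule are invented for the example; the article does not describe Google's actual segmentation method.

```python
# A hypothetical illustration of cutting words into sub-word "segments"
# before they are handed to the network's layers. Toy vocabulary only.

vocabulary = {"trans", "lat", "ion", "net", "work", "s", "the", "new"}

def segment(word):
    """Greedily cut a word into the longest known pieces."""
    pieces, rest = [], word
    while rest:
        for end in range(len(rest), 0, -1):
            if rest[:end] in vocabulary or end == 1:
                pieces.append(rest[:end])
                rest = rest[end:]
                break
    return pieces

sentence = "the new translation networks"
print([segment(word) for word in sentence.lower().split()])
# [['the'], ['new'], ['trans', 'lat', 'ion'], ['net', 'work', 's']]
```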
The new Google translation system was tested on the hardest language pair, English-Chinese, and made 60% fewer mistakes than other systems. In the coming months, the system will start working with other language pairs as well.
The most essential problem in using a neural network for computer-aided translation is whether it can take the context into account. If a program learns to identify the topical area, legislation, for instance, it can restrict itself to the related vocabulary and thus avoid many mistakes; a toy illustration of this idea follows below.
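As a hypothetical sketch of that idea, the snippet below guesses a topical area from the surrounding words and uses it to choose between two senses of an ambiguous word. The word lists and the example word are invented for illustration only.

```python
# A purely illustrative sketch: guess the topical area from the surrounding
# words, then pick the sense of an ambiguous word that matches the area.

topic_words = {
    "legislation": {"court", "judge", "law", "appeal"},
    "linguistics": {"grammar", "verb", "translation", "phrase"},
}

senses_of_sentence = {
    "legislation": "a punishment decided by a court",
    "linguistics": "a grammatical unit of words",
}

def guess_topic(context):
    """Pick the topic whose keywords overlap most with the context."""
    words = set(context.lower().split())
    return max(topic_words, key=lambda t: len(words & topic_words[t]))

context = "the judge read the sentence aloud in court before the appeal"
topic = guess_topic(context)
print(topic)                      # legislation
print(senses_of_sentence[topic])  # the court-related sense
```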
"Teaching a network to define the context will be hard — that's no easy task even for a human. Actually, sometimes it’s impossible to find a correct translation even for a separate sentence. One can only translate a text as a whole — and only the author can do it right. No one can be sure what did Tolstoy mean by "mir" in the name of his masterpiece — "peace", as opposed to war, or "world" as the whole of the universe (in Russian, the word is the same for both -- Ed.) Probably, he meant both, so there's no direct translation to this.
"All in all, a neural network is like a Swiss army knife: multipurpose, but not particularly good at any single task. The task neural networks are worst at is semiotics, or symbolic computation. For a word to be processed, it has to be represented as a number or a vector, and similar words should get similar values. With a traditional approach to coding, "Cheburashka" (a cartoon character -- Ed.) ends up a relative of "chebureki" (a type of food) and "Cheboksari" (a city). A network can, of course, learn to handle such words using the backpropagation mentioned above, but changing the order of words or pluralizing them will still produce mistakes", comments Igor Bessmertny.
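To illustrate the point about word representations, the sketch below compares a crude spelling-based similarity with a similarity computed on meaning vectors. The tiny vectors are entirely made up for the example and do not come from any real model: by spelling, "Cheburashka" looks close to "chebureki", while the toy vectors keep them apart.

```python
# Spelling-based comparison vs. a (made-up) vector representation of meaning.

def char_overlap(a, b):
    """Crude 'traditional coding' similarity: count shared leading characters."""
    n = 0
    for x, y in zip(a.lower(), b.lower()):
        if x != y:
            break
        n += 1
    return n

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm

# By spelling, the cartoon character looks close to the food.
print(char_overlap("Cheburashka", "chebureki"))   # 6 shared characters

# Hypothetical meaning vectors: the character is far from the food and
# close to another fictional character instead.
vectors = {
    "Cheburashka": [0.9, 0.1, 0.0],
    "chebureki":   [0.0, 0.9, 0.1],
    "Pinocchio":   [0.8, 0.2, 0.1],
}
print(round(cosine(vectors["Cheburashka"], vectors["chebureki"]), 2))  # low
print(round(cosine(vectors["Cheburashka"], vectors["Pinocchio"]), 2))  # high
```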
Future
Neural networks are a type of artificial intelligence: they can already recognize and analyze, predict, find analogies, and detect problems. Still, it will take years for artificial intelligence systems to attain a level of perception close to a human's.
As of now, neural networks are just programs that make mistakes no human ever would. Last year, the Google Photos app labeled two African Americans in a photo as "gorillas". An outraged user posted a screenshot of the app on Twitter, and a scandal followed.
In any case, the capabilities and operation of neural networks will depend on the goals and needs of their end users. If the program is for entertainment, some mistakes are excusable; if it is for purposes like analyzing blood tests, they certainly are not.
So the question is: is there any point in creating a neural network that imitates a human brain completely? Or is it much more efficient to adapt simpler programs to particular tasks?
"I want to quote professor Preobrajenski from the "Heart of a Dog" (a novel by Bulgakov -- Ed.): "Why create an artificial human, if any woman can give birth to one in 9 months?". To create a silicon copy of a brain one would require thousands of servers and gigawatts of power, whereas a human brain weights about a kilo and has 25 W power consumption. AI is already here, and it is successful at solving particular tasks — parallel parking is challenging for humans, but is easy for any computer-aided car", concludes Igor Bessmertny.