AI specialists often encounter the claim, and some of them share it, that it is impossible to create AI until natural intelligence is understood. For instance, more than 40 years ago a famous scientist declared at a large neurophysiology conference: "Imagine that someone has left a computer on Mars, and the Martians start studying it with the same methods we use to study the human brain."

At the time this was only a thought experiment, but this year a group of physiologists together with computer scientists published a paper that put the idea into practice. Obviously, no Martians were involved. They took an Atari emulator running a particular game (Space Invaders, for instance) and applied modern neurophysiological methods to work out what program the processor was executing. Their conclusion was that even modern techniques do not help us understand how a simple processor works, let alone the brain.

Do we really want AI to copy natural intelligence?

Let's compare natural intelligence to photosynthesis and AI to a solar panel. Does the panel produce energy through photosynthesis? By definition, it does not. But we do not need it to: all that matters is converting light into another useful form of energy, and solar panels manage exactly that.

Alexey Potapov. Credit: ITMO University

This raises the question: in what sense should we understand intelligence? If we define it as a biological process built on particular chemical reactions, then by that definition a computer-based intelligence is impossible. But that is not the kind of intelligence we actually need.

When we take this biological approach, we inevitably simplify natural intelligence, and some of its processes and peculiarities are simply not relevant for AI. Humans, for instance, have limited short-term memory. Do we need to copy that? It seems we do not. Such simplification often leads us to build irrelevant models instead of working ones.

So what kind of AI do we need?

The most important thing is to understand what exactly we are trying to create. Consider AlphaGo, the program that beat Lee Sedol at Go, a game long considered extremely difficult for computers.

Lee Sedol. Credit: newyorker.com

Look at this picture: a human who feels what it is like to lose. Why do we say that AlphaGo is not a full-fledged AI? Because it neither enjoys its victory nor sympathizes with its opponent. Yet I think machines with feelings and consciousness are not what we are looking for.

There are already systems that beat people at complicated games such as Go, Jeopardy and table tennis, as well as systems that drive cars and do other things better than humans. What is their main disadvantage? Each of them can solve only one task: as soon as we need to solve another, we have to build a new system. Now imagine a computer that can find solutions to a wide variety of tasks. If we had one, we could say that AI had finally been invented.

How do we make a machine think?

This question has existed for as long as AI research itself. In the early days of the field, when heuristic programming was the most promising technology, it was already clear that this was not AI in the full sense of the word, because such programs could handle only a limited range of tasks.

ITMO University "Gutenberg's Lounge"

Researchers proposed creating a general technology for solving tasks of all types rather than separate solutions for each. However, the enthusiasm faded, and until recently it was unpopular even to admit that a thinking machine could be built. The situation has changed over the last ten years with the appearance of the idea of general AI, whose main goal is to create systems that can tackle tasks of all kinds, even if the individual solutions are imperfect.

There are now many tests that assess how general an AI system is, and various frameworks, suites of environments and so on are being developed for this purpose. Examples include General Game Playing contests, where a computer must play games it has never seen before, and automated machine learning systems that can "train themselves" to solve previously unknown tasks.
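To make the idea concrete, here is a minimal Python sketch of scoring one agent across several games it was not written for. The toy environment classes and the `evaluate` helper are invented for illustration only; they are not part of any real General Game Playing framework.

```python
import random

class GuessParityEnv:
    """Toy 'unseen game': the agent must output the parity of a number."""
    def reset(self):
        self.x = random.randint(0, 99)
        return self.x
    def step(self, action):
        return 1.0 if action == self.x % 2 else 0.0

class GuessSignEnv:
    """Another toy game with the same interface but a different rule."""
    def reset(self):
        self.x = random.randint(-50, 50)
        return self.x
    def step(self, action):
        return 1.0 if action == (0 if self.x < 0 else 1) else 0.0

def evaluate(agent, envs, episodes=100):
    """Score a single agent on several environments it was not built for."""
    scores = {}
    for env in envs:
        total = 0.0
        for _ in range(episodes):
            obs = env.reset()
            total += env.step(agent(obs))
        scores[type(env).__name__] = total / episodes
    return scores

# A trivial 'agent' that always answers 1: a baseline, clearly not general AI.
print(evaluate(lambda obs: 1, [GuessParityEnv(), GuessSignEnv()]))
```

The point of such benchmarks is exactly this setup: the agent is fixed, while the set of environments keeps changing.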

The main trends

Technologies such as neural networks, including the deep learning algorithms now used for many purposes, are converging with the idea of general AI. One example is the differentiable neural computer, a new approach to data processing in which a neural network learns to use an external memory that it can read from and write to.
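The core trick is that memory access is made "soft", so gradients can flow through it. Below is a toy NumPy illustration of content-based addressing, not DeepMind's actual differentiable neural computer; the `content_read` function and its parameters are hypothetical names chosen for this sketch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta=10.0):
    """Soft read: similarity of the key to every memory row is turned into
    attention weights, and the result is a weighted average of the rows.
    Every step is smooth, so gradients can pass through the memory access."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms          # cosine similarity per row
    weights = softmax(beta * similarity)       # sharpened attention weights
    return weights @ memory, weights

memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.5, 0.5, 0.0]])
value, weights = content_read(memory, key=np.array([0.9, 0.1, 0.0]))
print(weights.round(3), value.round(3))
```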

The field of meta-learning is also gaining ground. Unlike conventional deep neural networks, which require a fixed architecture and solve a limited range of tasks, meta-learning uses one neural network to teach another, so that the learning procedure itself is learned.
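The sketch below shows the two-level idea in the simplest possible form: an inner learner is trained on individual tasks, while an outer, meta-level loop chooses how that learner should learn. Here the "teacher" is reduced to picking a learning rate rather than being a neural network, so treat it only as an illustration of the principle.

```python
import random

def inner_train(task, lr, steps=20):
    """Base learner: fit w in y = w * x to one task by gradient descent."""
    w_true, data = task
    w = 0.0
    for _ in range(steps):
        x, y = random.choice(data)
        w -= lr * 2 * (w * x - y) * x   # gradient of the squared error
    return (w - w_true) ** 2            # task loss after training

def make_task():
    w_true = random.uniform(-2, 2)
    xs = [random.uniform(-1, 1) for _ in range(50)]
    return w_true, [(x, w_true * x) for x in xs]

# Meta level: choose the update rule (here, just its learning rate) that makes
# the base learner perform well on a distribution of tasks, not a single one.
tasks = [make_task() for _ in range(30)]
best = min([0.01, 0.05, 0.1, 0.3, 0.7],
           key=lambda lr: sum(inner_train(t, lr) for t in tasks))
print("meta-chosen learning rate:", best)
```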

On the applied side there is also probabilistic programming. In principle it lets one build a system able to solve any task: theoretically, a probabilistic program can describe any environment as a generative model. The remaining problem is to make inference over such models work efficiently.
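As a minimal illustration of what a probabilistic program looks like, here is a hand-rolled example that models coin flips with an unknown bias and recovers that bias by rejection sampling. It uses no particular probabilistic programming library, and the `model` function is purely illustrative.

```python
import random

def model():
    """A probabilistic 'program': first draw an unknown coin bias,
    then generate ten flips with that bias."""
    bias = random.random()
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

observed = [True] * 7 + [False] * 3   # data we pretend to have seen

# Inference by rejection sampling: run the program many times and keep only
# the executions whose simulated data matches the observation.
accepted = []
while len(accepted) < 2000:
    bias, flips = model()
    if sum(flips) == sum(observed):
        accepted.append(bias)

print("posterior mean of the coin bias:", sum(accepted) / len(accepted))
```

The same pattern, write the model as an ordinary program and let a generic inference engine answer questions about it, is what real probabilistic programming systems automate at scale.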

Looking at recent breakthroughs in this field, the latest trend is to combine solutions from different subfields. For instance, the deep learning library TensorFlow has a probabilistic programming extension that makes it possible to build a wide range of models; conversely, some probabilistic programming systems can compile a probabilistic program into a deep learning network.


AI: a legal entity or not?

There are various predictions about the future of AI built on deep learning. The main question is what counts as the peak of its evolution. If we take that to mean the creation of general AI, forecasts put the date somewhere between the 2020s and the 2050s; if we mean self-learning AI, the first achievements are expected already in the 2020s.

As for the legal status of robots, the European Union is already discussing the issue. It may sound far-fetched, but the following example shows why it matters. Every member of a company has defined responsibilities and benefits, which keeps business relations clear. What happens if an artificial agent invents a new method? Who holds the rights to it: the developer, the owner, or the AI itself as a legal entity? That is the question.

According to a European Parliament report on legal affairs, AI may surpass human abilities within a few decades. The report also says that research in robotics must be conducted in accordance with human rights and interests. There are plans to create a European agency for robotics and artificial intelligence to provide technical, ethical and regulatory expertise, and to establish rules of legal liability in robotics.