The research and advisory firm Gartner predicts that artificial intelligence will become a top priority for many companies by 2020. These days, the technology is widely employed by large companies and startups alike, while neural network-based apps are all the rage in the App Store. In late July, the photo editing app Teleport became yet another breakout success. The service processes users’ images, changing their background or hair color and seemingly “teleporting” them somewhere else. According to project co-founder Victor Koch, the technology is based on a neural network that separates the background from the person in the image. To teach the software this skill, the developers had to separate the background from the subject in 30,000 images. Teleport has already received investments totaling 1 million USD, and the app has been downloaded more than 1.5 million times.

A number of photo editing apps have seen similar success before. In the summer of 2016, the Prisma app went viral: according to App Annie analytics, in nine days it became one of the most popular downloads in Russia, Belarus, Estonia, Moldova, Kyrgyzstan, Uzbekistan, Kazakhstan, Latvia, Armenia and Ukraine. Prisma’s novelty was that instead of applying filters to an image, it used a self-learning server-side neural network to analyze and restructure it.

This year, the FaceApp selfie-editing service also garnered public attention. According to App Annie, it held the number-one spot in Apple’s App Store in 12 countries (particularly in CIS states) and entered the top 10 in 29 countries (primarily in Europe).


What caused the neural network fad? Before 2010, it was not widely understood just how vital the size and depth of a neural network are to its effectiveness, and experiments with deep networks were not feasible given the limited RAM and poor performance of the computers of the time, says David Yang, founder of ABBYY. Today, the circumstances have changed: as computational capacity has grown, the technology has spread into all areas of activity, including mass entertainment, where it has gained millions of users in a short time. All the media noise around neural networks is for the best, believes Alexander Krainov, head of Yandex’s computer vision and artificial intelligence office: the more people talk about the technology, the more likely they are to invest in research and startups.

As computer vision technologies develop further, more neural network-based services will appear, says Nikolai Davydov, co-founder of the Gagarin Capital investment firm, an investor in the Prisma project. He also believes that building neural network-based apps will become more accessible to developers.

Today, neural networks have already shown great results in banking, finance, retail and even the fossil fuel industry (for example, the Ufa-based startup Nest Lab uses machine learning to optimize oil drilling). Neural networks are also useful to scientists. Here is an overview of how this technology has found use in biology, medicine, ecology, robotics and astronomy:

Medicine and biocomputers

Researchers at Microsoft believe that cancer is similar to a computer virus and can thus be fought by cracking its code. The scientists are using artificial intelligence to try to destroy cancer cells. One of their projects involves machine learning and natural language processing, which the scientists use to sift through the volume of previously collected data when planning a patient’s treatment. IBM is working on a similar project: using its Watson Oncology software, experts analyze patient health records against research data.

Another Microsoft project uses computer vision in radiation therapy to track tumor growth. The corporation also plans to find ways to program the immune system’s cells much like computer code. According to Jeannette M. Wing, VP of Microsoft Research, the computers of the future will be made not only of silicon but of living matter, too, which is why the company is pursuing projects in bioprogramming.

Heart diagnostics

Neural networks are already helping doctors diagnose skin cancer, breast cancer and eye disorders. Now it’s cardiology’s turn. Last month, a Stanford research team headed by Andrew Ng, a well-known artificial intelligence expert, developed a system that can diagnose arrhythmia by analyzing heart rhythm data. Once completed, the system will relieve doctors of the need to analyze medical data themselves, letting them focus on overseeing the system and the patient’s treatment.

The Stanford team spent a considerable amount of time training the neural network. The learning process involved real patient data provided by the researchers’ partner company iRhythm, a manufacturer of wearable heartbeat monitors. The network analyzed 30,000 heartbeat recordings, each 30 seconds long, to learn the specifics of different forms of arrhythmia. Another 300 recordings, analyzed simultaneously by the system and a team of medical experts, were used to hone the algorithm’s precision.

Other groups are also using machine learning to diagnose arrhythmia; a research team headed by Eric Horvitz, managing director of Microsoft Research, for instance, is developing a similar system.


Astronomy

The resolving power of a telescope is limited by the size of its lenses and mirrors. By using artificial neural networks, a Swiss research team has overcome that limitation and provided scientists with images of unprecedented quality. In a study published in Monthly Notices of the Royal Astronomical Society, a research team led by Kevin Schawinski from ETH Zurich used machine learning to create an algorithm that recognizes the shapes of galaxies in telescope images and reconstructs grainy or blurred ones. The neural network managed to identify and reconstruct details that cannot be discerned through a telescope, such as star-forming regions, bars and dust lanes.
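The core trick in reconstruction work of this kind is to degrade clean images artificially and train a model to invert the degradation. The published study trained a generative adversarial network on such pairs; the sketch below keeps only the train-on-degraded-pairs idea, with a single learned gain standing in for the network (every name here is hypothetical).

```python
def degrade(image, gain=0.5):
    """Simulate telescope signal loss by dimming every pixel (the actual
    study degraded images with blur and noise)."""
    return [p * gain for p in image]

def fit_restorer(pairs):
    """Least-squares fit of one corrective gain from (degraded, sharp)
    training pairs; the published work fit a full generative adversarial
    network to such pairs instead."""
    num = sum(d * s for degraded, sharp in pairs
              for d, s in zip(degraded, sharp))
    den = sum(d * d for degraded, _ in pairs for d in degraded)
    return num / den

def restore(image, k):
    """Apply the learned correction to a new degraded image."""
    return [p * k for p in image]
```

Because the degradation is applied to images whose sharp versions are known, the model can later be pointed at genuinely blurry telescope data it has never seen.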

Artist's impression of the Gaia telescope.

On June 26, 2017, the results of another study were presented at the European Week of Astronomy and Space Science in Prague. The study used data gathered by the European Space Agency’s Gaia (Global Astrometric Interferometer for Astrophysics) space telescope. Launched in 2013, Gaia’s mission is to measure the positions and distances of stars in order to create a precise 3D catalog of our galaxy. It will survey approximately a billion objects – just 1% of the stars in the Milky Way. Another of its goals is the search for exoplanets. Using its data, astronomers have so far identified 80 stars that appear to be moving away from the center of the galaxy toward its edge at high speeds.

Hypervelocity stars are difficult to find: previously, scientists had managed to discover only 20 such stars, with masses 2.5 to 4 times that of the Sun. Gaia, with its unique ability to take a census of a billion stars, has allowed them to expand the list. The enormous amount of data was processed using an artificial neural network. As Tommaso Marchetti, a postgraduate at Leiden University, explains, after training, the network learned to identify hypervelocity stars in a mock catalog similar to the one being compiled by Gaia. Marchetti is the author of the article on the discovery, also published in Monthly Notices of the Royal Astronomical Society.

The scientists applied the new algorithm to two million stars, and in just an hour the network narrowed them down to 20,000 potential hypervelocity stars. Further selection reduced that number to 80 objects. In six cases, the scientists managed to trace the objects’ paths back to the galactic center. A group of 20 new, less massive stars was identified, too; they have masses similar to the Sun’s, and one has already reached a speed of 500 km/s, meaning it is no longer bound by the galaxy’s gravitational field and may soon escape it.
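The funnel described above, a cheap first pass over the full catalog followed by stricter cuts, can be sketched as follows. In the study the first pass was the trained network’s score computed from astrometric features; here a simple speed threshold stands in for it, and the field names are invented for illustration, not Gaia’s schema.

```python
def speed(star):
    """Total speed in km/s from the star's velocity components
    (the "velocity" field name is illustrative only)."""
    vx, vy, vz = star["velocity"]
    return (vx * vx + vy * vy + vz * vz) ** 0.5

def select_candidates(stars, threshold=400.0):
    """Cheap first pass over the full catalog: keep fast movers for
    closer study (the study used a trained network's score here)."""
    return [s for s in stars if speed(s) > threshold]

def unbound(star, escape_speed=500.0):
    """Stricter second pass: above the local escape speed, a star is
    no longer gravitationally bound to the galaxy."""
    return speed(star) > escape_speed
```

The point of the two stages is efficiency: the cheap score prunes millions of stars down to a shortlist small enough for careful, expensive follow-up.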

A new batch of data from Gaia will arrive in April 2018; until then, the scientists will work on further improving the algorithms.

Teaching robots

Last year, researchers at X, Google’s research-and-development facility, developed and tested a system that accelerates learning in robots performing identical tasks through collective learning. The experiments showed that deep learning methods help robots master complex tasks, including those that require fine motor skills. If a simulation or a prepared dataset is available, the learning process is relatively short.

In the experiment, the robots had to learn to execute the same task: opening a door on their own. Each robot was connected over a network to a central server that ran the learning process and stored the current version of the neural network. Each robot also held a local copy of the network, which it used to work out how to operate the door handle.

During the first test, the doors were placed in varied positions and each robot worked with its own door. The network copy in each robot generated sequences of actions; the engineers deliberately injected noise into this process to expand the pool of possible choices. Each robot then attempted to open its door.

Information about the networks’ chosen actions, the resulting movements and their outcomes was then sent back to the server and used to improve the main network. A copy of the slightly improved network was then sent back to the robots, and the cycle repeated.
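The loop described above, where robots explore noisy variants of a shared policy and a server folds their experience back into it, can be sketched like this. The real system trained a deep network with gradient updates on real sensor data; this stand-in uses a toy reward and simply keeps the best noisy attempt each round, and every name in it is illustrative.

```python
import random

def attempt_reward(params, target):
    """Stand-in for a physical trial: reward grows as the policy
    parameters approach the (unknown) ones that open the door."""
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def collective_training(n_robots, rounds, target, seed=0):
    rng = random.Random(seed)
    server_params = [0.0] * len(target)  # shared network held by the server
    for _ in range(rounds):
        trials = []
        for _ in range(n_robots):
            # each robot tries a noisy variant of the current shared policy
            local = [p + rng.gauss(0, 0.5) for p in server_params]
            trials.append((attempt_reward(local, target), local))
        # the server folds the best reported attempt back into the policy
        best_reward, best_params = max(trials)
        if best_reward > attempt_reward(server_params, target):
            server_params = best_params
    return server_params
```

The benefit of adding robots is visible even in this toy version: more robots per round means more noisy attempts pooled at the server, so good variants of the policy are found sooner.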

Testing showed that even two robots learn much more effectively than one. In two and a half hours, two robots achieved a 100% success rate. In the same amount of time, a robot working on its own had only learned to move its manipulator toward the handle; after four hours, it managed to open the door in only 20% of its attempts.

Animal preservation

A neural network has helped scientists from Murdoch University (Australia) keep track of dugong populations. Dugongs, also known as sea cows, are classified as a “vulnerable species” on the International Union for Conservation of Nature’s Red List, which is why scientists are working hard to monitor their numbers. Sea cows are found in the waters north and west of the Australian coast, where drones help detect them; scientists use the aerial shots to determine their numbers and habitat areas. Until very recently, they had to sort through thousands of aerial photos manually. Now the task is done faster and more efficiently with neural networks.

Dugongs.

Artificial intelligence expert Frederic Maire from the Queensland University of Technology used the TensorFlow library to train a neural network to find sea cows in aerial images. The system scans tens of thousands of photos and identifies the animals. Specialists plan to apply the technology to other tasks in the near future; one proposal is to have the system detect sharks near the Australian coast and warn lifeguards of their approach.
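A common way to turn such a classifier into a detector over large aerial photos is to slide a window across each image and record where the classifier fires. Here is a minimal sketch of that scan, with a stand-in detector function in place of the trained TensorFlow model (all names are illustrative).

```python
def scan_image(image, looks_like_dugong, window=3):
    """Slide a window over an aerial image (a 2D grid of pixel values)
    and record the top-left corners of patches the detector flags.
    `looks_like_dugong` stands in for the trained classifier."""
    hits = []
    rows, cols = len(image), len(image[0])
    for r in range(rows - window + 1):
        for c in range(cols - window + 1):
            patch = [row[c:c + window] for row in image[r:r + window]]
            if looks_like_dugong(patch):
                hits.append((r, c))
    return hits
```

In practice overlapping hits around one animal would be merged (e.g. by non-maximum suppression) before counting, so that each dugong is tallied once.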
