Within two years, Daniil managed to take part in several science, tech, and engineering competitions, travel to the US to attend the largest Intel and Google competitions – winning some of the top places there – move from his hometown of Yekaterinburg to St. Petersburg, and enroll in ITMO’s Information Technologies and Programming Faculty through the ITMO.STARS contest.
ITMO.NEWS met with the talented first-year student to learn more about his device, his attitude towards people with disabilities, and why he plans to stay in Russia rather than continue his project abroad.
About the project
The idea for this device occurred to me after I learned about an incident covered in the news: social workers took the children away from their hearing-impaired mother because they thought she wouldn't be able to take good care of them. This story really got to me, and I decided that I had to do something to make sure such situations would not happen again.
People use various means of communication (verbal and nonverbal, gestural and oral), and I wanted to help them fully understand each other without having to change their usual ways of expressing their thoughts. Sign language is currently one of the most common forms of nonverbal communication, and many deaf people use it to communicate with each other. All this prompted me to develop a system that would translate sign language into spoken language.
The first prototype was a system consisting of IR cameras, sensors, and software that identified hand gestures from images. At first, the camera had to be placed on the person’s chest, which was extremely inconvenient. So I started searching for more effective methods and came across electromyography, which is actively used in modern bionic prostheses.
Electromyography is a method of studying muscle activity based on recording the electrical impulses that muscles produce. A sensor can be placed on an arm or any other limb to receive real-time data on muscle activity. The system takes measurements along the forearm – which is where the muscles responsible for the movements of the hands and fingers are located – sends the data to a remote server, and then uses artificial intelligence to detect patterns.
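For illustration, here is a minimal sketch of the kind of pre-processing an EMG-based gesture recognizer typically performs before classification. The sampling rate, electrode count, window size, and feature set below are assumptions made for the example, not details of Daniil's actual device.

```python
# A hypothetical sketch of EMG pre-processing: split the raw multi-channel
# signal into short windows and compute simple time-domain features per window.
# All parameters here are assumed for illustration.
import numpy as np

SAMPLE_RATE_HZ = 1000      # assumed sampling rate of the forearm sensors
WINDOW_MS = 200            # assumed analysis window length
N_CHANNELS = 8             # assumed number of electrodes per forearm

def window_signal(emg: np.ndarray, window_ms: int = WINDOW_MS) -> np.ndarray:
    """Split a (samples, channels) EMG recording into fixed-size windows."""
    win = int(SAMPLE_RATE_HZ * window_ms / 1000)
    n_windows = emg.shape[0] // win
    return emg[: n_windows * win].reshape(n_windows, win, emg.shape[1])

def extract_features(windows: np.ndarray) -> np.ndarray:
    """Classic time-domain EMG features per window and channel:
    mean absolute value (MAV) and root mean square (RMS)."""
    mav = np.mean(np.abs(windows), axis=1)
    rms = np.sqrt(np.mean(windows ** 2, axis=1))
    return np.concatenate([mav, rms], axis=1)   # shape: (n_windows, 2 * channels)

if __name__ == "__main__":
    # Synthetic stand-in for one second of raw EMG from one forearm.
    raw = np.random.randn(SAMPLE_RATE_HZ, N_CHANNELS)
    feats = extract_features(window_signal(raw))
    print(feats.shape)   # (5, 16) with the assumed parameters
```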
The design went through many changes and experiments to reduce the device’s size and improve its usability. Obviously, no one would use a bulky gadget half the size of their arm in real life. My first priority was to make the device more convenient and suitable for everyday use without sacrificing any of its functions.
Now, it looks like a pair of elbow pads. These pads track the electrical activity of the muscles of both arms; the system then analyzes the signals, determines which gesture the user’s hands are making at the moment, and translates these gestures into words. After that, a neural network puts the words together into a complete sentence, displays it on a smartphone screen, and finally reads it aloud.
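The gesture-to-speech flow described here can be pictured with a simplified sketch: per-window classifier outputs are mapped to words, the word stream is assembled into a sentence, and the result is displayed and voiced. The vocabulary, class labels, and smartphone/TTS hookup below are invented placeholders rather than the project's actual components.

```python
# A simplified, hypothetical illustration of the pipeline described above.
# The label set and the sentence-building step are placeholders.
from typing import Iterable, List

GESTURE_LABELS = {0: "hello", 1: "how", 2: "are", 3: "you"}   # assumed vocabulary

def gestures_to_words(predicted_ids: Iterable[int]) -> List[str]:
    """Collapse consecutive duplicate predictions and map class ids to words."""
    words, last = [], None
    for gid in predicted_ids:
        if gid != last:
            words.append(GESTURE_LABELS.get(gid, "<unknown>"))
            last = gid
    return words

def words_to_sentence(words: List[str]) -> str:
    """Stand-in for the language model that turns a word stream into a sentence."""
    return " ".join(words).capitalize() + "."

if __name__ == "__main__":
    window_predictions = [0, 0, 1, 2, 2, 3]          # e.g. classifier output per window
    sentence = words_to_sentence(gestures_to_words(window_predictions))
    print(sentence)   # "Hello how are you." -> shown on the phone screen
    # A real system would then pass `sentence` to a text-to-speech engine.
```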
Presentation in America
I presented this project at two Intel ISEF conferences. The first time was when I was in ninth grade, and I went there with my first camera-based prototype. One of the judges turned out to be the design engineer who created the set of cameras and sensors I used. He had also participated in this conference when he was in school, which is why he strongly encourages the use of his inventions in such projects. The second time, I submitted a brand-new device based on the body’s biopotentials and won second place for it. It was very exciting to tell everyone about my project all day long, and I also had the chance to hear people’s opinions on my device. These conferences are large-scale: in my category (Systems Software) alone, there were about a hundred projects, and the total number of participants reached 1,500 people.
Last year, I went to America not once but twice: in the spring it was the Intel ISEF in Phoenix, Arizona, and in the summer, the Google Science Fair in San Francisco. Before the Intel ISEF, I took part in the Baltic Science and Engineering Competition, where, by the way, Anatoly Shalyto, a professor at ITMO University, noticed me. Thanks to him, I study here now. As a result of that competition, I was selected for ISEF for the second time.
The Google Science Fair had a different selection system: the first stages were held online, and only the 15 best projects were selected for the final round, with no division into categories. The level of organization at both competitions was incredible: the organizers helped me get a visa, bought my tickets, and arranged my accommodation. Any issue that arose there was resolved within seconds. And I could not believe my own eyes when I saw living legends such as Vint Cerf, one of the creators of the TCP/IP protocol, who was among the judges.
Work in Russia
At the Google Science Fair, I was among the top five finalists and received the Lego Education Builder Award. But frankly, I have never thought about studying abroad. I just don't see the point – my project is here and it is still in progress. I believe that as soon as an idea appears, you should immediately try to implement it in the conditions you have. We are all afraid of difficulties, but we need to learn how to overcome them. It's much more fun than waiting for a better time and a better place.
The fact that the environment in Russia is less accommodating and accepting of people with disabilities, and that there are fewer opportunities for their rehabilitation, is precisely the reason to launch such projects here. It is important to solve the problems we have now rather than work in comfortable conditions where many of those problems have already been solved.
I work closely with the Russian Deaf Society. I presented my project to them, and they were interested in using such technologies. It is important for me to get feedback about what people need and how to make the device more convenient for them. For example, this is how the reverse translation function (voice to text) appeared in the project. This makes communication complete: a person can not only say something in sign language and be understood, but can also instantly receive an answer. This is not a new technology, but no one had tried to implement it in a way that is convenient for deaf people.
Without such recommendations and feedback from future users, this project simply would not make any sense. The technology should serve users, not the other way around.
Coronavirus in sign language
Russian Sign Language has something like a regularly updated vocabulary since, like any other language, it is constantly changing and adapting to new realities. For example, the dictionary already has a gesture to denote coronavirus.
The idea that sign language is international is actually a common misconception: different countries have different sign languages. In the early stages, my translator was based on American Sign Language (ASL), as I presented it at an international conference. Now, we are working more with Russian Sign Language. In the future, of course, we plan to add more languages.
Each language requires its own dictionary and set of gestures, and right now that is one of the most challenging issues for me. I collect data from native signers: they sign words the way they “pronounce” them in their daily life; I then load this data into a neural network and get a classifier – a common feature that allows these gestures to be recognized. I record a lot of data and label it all. On average, one word requires two to three hundred samples from different people.
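As a rough illustration of this collect-label-train loop, here is a hedged sketch that builds a labeled dataset with a few hundred samples per word and fits a small classifier. The vocabulary, feature dimension, synthetic data, and scikit-learn model choice are all assumptions for the example, not the project's actual setup.

```python
# A hypothetical sketch of the labeling and training step: a few hundred
# feature vectors per word, split into train/test sets and fed to a small
# neural-network classifier. Real data would come from native signers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

VOCABULARY = ["hello", "thanks", "coronavirus"]   # assumed example words
SAMPLES_PER_WORD = 250                            # "two to three hundred samples"
FEATURE_DIM = 16                                  # e.g. MAV + RMS over 8 channels

def collect_labeled_samples() -> tuple[np.ndarray, np.ndarray]:
    """Placeholder for real recordings: generate one synthetic feature
    cluster per word so the example runs end to end."""
    X, y = [], []
    rng = np.random.default_rng(0)
    for label, _word in enumerate(VOCABULARY):
        center = rng.normal(size=FEATURE_DIM) * 5
        X.append(center + rng.normal(size=(SAMPLES_PER_WORD, FEATURE_DIM)))
        y.append(np.full(SAMPLES_PER_WORD, label))
    return np.vstack(X), np.concatenate(y)

if __name__ == "__main__":
    X, y = collect_labeled_samples()
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```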
Study prospects
I have the team and the investment needed to successfully bring this project to completion and refine its functions; my goal is serial production. Thanks to this project, I have also received a job offer.
I don't want to give up on my studies because my initial goal was to fill in the gaps in my knowledge so I could develop my project further. I very often have to google something to solve problems related to development, and I want a foundation of fundamental knowledge that I can draw on in my work. Mathematics, for example, is a very difficult science to learn on your own.
I have no problem combining work and studies because, at the moment, we are studying online and I can watch recordings of lectures at a higher playback speed. The very structure of the courses at ITMO also helps: oftentimes, the deadlines are such that you can complete the tasks quickly and then have a lot of free time. So for now, I manage to combine the two, and we'll see what happens next. But, of course, there is not enough time left for other projects and hobbies.
Attitude towards people with disabilities
I have never treated people with disabilities any differently. For me, they are just ordinary people. The fact that someone uses other means of communicating with the world does not in any way define them as a person. But the world creates difficulties for them because few others use their method of communication. And I would like to make sure that everyone is on the same wavelength.
Perhaps in the future, I will be able to come up with something for other groups, too. I would like to continue working not only in the social sphere but also on improving the quality of life for people in general. I think that ensuring real equality of physical opportunities for all people will only be our first step. Someday, the deaf will hear and the blind will see, and I believe that this time will definitely come. Then, we can start thinking about how to improve the lives of everyone else.