What is technophobia and what are its attributes?

Technophobia is an inner resistance that arises in us when we think and talk about a new technology. It can take different forms: a persistent prejudice or a negative attitude towards new technologies in general or towards some specific technology, or anxiety connected with the use of technology. As technology evolves, the object of fear changes. At one point in time, people feared steam engines; today many people are wary of self-driving cars and blockchains.

Who is most exposed to technophobia? What matters more here, a person's social status or their psychological characteristics?

There are obvious sociodemographic factors related to technophobia. Indeed, technophobia is more common among the elderly, among those without higher education, and among women.

The level of technophobia can also be related to personality traits: for example, to a person's self-confidence and openness to the new. Introverts are typically more inclined to embrace a new technology. People who like to plan their lives well in advance and those who feel a very high level of responsibility in their interactions with others are less likely to accept new technologies.

Da Vinci robot. Credit: www.techcult.ru

Cultural factors are also important here. How long are people in a particular society ready to wait and endure in order to see results from a new technology? Individualistic societies are less prone to technophobia: people in such societies are more ready to experiment, even when it contradicts common sense.

In addition, in more “masculine” societies people have higher expectations of new technologies, because masculinity promotes an orientation towards achievement and winning in competition. Such values are typical of the US and China, for instance. Research also points to the fear of new technologies being more characteristic of societies with high uncertainty avoidance. In such societies, people feel more comfortable when there are preset rules and instructions on how to act in a given situation. And in Russia, uncertainty avoidance is one of the highest in the world.

What causes technophobia?

People’s mindset and a society’s vision of the future play a major role here. In the 1950s and 1960s, at the forefront of technological breakthroughs, society hoped that the development of technology would solve social issues. People tended to see the advancement of science and technology as a reason for social optimism: they believed that humanity was on the verge of recovering from its wars and would break through into a better, brighter tomorrow. This optimism can be traced in the science fiction of that time, with its space operas and bright images of the future. Today the attitude towards technology is more complex: techno-optimism is combined with catastrophic social pessimism, and social science fiction is painted in fatalistic colors.

Technophobia is also associated with the fear of losing one’s autonomy and control over one’s actions. Several surveys point to this, including our own research and a study by HSE. With the support of the Institute of Psychology of the Russian Academy of Sciences, we conducted a survey among young people in Russia and Kazakhstan. It showed that young people are interested in electric vehicles and home 3D printers; they are ready to wear smart health sensors, use a personal assistant based on artificial intelligence and undergo genetic diagnostics, but they are against implantable sensors and microchips, even if those could expand human capacities. They were most negative about invasive technologies that expand people’s mental and physical capabilities, neurointerfaces and the genome editing of unborn children. In other words, people are scared of technologies deciding and doing something for them without them even being aware of it.

The attitude towards any new technology depends on whether it interferes with self-identification and morality. That is why biotechnology is perceived far more sensitively than nanotechnology. Even though only slightly more than 50% of Russians can say what the abbreviation GMO stands for, 80% of the population are against GMOs. At the same time, nanotechnologies, potentially no less harmful, do not scare us at all: more than 50% of Russians consider them useful, although only 40% understand what they really are. The main point here is that biotechnologies have already invaded our zone of self-identification: “Can we still consider ourselves human beings? Have we forfeited our cultural values and traditions?” Nanotechnologies and digital technologies have not yet reached this point.

So, basically, we are now going through a shock phase: we are starting to realize the scale of technological change, trying to grasp it and sort out our lives.

Yes, when estimating the consequences of introducing new technologies into our society, we always face difficulties. I mean social consequences, not technological ones. Take automation, for instance. Initially, it was thought that it would replace simple physical labor. In fact, it turned out that it "expels" educated people from the market: programmers, accountants, even journalists. This means that there are no methods for predicting how a new technology will change social interactions; purely technological forecasts, so to speak, manage well enough.

For example, Jules Verne described many technological breakthroughs of the 20th century in his novels. I often speak about this at conferences and show a photograph of a box of chocolates from the early 20th century. On its cover, children are studying in the school of the future. Curiously enough, this picture predicts the Internet: it depicts a machine into which books are loaded and which converts their content into an analog signal, so that the children can listen to audiobooks through headphones. Only one step separates this picture from the brain-computer interface. At the same time, the children sit rigidly, one behind another. We can conclude that the technical changes were predicted rather well, but the artists simply couldn't conceive that girls would study together with boys, or that children would learn while playing, through horizontal communication and interaction with each other.

The image in question. Credit: 12rockets.com

Here one can draw an analogy with the effect of self-prediction. When we try to predict our own behavior, we do it less accurately than other people who know us, because we focus on the motives and goals that drive us and do not take into account our previous behavior, which other people remember better than we do. It means that we need someone else to predict our behavior well. Similarly, when we start thinking about new technologies, we are not sensitive enough to the interests of other social groups, which could give us a clue about possible outcomes. For example, participants of the foresight sessions at the World Festival of Youth and Students in Sochi in October this year tended to perceive artificial intelligence as an instrument for realizing their own goals and their own image of the future. What lay outside the scope of their attention was the fact that other people with other values will interact with artificial intelligence, and all kinds of social forces may take advantage of it.

Why is this happening? Why is it so difficult for us to listen to others?

We live in a risk society, where people are more concerned with protecting themselves from others than with collective achievements. In such a society, it is very difficult to develop a culture of dialogue and agree on joint decisions, but it is very easy to rally some people against others, against "strangers", on the basis of collective fears. When we articulate the consequences of an action or decision, we aim at avoiding a threat. This, in turn, triggers protective mechanisms and reinforces adherence to previous decisions, so our horizons inevitably shrink. Influenced by our own anxiety, we are less inclined to take alternative points of view into account and focus instead on what the opinion leaders of our own group are saying. And this keeps happening until a person realizes that some new technology can be the solution to a social or personal problem, until they see who can support them and who negotiates the rules.

Timofey Nestik

Can we say that a kind of socio-psychological polarization is gradually happening in our society in connection with the technological future? Some people are interested in the advance of science and embrace new technologies, while others stay ignorant and simply use them, or even fear them.

Awareness of new technologies does not in itself create polarization. For example, research shows that the “digital gap” between different generations is gradually narrowing. The real gap is different: some use new technologies for self-development, while others use them to switch off their minds. And the latter are in the majority.

Here lies a deeper problem: the gap between the rapid development of technology and people's ability to agree on its use and, fundamentally, to trust other people. Technologies are becoming more opaque and require users to be increasingly ready to rely on other people's expertise, advice and tips, and, to some extent, on the state. According to the surveys of the Edelman Trust Barometer, which measures the level of trust people have in different institutions, 75% of the residents of 28 countries trust new technologies. But in 2017, for the first time, the survey showed that on average less than 45% of people trusted their governments, the media, business and social institutions.

According to research we are carrying out together with Galina Soldatova, the head of the Internet Development Fund, modern youth tend to exaggerate their digital competence. Paradoxically, having mastered new digital tools, people are not always ready to keep evolving: it is easier for them to surround themselves with familiar programs and devices. So the gap between people who have problems using new technologies and those who could help them is only growing.

Do you know how technophobes differ from technophiles? They are less willing to discuss with other people the problems they encounter when using new technologies, not to mention issues like trolling or the protection of personal data. To accept new technologies, one needs to communicate with other users, solve problems together, exchange experience and agree on the terms of use. That is why the more digital technologies develop, the more in demand social skills and emotional intelligence become.

Is there any way to bridge this gap in society and prevent it from growing?

We must overcome the techno-humanitarian imbalance. Today, when introducing new technologies, we focus mostly on their technical components; we also need to develop communication strategies for users. These are the key that will let society agree on the rules and embed new technologies in our cities. This follows from the theory of "domestication" of new technologies, even dangerous ones. For example, when the car was invented, it was a dangerous vehicle, but thanks to negotiations between manufacturers and interest groups, that is, pedestrians, drivers and the state, we came up with driving regulations. Here is another good example: the first bicycles were designed for men, but the interest expressed by women and teenagers led to a change in bicycle design. The same should happen today: we need to create as many communication platforms as possible, where we can discuss the rules of the game in the markets of neurointerfaces, smart materials, biotechnologies, blockchain and so on.

And how can an individual overcome technophobia now? What should developers do to make people more comfortable with new technologies?

Firstly, we need to start presenting new technologies not as gimmicks or sources of national pride, as is being done now, but as tools that can help people solve their specific problems and improve their lives.

Our research indicates that it is important for users to understand what the benefits of a technology are, how prestigious or legitimate it is, whether it will be accepted in their social circle, how easy it is to control, and whether its developers and sellers can be trusted.

Secondly, risky technologies must have foolproof safety systems, so that users can be confident that they will not harm themselves or others while using them. For example, some doctors are afraid of using surgical robots not because they doubt the devices themselves, but because they might accidentally press the wrong button, rotate the joystick in the wrong direction, and so on. Research conducted by my colleague Alexander Oboznov at the Institute of Psychology (RAS) shows that confidence in technology includes the operator's confidence in their own skills.

Credit: goodpractica.tilda.ws

Thirdly, the interface of any technology should give its users a certain degree of choice and adapt to them. Ultimately, the interface of a "smart" device needs to adjust to the level of experience and the psychological characteristics of its user. For some users, it is more important to experience positive emotions, whereas others need to be sure that it will be easy to get expert help in case something goes wrong. An interface can also encourage us to be more attentive to the consequences of using a given program or device.

So, we basically need to learn to identify the “Force” and the “Dark Side” of technology...

Yes, you are right! Any technology entails both risks and opportunities. Take messengers and social networks, for instance. On the one hand, they keep a person in a state of constant stress: decisions must be made on the spot, with no time to think them through. On the other hand, these same technologies expand our ability to connect with a large number of people.

Now it is important for people to bring new opportunities into their lives and make conscious choices while doing so. Let me give you an example: a geek who constantly relies on AI assistants, sensors and trackers might think that he has a keen understanding of new technologies, but in fact he is merely following the prompts of his smartphone, using new technological opportunities mindlessly.

It turns out that technological development only sharpens the contrast between those who are ready to critically analyze what is happening, use technologies to maintain a dialogue with others and with themselves, reflect on this experience and create something new, and those who are just drifting with the current. The solution to this conundrum will come from the humanitarian side: how can we make sure that, when interacting with a new device, users do not get “locked in” to an automated mode, but instead reflect, evaluate different risks, and thus benefit from the technology? This is what happened with sound recording and cinema: at first they were playthings, then they became tools for solving social problems, and finally they gained the status of an art form and now help us understand each other better.

Does this imply that it will take experts of a new generation to organize this social dialogue and come up with new social, legal and economic norms?

Naturally, as we will have to solve problems that are not yet obvious to us. Soon, for example, courts will see the first cases concerning the hidden psychological influence that companies exert on consumer behavior. The psychological profiles needed for such influence can already be built on the basis of Big Data analysis and the "digital footprints" that we leave everywhere when using “smart” devices. So it's time to negotiate. And the issue here is not who gets access to these data, but rather how to set the rules for using them.

The advancement of technology is associated with real risks. Many of them remain under the radar of the general public, but that in no way negates their possible consequences. For the time being, we are not yet seriously discussing the risks of using artificial intelligence in the military field, of geoengineering technologies, of new types of biological weapons, or of the ability to track people by their “digital footprints” in the Internet of Things. So far, technophobia and the unresolved issues surrounding the introduction of new technologies have not been seriously used to manipulate political situations or mobilize public opinion.

The fate of many technologies over the next 10 to 15 years depends on whether we can overcome social pessimism and the lack of trust in social institutions. This means that we will increasingly need humanitarian and social technologies that help detect technological risks in a timely manner and help us agree on the rules of life in the digital economy. At the Institute of Psychology of the Russian Academy of Sciences, we are currently developing a methodology for forecasting the social consequences of new technologies, and we will be glad to cooperate with our colleagues at ITMO University.

Translated by Eugenia Romanova