What are the criteria for trusting scientific information?
If we're talking about information aimed at the general public, such as science news, one has to check whether there's a source reference, and then understand what that source is. Some outlets aim to make headlines, so even if the source article came from a trusted scientific journal, it can be misinterpreted: its significance may be exaggerated, for instance. Admittedly, it's not always easy to decide whether you can trust a particular outlet. Most people know of Cell or The Lancet, but it all gets harder with mid-tier journals. At a minimum, the source should be indexed by the leading scientific databases.
What is the first thing you should look at?
When we talk about scientific bibliographic databases, or indexes, we usually mean Scopus and Web of Science; in Russia that can also include Google Scholar and the Russian Science Citation Index (RSCI). When working with them, it's better if we understand how they work — and, consequently, how trustworthy they are. This is especially important for young scientists.
Andrei Loktev. Source: 5−100.spbstu.ru
Before sources are indexed by Scopus, they are evaluated by its experts, who take into account peer review, the authors and the editors. Peer review has been the most important instrument for assessing the quality of scientific articles since the 17th century, and it is still the best one, though many have tried to find an alternative. Peer review certifies research results: what we receive is not just the author's opinion, but a result checked by several independent experts. In all modern scientific journals this process is documented, and one can always check who wrote what, though that still doesn't exclude the possibility of mistakes.
Unfortunately, metrics based on databases are often trusted too much. Scientometric indexes depend on the size of the database, and can differ for the same document indexed by different databases. For example, the citation count of an article in Scopus can be higher than in Web of Science, because the first database indexes more sources. So, when we judge based on some index (overall citation count, Hirsch index, etc.), we have to take into account the source of the data, i.e. which database it came from. If a scientist's works are indexed by four separate databases, he can end up with four different Hirsch index values.
How is it calculated?
The Hirsch index can be calculated for any group of articles: for example, for the publications of a single author, a group of authors, or a whole university. Formally, if f is the function that maps each publication to its number of citations, we compute the h-index as follows. First we order the values of f from largest to smallest. Then we look for the last position at which f is greater than or equal to the position itself (we call this position h). In practice, we sort the articles in decreasing order of citation count, and as soon as an article's rank exceeds its number of citations, we step back one position: that rank is the Hirsch index. This is the reason for its popularity: it's easy to calculate, it's an integer, it never decreases, and it's quite stable.
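The procedure described above can be sketched in a few lines of Python. This is a minimal illustration, and the citation counts in the example are invented:

```python
def h_index(citations):
    """Compute the Hirsch index for a list of per-paper citation counts."""
    # Sort citation counts in decreasing order.
    ranked = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranking; h is the last position at which the paper
    # ranked there has at least that many citations.
    for position, cites in enumerate(ranked, start=1):
        if cites >= position:
            h = position
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
```

Note that the fifth paper, with 3 citations, sits at position 5 and so fails the test, which is why we stop at h = 4.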
But there are several things you should keep in mind. The Hirsch index can be the same for two scientists with very different productivity: if someone has a Hirsch index of ten, we don't know whether he has written 10 or 210 articles. It does not account for age and experience, so an experienced scientist with many publications will always have an advantage over younger ones. Nor does it show whether the author wrote a highly cited article alone or was one of its many co-authors. It is also affected by the traditions of a field: for instance, it will generally be lower for mathematicians or scholars in the humanities than for their colleagues in microbiology.
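The first limitation can be made concrete with a hypothetical pair of publication records (all numbers here are invented): two very different careers produce the same Hirsch index.

```python
def h_index(citations):
    """Hirsch index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return max([pos for pos, c in enumerate(ranked, 1) if c >= pos], default=0)

# Scientist A: 10 papers, each cited 10 times.
a = [10] * 10
# Scientist B: 210 papers, of which 10 are cited 50 times
# and the remaining 200 are cited 3 times each.
b = [50] * 10 + [3] * 200
print(h_index(a), h_index(b))  # 10 10
```

Both records give h = 10, even though B has written twenty times as many papers and collected far more citations in total, which is exactly why the index should not be read in isolation.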
Are there any mechanisms for aligning results?
There are metrics that normalize citation rates by field of science. In Scopus, we use the Field-Weighted Citation Impact, and the Source Normalized Impact per Paper metric for evaluating journals. Each citation index is normalized by the average result in the particular field of science, and thanks to that we can compare articles and journals from different fields.
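The idea of field normalization can be sketched as follows. This is only an illustration in the spirit of such metrics, not the actual FWCI formula (which also accounts for publication year and document type), and the field averages below are invented numbers:

```python
# Hypothetical expected (average) citation counts per article, by field.
field_expected = {"mathematics": 2.0, "microbiology": 20.0}

def normalized_impact(citations, field):
    """Citations relative to the field's expected value; 1.0 means field average."""
    return citations / field_expected[field]

# A maths paper with 4 citations stands further above its field's average
# than a microbiology paper with 25 citations does above its own.
print(normalized_impact(4, "mathematics"))    # 2.0
print(normalized_impact(25, "microbiology"))  # 1.25
```

Dividing by the field average is what makes the two results comparable: raw counts would suggest the microbiology paper is more influential, while the normalized values reverse that conclusion.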
Some say that there are fields of science where conferences and personal meetings are more important than research articles. Is it so?
In engineering and computer science, new results are often announced at conferences and then published as conference proceedings. These are indexed on a par with other scientific publications, so in a sense they're the same. In Scopus, we take such peculiarities into account and make a point of indexing the sources that are essential for a particular field of science.
What are the main trends in modern scientometrics?
One of the interesting developments in scientometrics is the introduction of metrics that are not directly related to citations. An article can be read, saved, or forwarded; that's not exactly a citation, but it also shows that someone has been interested in it. So now we also count how many times a publication was mentioned in the news or on Facebook, or saved in a Mendeley personal library. Of course, these indicators can easily be inflated, so we provide them for reference only. Still, they can give some insight into a publication's social significance. This matters all the more for articles in the humanities, where citation rates are usually lower.
There are new citation-based metrics as well, designed to better reflect what's happening in science, and in science journals in particular. For instance, a journal's classical citation indexes say nothing about the probability of an article in it being cited. In highly specialized journals, articles published three or four years ago typically have a 90-95% probability of being cited. Yet in Nature, which has a high citation rate, 25% of publications are not cited at all. And that tendency has persisted for many years.
Are there any peculiarities that have to do with publication activity in Russia?
Institutions that regulate science in Russia, including the Ministry of Education and Science, have set new requirements under which a certain share of research results must be published in journals indexed by international databases. Authors now choose such journals when planning to publish their work, and we can already see the results. In 2013, 45 thousand Russian publications were indexed by Scopus; in 2015 there were already more than 60 thousand. As of now, more than 400 Russian journals are indexed by Scopus. In only two years the improvement was spectacular, and Russia's leading universities and the participants of Project 5-100 contributed greatly to it. Most publications are in the classical scientific fields: chemistry, physics, materials science. As for medicine, the field with the most publications worldwide, we are still lagging behind.
Another peculiarity is that Russian authors have relatively few journals to choose from. In Europe, one can choose from about 22 thousand journals indexed by Scopus, whereas in Russia there are only about 1,500.
Anything else you'd like to mention?
I also want to note that you should rely not only on scientometric indexes but on expert opinions as well. To get a reasonably accurate picture, you have to rely on at least two indexes. When comparing the contributions of two scientists, you should use not just the Hirsch index but also the overall number of publications, the length of each one's research career, and the norms of their field. One shouldn't compare scientists based on the average impact factor of the journals they were published in. When making comparisons, use indexes from different databases and take self-citation into account as well. If everyone uses this approach, the credibility of scientometrics will increase as well.