Introduction

In the context of the innovative development of all spheres of human activity, an increasingly important role is played by the integration (social cooperation) of science, education and business, as well as by the social and economic partnership of enterprises and scientific and/or educational institutions (Bayburtyan 2014; Pshunetlev 2014; Balykhin and Generalova 2015; Christiansen 2000; Jonash and Sommerlatte 1994). This requires effective research activity, not only by individual scientific workers, but primarily by scientific teams. The volume and complexity of modern research tasks have reached such a level that they can be solved successfully only through the coordinated activity of scientific teams and their interaction with other teams and with the scientific community in general (Kravchenko and Salygin 2015; Kincharova and Sokolov 2015; Kovshov and Kovshova 2015; Guseltseva 2014). Cooperation (interaction, collaboration) between scientific teams can also be observed; it is driven by the integration of scientific knowledge and by the necessity of conducting research at the boundaries of various sciences. It is becoming increasingly evident that scientific results significant for the innovative development of society (of the spheres of human activity) can be obtained only on the basis of the systematic activity of scientific teams, not of individual workers (Balykhin and Generalova 2015; Kincharova and Sokolov 2015; Lazarev and Eliseeva 2015). Carrying out financed research projects is becoming more and more widespread. The analysis of the scientific literature has shown that the significance of the research activity performed by a scientific team (scientific organization) can be considered one of the main indicators of its social activity, i.e. of its socially oriented productive potential and of the expansion of the given social system into its environment, i.e. the social megaenvironment (Loyko et al. 2015a; Kincharova and Sokolov 2015; Gavrilova et al. 2015; Yurkina 2014; Yasvin 2001).

For modern specialists it is clear that the task of stimulating scientific workers and teams toward systematic and productive research activity is solved on the basis of scientometric indicators (Loyko et al. 2015a; Lutsenko 2015; Tsyganov 2013; Hirsch 2005). The necessary conditions for an adequate evaluation of the research activity of scientific workers and teams have been created by the active development of international and national scientometric databases (systems) (Egghe 2008; Efendiev et al. 2015). This has become possible because the methods of processing the primary monitoring information about research activity results can be formalized and implemented by computer (Lebedeva 2015; Holland 1994; Koza 1992; Dudina 2015; Popova et al. 2015b). However, methods for evaluating the significance that the research results achieved by scientific teams have for scientific communities have not yet been properly elaborated.

The problem field of this research is raising the quality and productivity (efficiency) of the research activity performed by scientific teams. The research problem is the question of how the true significance that the research activity of a scientific team has for the scientific community can be evaluated objectively and from multiple perspectives. The research objective is the elaboration of new criteria for evaluating the research activity of a scientific team. The object of research is the research activity of a scientific team, and the subject of research is the significance of that activity for the scientific community.

According to present-day views, effective social management, including the management of scientific activity at institutions of higher education, is impossible without monitoring as its information mechanism (Zyryanov et al. 2014; Bayburtyan 2014; Mukhin and Orlov 2014; Shevchenko 2015; Geidarov 2015). It is known that complex, objective monitoring must be based on multiparametric evaluation; otherwise the absolutization of the selected indicators, i.e. the overestimation of their role, is inevitable (Petkov and Romanov 2015; Maslak 2006; Tolstova and Voronina 2015). With regard to monitoring the research activity of scientific workers and scientific teams, the overstated role of the h-index has led to attempts at increasing it artificially, e.g. by means of ungrounded self-citations (Lutsenko 2015; Kincharova and Sokolov 2015). In the modern world, the task of combating the artificial “improvement” of scientometric indicators (the struggle with ungrounded self-citations being a separate component of this task) is a pressing one (Mukhin and Orlov 2014). There evidently arises a metrological (scientometric) problem: the selection and justification of scientometric parameters reflecting the true significance that the research activity of scientific workers and teams has for the scientific community (Lutsenko 2015; Tsyganov 2013; Popova et al. 2015c).

It is also evident to the authors that the task of objectively evaluating the research activity of scientific teams has to be analyzed in the context of a larger task: the evaluation of the competitiveness of educational environments (the microenvironments of departments, the mesoenvironments of faculties, and the macroenvironments of institutions of higher education). For example, the Shanghai ranking methodology for educational environments places at its core the indicators connected with research activity (Zalibekova 2014; Shevchenko 2015; Lutsenko 2015).

It should also be noted that the Shanghai methodology makes it possible to evaluate not only the ranking of the scientific and educational macroenvironments of the institutions of higher education, but also the ranking of the mesoenvironments of faculties and the microenvironments of departments.

Nowadays the two most important scientometric indicators of a scientific organization (which is a social system) are its h-index (Hirsch index) and its i-index (Eck and Waltman 2008; Hirsch 2005; Egghe 2008; Romanov et al. 2015). It is known that the i-index of a scientific organization equals H if not less than H of its members (scientific workers) possess an individual h-index of at least H each (Tsyganov 2013; Franceschini et al. 2010; Egghe 2008). From the authors’ point of view, this index reflects the sociocultural (scientific) potential of an organization or scientific team rather than the significance of its research activity for the scientific community. One more universally recognized indicator is the h-index of a scientific organization: it equals h if not less than h publications by the organization have obtained not less than h references (citations) each (Tsyganov 2013; Franceschini et al. 2010; Egghe 2008). This indicator is fundamentally limited by the number of publications. Unlike the h-index, the i-index is fundamentally limited by the number of people in the scientific team (the h-index is only indirectly limited by the team size). Yet the productivity of a scientific team’s research activity depends not so much on the number of people as on the rational organization of its activity and its effective management (Shevchenko 2015; Bayburtyan 2014; Christiansen 2000; Lazarev and Eliseeva 2015). Both indicators can be evaluated (calculated) by applying the well-known statistical method of the scree plot (Eck and Waltman 2008; Hirsch 2005; Egghe 2008; Lutsenko 2015; Guns and Rousseau 2009).
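Both definitions follow the same scree-plot logic, so a single function can compute either indicator; the sketch below uses purely hypothetical citation counts and member h-indices for illustration.

```python
def h_index(values):
    """Largest h such that at least h of the values are >= h (scree plot logic)."""
    ranked = sorted(values, reverse=True)
    h = 0
    for rank, v in enumerate(ranked, start=1):
        if v >= rank:
            h = rank
        else:
            break
    return h

# The team h-index applies the rule to the citation counts of its publications;
# the team i-index applies the very same rule to its members' individual h-indices.
publication_citations = [25, 18, 12, 9, 6, 4, 2, 1]  # hypothetical data
member_h_indices = [9, 7, 6, 5, 3, 2]                # hypothetical data

print(h_index(publication_citations))  # team h-index: 5
print(h_index(member_h_indices))       # team i-index: 4
```

The same function serves both indicators because the i-index is simply the h-index computed over the list of the members’ individual h-indices.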

It should also not be forgotten that a scientific team is a social system, which is not reducible to the “sum” of its individual scientific workers (Lazarev and Eliseeva 2015; Kovshov and Kovshova 2015; Covey 2016). Such universally accepted indicators surely have an important humanistic significance: unlike indicators built from “numerators and denominators,” they do not “stimulate” the management to discharge less productive scientific workers, but instead show them objectives for professional growth. Society (including the scientific and educational environment) is a complex system, subordinated to its objectives and tasks and characterized, apart from its components (in the context of this article, scientific workers), by the social ties and interactions between them. Evaluating the true significance of the research activity of a scientific team for the scientific community is a more difficult metrological task than doing so for an individual scientific worker; this is conditioned, at the very least, by the fact that in scientific teams, besides self-citations, there is an effect of cross-citations, i.e. one member of the scientific team provides references to the publications of another (Lutsenko 2015; Tsyganov 2013; Popova et al. 2015a). So the question about the true significance of a creative (scientific) team for the upper-level social system still remains open.

Methods

The methodological basis of the research is formed by the system approach (viewing the science as a social institution having unbreakable ties with the society in general), the metasystem approach (viewing the results of scientific activity as a metasystem, i.e. the system with relatively independent components), the probabilistic-statistical approach (viewing the research activity as a probabilistic process), the sociological approach (viewing a scientific team as a social system and the significance of results provided by the team’s research activity as one of its social activity indicators), and the qualimetric approach (viewing the significance of a scientific team’s research activity as a latent variable reflected by a number of quantitative criteria).

For achieving the research objective the following complementary methods of research were applied: the analysis of the problem situation, the analysis of the scientific literature and the best practices of research activity management at the institutions of higher education (benchmarking), the cognitive, structural–functional and mathematical modelling, the methods of graph, set and relation theory, the methods of qualimetry (the theory of latent variables), the methods of probability theory and mathematical statistics, the method of expert evaluations, the methods of the theory of limits (Andrich 2001; Maslak 2006; Lebedeva 2015; Petkov and Romanov 2015; Zopounidis 2002; Popova et al. 2017).

The role of mathematical methods in our research should be particularly noted. The basis of the qualimetric methods is the theory of latent variables: the significance of the research activity of a scientific team is treated as an integral indicator (a latent variable), for which particular criteria (indicator variables) can be identified. The methods of set, graph and relation theory make it possible to create cognitive models of the research activity of creative (scientific) teams and of their interaction with other teams and with the scientific community (the scientific megaenvironment) in general. Set theory also makes it possible to distinguish the truly external references (citations) to the publications of a scientific team, i.e. the references which are neither self-citations nor cross-references (references made to each other by members of the same scientific team). One of the members of the authors’ team has earlier elaborated an algorithmic method, based on the theory of limits, for combating the artificial improvement of monitoring indicators (Petkov and Romanov 2015; Loyko et al. 2015b; Popova et al. 2015a). Applying this method in our research made it possible to diminish the role of self-citations and cross-citations in the evaluation of the research productivity of scientific teams.

Within this article let us define the notion of “cross-references within a scientific team”, by which we shall mean the citing of publications of some team members by other members. It is evident that cross-references, though reflecting the significance of the research activity of some team members for others (i.e. the social activity of certain team members), do not reflect the significance of the scientific team’s research activity for the scientific community (Efendiev et al. 2015; Romanov et al. 2015; Lazarev and Eliseeva 2015; Tsyganov 2013).

Correlation analysis (the calculation of correlation coefficients) was used to show that the modified h-index (the authors’ parameter) differs from the traditional one. Alongside the calculation of correlation coefficients, the principal statistical method of our research was the well-known scree plot method (Eck and Waltman 2008; Hirsch 2005; Egghe 2008; Lutsenko 2015; Guns and Rousseau 2009), which was applied for the evaluation of both the traditional indicators (the i-index and the h-index) and the authors’ ones.

The research was carried out at the institutions of higher education and research institutes situated in Krasnodar Krai of Russia. New criteria for evaluating the significance of the research activity performed by scientific teams (for the scientific community) were identified by the authors due to applying the methods of qualimetry, whose scientific basis is the theory of latent variables (Andrich 2001; Maslak 2006; Lehmann et al. 2006; Dudina 2015).

Through the Russian Science Citation Index (RSCI, with the web-site elibrary.ru as its technological platform), the primary data about the results of the research activity carried out by the scientific and educational workers of the institutions of higher education and by the scientific workers of the research institutes located in Krasnodar Krai were obtained. Working with this technological platform makes it possible to obtain the number of scientific workers registered in RSCI, the i-index of the organization, the h-index of the organization and other consolidated parameters of the organization. It is also possible to obtain detailed information, namely information about publications and the references (citations) to them. The initial information about the cited and citing publications (the citing publications constituting the references) includes not only the titles of the publications themselves and the lists of authors (including their code numbers in RSCI), but also the authors’ workplaces (the region of Russia, the city, the name of the scientific organization and its structural subdivisions: faculties or departments). A reference to a publication was considered a cross-reference if the intersection of the sets of workplaces of the authors of the cited and citing publications was not empty. All the calculations of the authors’ indicators were thus based on the data (initial information) from RSCI, and no additional data from other sources were used.

The research base was provided by Kuban State Technological University (KubSTU), Trubilin Kuban State Agrarian University (KubSAU), Kuban State University (KubSU), Kuban State Medical University (KubSMU), Kuban State University of Physical Education, Sports and Tourism (KubSUPEST), Pustovoit All-Russian Research Institute of Oil Crops (ARRIOC), and North Caucasian Regional Research Institute of Horticulture and Viticulture (NCRRIHV); the basic data on these scientific organizations are presented in Table 1 (notes: NA is the number of scientific workers, n′ and n″ are the numbers of mesoenvironments and microenvironments respectively, X′ is the number of publications in RSCI, X″ is the number of publications in Scopus; the 10-year h-index is given for the time span from 2007 to 2016 inclusive). The common features of these scientific organizations are the presence of postgraduate courses (i.e. opportunities for training scientists), the presence of publications in the international scientometric systems, and the absence of workers or graduates awarded the Nobel Prize (one of the success parameters according to the Shanghai methodology).

Table 1 Basic data on the scientific organizations of Krasnodar Krai

Within the framework of the research, the departments of higher educational institutions were considered scientific microenvironments, the faculties of higher educational institutions and the subdivisions of research institutes were considered mesoenvironments, the higher educational institutions and the research institutes themselves were considered macroenvironments, while the scientific community of Russia was considered the megaenvironment. Within the limits of this research the authors are primarily interested in the indicators reflecting the significance of the results of the research activity of micro-, meso- and macroteams for the scientific community (the scientific megaenvironment).

Authors’ indicators

From the authors’ point of view, in order to evaluate the significance of scientific teams for the scientific community it is necessary to apply the qualimetric approach, which requires the selection and application of multiple criteria (and not just one). Deriving the authors’ indicators is impossible without the simplest mathematical models of research activity, which are based on set theory. Let S be the set of the members of a scientific and educational team and \( s = P\left( S \right) \) be the cardinality of this set (the number of its members, P denoting the cardinality of a set); let Z be the set of publications involving the team members (\( z = P\left( Z \right) \) being the cardinality of this set, i.e. the number of publications); and let W be the set of references (citations) to the publications authored by the team members (\( w = P\left( W \right) \) being the cardinality of this set, i.e. the number of citations of the publications by the scientific team). It is evident that \( Z = \bigcup\nolimits_{a = 1}^{s} {Z_{a} } \) and \( W = \bigcup\nolimits_{b = 1}^{z} {W_{b} } \), where \( \bigcup \) denotes set union, \( Z_{a} \) is the set of publications by the a-th team member (scientific worker), and \( W_{b} \) is the set of citations obtained by the b-th publication. The set of all authors participating in the whole set of publications Z is \( A = \bigcup\nolimits_{b = 1}^{z} {A_{b} } = S\bigcup Q \), where \( A_{b} \) is the set of authors of the b-th publication, Q is the set of authors not included in the analysed scientific team, and \( q = P\left( Q \right) \) is the number of scientific workers with whom the given team collaborates.
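As a minimal illustration of these definitions (with entirely hypothetical member and publication identifiers), the sets Z, A and Q can be built with ordinary Python set operations:

```python
# Hypothetical team S = {a1, a2, a3}; c1 and c2 are outside co-authors.
Z_by_member = {                 # Z_a: publications of each team member
    "a1": {"p1", "p2"},
    "a2": {"p2", "p3"},
    "a3": {"p3"},
}
authors_by_pub = {              # A_b: authors of each publication
    "p1": {"a1", "c1"},
    "p2": {"a1", "a2"},
    "p3": {"a2", "a3", "c2"},
}

S = set(Z_by_member)                       # the team itself
Z = set().union(*Z_by_member.values())     # Z = union of the Z_a
A = set().union(*authors_by_pub.values())  # A = union of the A_b = S ∪ Q
Q = A - S                                  # external co-authors

print(len(Z))     # z = 3
print(sorted(Q))  # ['c1', 'c2'], so q = 2
```

Representing publications and authors as sets makes the later indicators (λ, ψ) direct applications of union, difference and cardinality.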

The composition of the set Q evidently depends on the hierarchy level of the analysed social system (Shevchenko 2015; Loyko et al. 2015a). For example, if the authors of a publication are members of Department A and Department B of the same university, the member of Department B is an external member for the microteam of Department A, but an internal member for the macroteam of the educational institution.

While building the primary mathematical models of research activity, the authors took into consideration the fact that evaluating the citation-based indicators of a team’s activity is much more complicated than evaluating those of an individual scientific worker. This complexity results primarily from the close ties between team members. For example, what is self-citation for a team? And how can the “really external” citations be distinguished from citations to the team’s publications provided by authors who, while not team members themselves, are co-authors of team members?

As defined above, S is the set of the members of the analyzed team and Q is the set of their co-authors (external to the team); let D be the set of authors who have made references to the publications of the team members. Then the coefficient of social significance (for the scientific megaenvironment) of the scientific activity results of the analyzed team is \( \lambda = P\left[ {D - \left( {S\bigcup Q } \right)} \right] \), where P denotes the cardinality of a set, the argument is the set of scientific workers who have cited the articles of the team while being neither its members nor their co-authors, and \( \bigcup \) denotes set union. In other words, λ is the number of really external scientific workers who have cited the publications of the team.
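With the same kind of hypothetical author identifiers, λ reduces to one set difference followed by a cardinality:

```python
# Hypothetical RSCI-style author identifiers.
S = {"a1", "a2", "a3"}              # members of the analyzed team
Q = {"c1", "c2"}                    # their external co-authors
D = {"a1", "c1", "x1", "x2", "x3"}  # everyone who has cited the team

# λ = P[D - (S ∪ Q)]: citing authors who are neither members nor co-authors.
really_external = D - (S | Q)
lam = len(really_external)

print(sorted(really_external))  # ['x1', 'x2', 'x3']
print(lam)                      # λ = 3
```

Note that self-citing members (here "a1") and citing co-authors (here "c1") are removed by the difference, which is exactly the property the coefficient is designed to have.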

The presence of the primary mathematical models of research activity (including its results) makes it possible for the authors of the article to suggest a number of indicators reflecting the significance of the research activity of an analyzed scientific team for a social system of a higher rank.

Let us introduce an indicator alternative to the well-known i-index of a scientific team: \( I^{\prime } = \frac{{\sum\nolimits_{d = 1}^{I} {L_{d} } }}{I} \), where I is the i-index of the team and \( L_{d} \) is the individual h-index of its d-th member (only the I most productive team members being taken into account). This indicator takes into consideration the “extra” potential of the scientific team members, possesses sufficient differentiating ability and is in principle not limited by the number of team members (scientific workers). From the authors’ point of view, it is also necessary to take into account the empirical average h-index of all the team members: if it differs significantly from \( I^{\prime } \), the lagging scientific workers should be stimulated toward professional growth.

The next parameter is the index of demand for the arsenal of the highest-quality publications (the “materialized” results of the team members’ research activity): \( R = \frac{{\sum\nolimits_{e = 1}^{H} {r_{e}}}}{H} \), where H is the h-index of the scientific team (i.e. the number of its most cited publications) and \( r_{e} \) is the citation count of the e-th publication. Unlike the well-known h-index, this indicator reflects the “extra quality” of the most cited publications by the scientific team. Such publications evidently serve as models for the members of the analysed team.
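Both I′ and R are averages over a Hirsch core: the top-I member h-indices in one case, the top-H publication citation counts in the other. A sketch with hypothetical input data:

```python
def hirsch_core(values):
    """Top-k values, where k is the h-index of the list (scree plot cut-off)."""
    ranked = sorted(values, reverse=True)
    k = 0
    for rank, v in enumerate(ranked, start=1):
        if v >= rank:
            k = rank
        else:
            break
    return ranked[:k]

def core_average(values):
    """Mean over the Hirsch core; gives I' for member h-indices, R for citation counts."""
    core = hirsch_core(values)
    return sum(core) / len(core) if core else 0.0

print(core_average([9, 7, 6, 5, 3, 2]))     # I' = (9 + 7 + 6 + 5) / 4 = 6.75
print(core_average([25, 18, 12, 9, 6, 4]))  # R  = (25 + 18 + 12 + 9 + 6) / 5 = 14.0
```

The shared helper mirrors the fact that both indicators refine their classical counterparts by looking inside the core rather than only at its size.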

The next indicator is the index of the geographical latitude of references to the team members’ publications: it equals F if not less than F regions of a federal state provide not less than F references each to the publications by the team members (for international scientometric databases, references provided from various countries are counted instead of regions). A typical example is the fact that the publications by the members of the Russian Academy of Education are well known and in demand in all the regions of Russia as well as abroad. Another example is that the scientific publications on the problems of tolerance by the scientific and educational workers of the faculty of psychology of Lomonosov Moscow State University are widely known in Russia. A third example is that the works by J. Holland and his disciples on genetic algorithms and evolutionary computation (in the field of artificial intelligence) are cited all over the world. Is this not a definite proof of the recognition of the scientific works? The coefficient of the geographical latitude of references is the number of regions f (the cardinality of their set) which have provided references to the publications by the members of the analysed team. The geographical latitude index will make it possible to identify the regions with the greatest demand for the publications by the members of the analysed team.

It is really hard to “improve” the above-stated indicator by means of fraud schemes. Moreover, if a publication is cited all over the world, this signifies its recognition by the wide scientific community; if a publication is cited only in the author’s native town (or, even worse, only in his or her “native” scientific organization), this is no evidence of recognition. A wide geography of references is evidence of the recognition of the publications by the wide (and not the narrow) scientific community. It should be emphasized once again that we are speaking of the influence of scientific workers and teams on the wide scientific community, i.e. on the scientific megaenvironment.

The above-stated indicator F objectively reflects the significance of the team’s research activity for the scientific community. Alongside the difficulty of its artificial “improvement,” the index of the geographical latitude of references has one more advantage as an indicator: it is not directly limited by the number of publications.

From the authors’ point of view, the new parameter, the coefficient of the geographical latitude of references, needs normalization, because various countries consist of different numbers of subjects (regions, states, lands, etc.). The normalized parameter is calculated according to the formula \( f^{\prime} = \frac{f}{\wp} \), where \( \wp \) is the number of subjects of the federal state (for Russia it is 85). The normalized parameter reflects the degree of significance of a scientific team’s research activity for the entire federal state. At the same time, the index of the geographical latitude (the parameter F) does not need normalization, because the value of F is unbounded and non-linear; a normalized indicator F would create a distorted impression of the significance that the results of the research activity of the compared scientific teams have for the scientific community.

There is a question: “How can the above-stated parameters be identified by means of modern information (computer) technology?” The algorithm for identifying the parameters F and f is as follows.

  • Step 1 Define the set of references (citations) to the publications by the scientific team (national scientometric systems make it possible to perform this operation), i.e. the set of publications that have cited the publications by the analyzed scientific team.

  • Step 2 Build a table of two columns, the first column listing the regions of the federal state, and the second column the number of citations from each region (at the initial stage of the algorithm it equals zero). The number of lines in the table coincides with the number of regions.

  • Step 3 For each citing (not cited) publication, define the set of its authors (according to a scientometric database) as well as the set of the regions of their residence (or work). If more than one author of a citing publication lives (works) in the same region, that region is taken into account only once (during the analysis of the given citing publication). The number written in the table at the intersection of the corresponding line and column is increased by 1. The operation corresponding to this third step is a complex (not simple) information process: for each cited publication it requires a detailed analysis of each citing publication (i.e. of each reference). Nevertheless, hypertext technology makes it possible to speed up this analysis. When a hyperlink (in RSCI) to a citing publication is followed, its authors (including their code numbers) as well as their scientific organizations and their geographical locations (settlements with their code numbers and regions) are shown. If all the authors of a citing publication work in the same settlement (town or city), it is shown once; if different authors of a citing publication work in different settlements, each author’s place of work is shown. The performance of the third step is significantly speeded up in two cases: first, if all the authors of a citing publication work in the same settlement; second, if the settlements where the authors of a citing publication work have already been taken into account in the analysis of the given cited publication (i.e. were discovered during the analysis of the preceding citing publications).

  • Step 4 After completing the analysis of the citing publications, sort the regions in descending order of the number of citations obtained from them.

  • Step 5 The coefficient of the geographical latitude is the number of regions which have provided not less than one citation (i.e. a nonzero number of citations). The index of the geographical latitude is defined by the scree plot method, i.e. it equals the largest rank of a region (in the sorted table) for which the number of citations obtained from it is not smaller than that rank.

The algorithm described above can be significantly simplified thanks to the broad functionality provided by RSCI (more exactly, by its technological platform, the web-site eLIBRARY.ru). For this purpose, in the computer program (the web-site software) the analyzed scientific team (higher educational institution, faculty or department) is chosen, the time interval is set, the citations to the publications by this team (apart from the citing publications themselves) are retrieved, and then the list of organizations where the authors of the citing publications work (each affiliated with a definite geographical settlement) is selected.

Let us provide an example. For a certain scientific team the following numbers of citations from regions have been obtained (sorted in descending order of the citation number): 353, 27, 23, 22, 19, 15, 15, 12, 12, 11, 10, 9, 8, 8, 5, 4, 2, 2, 1, 0, 0, etc. It is evident that the coefficient of the geographical latitude of references (citations) to the publications by the team equals 19, and the index of the geographical latitude is 10 (as only 10 citations have been obtained from the 11th region, the last region taken into account is the 10th one).
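This example can be checked mechanically; the sketch below reproduces both values from the sorted per-region citation counts above.

```python
def scree_index(desc_counts):
    """Largest rank F such that the F-th count (in descending order) is >= F."""
    F = 0
    for rank, c in enumerate(desc_counts, start=1):
        if c >= rank:
            F = rank
        else:
            break
    return F

# Per-region citation counts from the example, sorted in descending order.
counts = [353, 27, 23, 22, 19, 15, 15, 12, 12, 11,
          10, 9, 8, 8, 5, 4, 2, 2, 1, 0, 0]

f = sum(1 for c in counts if c > 0)  # coefficient of geographical latitude
F = scree_index(counts)              # index of geographical latitude

print(f)  # 19
print(F)  # 10
```

The loop stops exactly where the text describes: the 10th region provides 11 citations (at least 10), while the 11th provides only 10 (fewer than 11).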

The index and coefficient of the citation rate provided for the team’s publications by authors, editions and organizations (scientific, scientific-educational, and of other types) are defined in a similar way. For example, the index of the citation rate provided for the analysed team’s publications by editions equals F‴ if not less than F‴ editions have each provided not less than F‴ references (taking into consideration the entire arsenal of publications by the team members). Unlike the index of the geographical latitude of references, these indicators do not need normalization, because the number of scientific workers, publications and organizations is always large enough.

The question arising in this connection is whether such criteria are applicable to an individual scientific worker (noting that the potential of a team largely depends on the potential of its members). They surely are, but achieving high values of the indicators listed above requires being a renowned specialist or even an outstanding (famous) scientist. At the same time, the larger the creative team (the scientific microenvironment), the larger the publication arsenal, and therefore the higher the probability of achieving high values of the above-mentioned indicators. We should also not ignore the fact that nowadays there is a stable tendency toward publishing articles in collaboration and, likewise, toward achieving significant scientific results through collective work on research projects (which is connected with the increasing complexity of scientific research). For example, financed projects (grants) are normally carried out collectively (Balykhin and Generalova 2015; Covey 2016; Zalibekova 2014; Kincharova and Sokolov 2015).

Calculating the impact factor is as appropriate for an author team as it is for scientific editions (journals). A productive team should evidently influence the scientific community, i.e. the scientific and educational megaenvironment (it should be recalled here that, apart from the h-index, an average citation number per publication is calculated for an individual scientific worker). The impact factor of a scientific team is the ratio of the number of citations obtained by the scientific team’s publications to the number of those publications (this indicator can be calculated for various time periods): \( IF = \frac{w}{z} \). The given indicator should be calculated for a 2-year period, i.e. as for scientific journals (Efendiev et al. 2015; Loyko et al. 2015b).

Let us note once again that the results of a research team’s scientific activity should be significant for the wider scientific community. If L is the number of scientific workers registered in a scientometric database (for the Russian Science Citation Index it equals 798,970) and λ is the number of really external scientific workers as defined above, the latitude of social recognition of the results of the scientific team’s research activity is \( \varsigma = \frac{\lambda}{L} \).

The index of the scientific team’s research productivity for the external scientific environment, which is actually the number of really external citations, is \( \psi = P\left( {\psi^{\prime}} \right) = P\left( {\bigcup\nolimits_{j = 1}^{\lambda} {\sigma_{j}}} \right) \), where P is the cardinality of a set, \( \sigma_{j} \) is the set of citations made by the j-th “really external” scientific worker to the publications by the members of the analysed team, and ψ′ is the set of “really external” citations to the scientific team’s publications. It is clear that the ψ′ set includes neither the self-citations of the scientific team’s members, nor the cross-references, nor the citations by co-authors. The above-stated parameter reflects the real significance of the research activity of the scientific team for the wide scientific community.
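Since ψ is the cardinality of a union of citation sets, it is naturally computed with set operations; a minimal sketch with hypothetical citation identifiers:

```python
# Hedged sketch: psi = |union of sigma_j|, the number of distinct "really
# external" citations. Each sigma_j is the set of citation identifiers made
# by the j-th really external worker; identifiers here are hypothetical.

def external_productivity_index(citation_sets):
    """Cardinality of the union of the per-worker citation sets."""
    union = set()
    for s in citation_sets:
        union |= s  # duplicates across workers are counted once
    return len(union)

sigma = [{"c1", "c2"}, {"c2", "c3", "c4"}, {"c5"}]
print(external_productivity_index(sigma))  # 5
```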

The indicator suggested above needs to be evaluated taking the factor of time into consideration: one and the same number of “really external” citations can be obtained over different intervals of time, which implies a different significance of the scientific team’s research activity results: \( \psi_{T} = \frac{\psi }{T} \), where T is the time interval (in years) during which the citations were obtained. The suggested indicator is the speed (tempo) of obtaining social recognition of the scientific team’s research activity results.

Unfortunately, the parameters R and I′, which have been presented in this article earlier, can be subjected to artificial “improvement” by means of unjustified self-citations, citations by co-authors or cross-self-citations (within a scientific team). That is why the authors suggest a modified h-index of a scientific team, which is calculated in the following way. Let us define the authors’ index of the publication’s number of citations: \( I^{\prime \prime } = N_{1} + \sum\nolimits_{k = 1}^{{N_{2} }} {0.8^{k} } + \sum\nolimits_{l = 1}^{{N_{3} }} {0.6^{l} } + \sum\nolimits_{m = 1}^{{N_{4} }} {0.4^{m} } + \sum\nolimits_{o = 1}^{{N_{5} }} {0.2^{o} } \). Here \(N_1\) is the number of “really external” citations of the publication (there are neither members of the analysed scientific team nor their co-authors among the authors of the citing publications); \(N_2\) is the number of external citations of the publication (the authors of the citing publications are neither members of the analysed team nor authors of the cited publication external to the team); \(N_3\) is the number of citations provided by scientific workers who do not belong to the analysed team but are co-authors of the given publication; \(N_4\) is the number of citations provided by the members of the analysed team who are not co-authors of the publication; \(N_5\) is the number of citations provided by the members of the analysed team who are authors of the publication (i.e. the number of self-citations). The difference between the “really external” and the external citations is as follows. Any member of the analysed scientific team can have social ties with other scientific teams (having co-authors in the scientometric system who belong to other scientific teams).
The calculation of the “really external” citations excludes the references to the publication provided by all the co-authors (in the scientometric system) of all the members of the analysed scientific team, as well as the citations by all the team members. The calculation of the external citations excludes only the references by the members of the analysed team (cross-citations) and the citations by the authors of the publication who do not belong to the analysed team; at the same time, it does not exclude the references to the publication provided by the external co-authors of the analysed team (except the co-authors of the publication itself). It should be remembered that the formula presented above is meant for evaluating the significance of the research activity of a scientific team, not of a single scientific worker. After the citation counts of all the team’s publications have been replaced by I″, the modified h-index of a scientific team equals H′ if not less than H′ publications by the team members have an I″ value of not less than H′ each.
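The weighted index I″ and the modified team h-index H′ built on top of it can be sketched as follows (a minimal sketch with illustrative inputs; function names are the author of this sketch's, not the paper's):

```python
# Hedged sketch of the weighted citation index
#   I'' = N1 + sum_{k=1..N2} 0.8^k + sum 0.6^l + sum 0.4^m + sum 0.2^o
# and of the modified team h-index H' computed over the I'' values.

def weighted_citation_index(n1, n2, n3, n4, n5):
    """N1 counts fully; the other citation categories enter as decaying
    finite geometric sums, so their contribution is bounded."""
    def geo(w, n):
        return sum(w ** i for i in range(1, n + 1))
    return n1 + geo(0.8, n2) + geo(0.6, n3) + geo(0.4, n4) + geo(0.2, n5)

def modified_h_index(i2_values):
    """Largest H' such that at least H' publications have I'' >= H'."""
    vals = sorted(i2_values, reverse=True)
    h = 0
    for rank, v in enumerate(vals, start=1):
        if v >= rank:
            h = rank
        else:
            break
    return h

# One publication with 3 really external, 2 external and 1 co-author citation:
print(round(weighted_citation_index(3, 2, 1, 0, 0), 2))  # 5.04
```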

The authors explain this calculation methodology by the need to suppress attempts at “improving” the scientometric indicators artificially (the latter four parameters can be artificially “improved”). The publication citation index formula presented above is based on the theory of limits (Petkov and Romanov 2015; Lebedeva 2015) and is aimed at counteracting unjustified self-citations and cross-citations (a small number of self-citations, citations by co-authors and cross-citations being justified). It is known that the sum of an infinitely decreasing geometric progression is finite (Petkov and Romanov 2015; Zopounidis 2002; Maslak 2006). That is why such a methodology is suitable for counteracting the artificial inflation of unjustified citations. On the basis of the mathematical theory of limits it is not hard to show that the maximum possible values of the second, third, fourth and fifth components in the authors’ formula (the publication citation index) are 4, 1.5, 0.667 and 0.25 respectively. This fully corresponds to the logic of counteracting the artificial “improvement” of publication quality: self-citations should have the least value; moreover, the logic of research activity in any scientific sphere shows that from 1 to 3 self-citations are justified (logically grounded) (Kincharova and Sokolov 2015).
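The limiting values cited above follow from the formula for the sum of an infinite geometric series, w + w² + … = w / (1 − w) for |w| < 1, and are easy to verify:

```python
# Quick numeric check of the bounds quoted in the text: each weighted
# component of I'' cannot exceed w / (1 - w), however many citations of
# that category are added.

limits = {w: w / (1 - w) for w in (0.8, 0.6, 0.4, 0.2)}
for w, lim in limits.items():
    print(f"w = {w}: limit = {lim:.3f}")
# prints limits of 4.000, 1.500, 0.667 and 0.250 respectively
```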

A more “rigid” version of the h-index of a scientific team is defined as follows: it equals H″ if not less than H″ publications of the scientific team have received not less than H″ “really external” citations each. The advantage of the “rigid” h-index (compared to the modified one) is the simplicity of its evaluation and the considerably smaller amount of time needed for its calculation (owing to the smaller volume of information to be processed).

The question that arises here is how the weight coefficients in the publication citation index formula are defined. For the authors it is clear that for the “really external” citations the weight coefficient must be the maximum possible one (i.e. 1.0), while for the other categories of citations it must decrease in equal steps from 0.8 to 0.2. The scientific and educational workers of the institutions of higher education of Krasnodar Krai (n = 768) agreed to participate in the evaluation of the weight coefficients. 77% of the respondents voted for the weight coefficient 0.8 for parameter N2 and 72% voted for 0.6 for parameter N3, while 86, 82 and 91% of the respondents voted for the remaining weight coefficients respectively (69% of the respondents supported all the weight coefficients proposed by the authors).

Yet, the significance of a scientific team for the scientific community is reflected not only in the significance of its research results but also in the significance of the team members (i.e. scientific workers) themselves. In their previous articles (Popova et al. 2015a; Romanov et al. 2015) the authors introduced such indicators as the coordination number and the social valence index of a scientific worker. For the authors of this article it is evident that there is no strict connection between the social valence of the team members and the integrity of the team. For example, some of the teachers working at a university department may have no co-publications with their colleagues while having a large number of publications co-authored with the members of other scientific teams.

The notions of the “coordination number” and “social valence” are as applicable to a scientific team as they are to an individual employee. The coordination number ω of a scientific team is the number of other teams it interacts with. The team valence is the volume of joint research results obtained while interacting with other teams (and can be expressed in the number of publications, joint research projects, etc.): \( \varphi = P\left( {\bigcup\nolimits_{q = 1}^{\omega } {\chi_{q} } } \right) \), where P is the cardinality of a set and \( \chi_{q} \) is the set of works published by the members of the analysed scientific team in co-authorship with the members of the q-th external scientific team. The social valence index of the analysed team equals V if it has joint activity results with not less than V other teams, the volume of the results achieved with each of them also being not less than V (e.g. not less than V co-publications). In this case it does not matter whether such a volume has been achieved owing to an increase in the number of interacting employees or owing to an intensified interaction of certain (constantly interacting) scientific team members. Unlike the coefficient of the geographical latitude of references to publications (parameter f), the team’s social valence index does not need normalization, because the number of scientific teams collaborating with the analysed one is always much smaller than the number of scientific teams within the country.
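The team valence φ and the valence index V can be sketched in the same set-based style; the joint-publication identifiers and counts below are hypothetical:

```python
# Hedged sketch of the team valence phi (distinct joint publications with
# external teams) and of the social valence index V (an h-type threshold
# over per-team joint-result counts). All data are illustrative.

def social_valence(joint_sets):
    """phi = cardinality of the union of the joint-publication sets chi_q."""
    union = set()
    for chi in joint_sets:
        union |= chi
    return len(union)

def valence_index(per_team_counts):
    """Largest V such that the team has at least V joint results with each
    of at least V other teams."""
    vals = sorted(per_team_counts, reverse=True)
    v = 0
    for rank, c in enumerate(vals, start=1):
        if c >= rank:
            v = rank
        else:
            break
    return v

chi = [{"p1", "p2", "p3"}, {"p3", "p4"}, {"p5"}]  # joint works with 3 teams
print(social_valence(chi))        # 5
print(valence_index([3, 2, 1]))   # 2
```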

Results and discussion

All the given scientometric parameters (except the scientific team’s impact factor) should be defined for a 10-year period (from 2007 to 2016 inclusive), the average tempo of obtaining social recognition of research activity results being one of them. The authors explain this period of time in the following way. Firstly, a scientific team should be mature and characterized by stability. Secondly, in the modern world the publication citation rate should be evaluated over not less than a 5-year period, while evaluation over a period exceeding 10 years is not expedient, because science does not “stand still” and social recognition will be obtained by new publications. Thirdly (and most importantly), 10 years is a statistically sufficient interval for evaluating any kind of activity (including research), and many scientometric indicators, due to their mathematical nature (essence), are statistical parameters.

For the evaluation (calculation) of any scientometric parameter listed, described and defined in this article (both the traditional and the authors’ ones) it is possible to obtain primary (input) information from a national scientometric database (the authors’ methodology being applicable mostly to the analysis of the significance of research activity results within a country). By means of a query to a national scientometric database (in Russia it is the Russian Science Citation Index) it is possible to obtain both the complete set of publications and the set of references (citations) to the publications of a scientific team for a definite educational institution, for its definite subdivision (a faculty or a department), and for a definite period of time (in the context of the authors’ research it is 10 years). As a result of a database query it is possible to extract such information (aspects) as the co-authors and the authors of the citing publications, their places of work (indicating the cities and regions, as well as the scientific status, though the latter was not taken into account in this article), etc.

The authors of this article have defined and calculated the traditional and the innovative (the authors’ own) scientometric indicators for the scientific institutions of Krasnodar Krai (presented graphically in Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11). The minimum and maximum values of the same indicators for the scientific and educational mesoenvironments and microenvironments are presented in Table 2. The analysis of the data in Table 2 gives clear evidence of the high differentiating potential of both the traditional and the authors’ indicators. The values of the traditional indicators and of each of the authors’ indicators, normalized to their average value (100% on the ordinate axis) for the seven scientific institutions of Krasnodar Krai, are shown in Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 (it is evident that the number of figures coincides with the number of the authors’ indicators).

Fig. 1 Comparison of the authors’ indicator I′ (average h-index of the most productive team members) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 2 Comparison of the authors’ indicator R (the index of demand for the arsenal of the team’s highest quality publications) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 3 Comparison of the authors’ indicator ψT (the tempo of obtaining the social recognition of the team’s research activity results) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 4 Comparison of the authors’ indicator f (the coefficient of the geographical latitude of references to the team’s publications) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 5 Comparison of the authors’ indicator F (the index of the geographical latitude of references to the team’s publications) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 6 Comparison of the authors’ indicator IF (the team’s 2-year impact factor, referring to the years 2014 and 2015) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 7 Comparison of the authors’ indicator ζ (the latitude of social recognition of the results of the team’s research activity) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 8 Comparison of the authors’ indicator H′ (the modified h-index of a team) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 9 Comparison of the authors’ indicator H″ (the rigid h-index of a team) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 10 Comparison of the authors’ indicator ϕ (the social valence of a team) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Fig. 11 Comparison of the authors’ indicator V (the social valence index of a team) with the h-index and i-index for the scientific institutions of Krasnodar Krai

Table 2 Edge values of the scientometric parameters for the scientific and educational mesoenvironments and microenvironments of Krasnodar Krai

A rough comparison between the authors’ indicators and the h-index is shown in Table 3, using five grades: “o” stands for no or only a slight decrease/increase compared to the h-index; “−”/“+” stand for a clearly visible decrease/increase compared to the h-index; “− −”/“+ +” stand for a significant decrease/increase compared to the h-index.

Table 3 Rough comparison between the authors’ indicators and h-index

As seen from the data in Table 3, the indicator R (the average index of demand for the arsenal of the highest quality publications) is the closest to the h-index, while the biggest difference from the traditional indicator (h-index) is observed for the indicators f and IF (the coefficient of the geographical latitude of references and the 2-year impact factor of a scientific team, respectively). Thus the analysis of the data presented in Table 3 makes it possible to evaluate the contribution of all the indicators to the performance of the scientific institutions by summarizing the effects of each of the authors’ indicators. It is clearly seen from Table 3 that the total improvement of performance compared to the h-index is the highest for ARRIOC, KubSUPEST and KubSU, while the performance of NCRRIHV and KubSMU is much lower when evaluated by the authors’ indicators.

The comparison of Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11 with Table 1 has shown that most of the indicators depend on the scientific team size. If the size (shown in Table 1) is taken into account, the low performance of two scientific teams can be explained by their h-index values being high relative to their size, the most probable reason being self-citations and cross-references.

The analysis based on the authors’ indicators results in a more realistic evaluation of the research performance of the studied scientific institutions of Krasnodar Krai. Among the three largest institutions (KubSAU, KubSTU and KubSU), which have about the same h-index, the best performance is shown by Kuban State University (KubSU), which has the best values in 7 out of the 11 indicators. Kuban State Medical University (KubSMU) follows, but it is clearly behind the first three institutions in many indicators, and not as close to them as the h-index implies. Finally, among the three smallest institutions (both in terms of size and of many scientometric indicators), NCRRIHV evidently performs better than ARRIOC but not better than KubSUPEST, although its h-index is higher.

The best i-index result (evidently exceeding even the two other largest scientific institutions, KubSU and KubSTU) is demonstrated by Kuban State Agrarian University (KubSAU), which nevertheless shows results lower than those of the two other institutions of higher education according to a number of the authors’ indicators and has approximately the same h-index. This is because KubSAU has a clearly identifiable “core” of scientific workers with high individual h-indices. The fact that Kuban State University (KubSU) and Kuban State Technological University (KubSTU), exceeded by KubSAU in the i-index, are not exceeded by it in the other traditional indicator (or in some of the authors’ indicators) is explained by the closer interaction within a scientific team and the synergy of the team members’ joint efforts in achieving high results (rather than the “gain” due to the “elite” status of the team’s members) typical of KubSU and KubSTU. The synergy of the team members’ interaction is especially evident for KubSMU: this institution of higher education, left behind the three largest universities (KubSU, KubSTU, KubSAU) in terms of the i-index, is practically equal to them in the h-index as well as in a number of the authors’ indicators.

Thus the seven leading scientific institutions of Krasnodar Krai can be divided into three clusters (subsets). Cluster 1 includes KubSU, KubSTU, KubSAU and KubSMU as the leading scientific institutions of Krasnodar Krai. Cluster 2 includes a single object, KubSUPEST, while cluster 3 includes NCRRIHV and ARRIOC. Kuban State University of Physical Education, Sport and Tourism (KubSUPEST) is considered a unique object characterized by its “medium location” in terms of many scientometric indicators as well as its size. The other two research institutes are placed in one cluster, as they have the lowest scientometric indicators as well as the smallest team sizes. It is necessary to remember, however, that no citation-based indicators reflect the practical significance of the results of the scientific workers’ and teams’ research activity (Jonash and Sommerlatte 1994; Kravchenko and Salygin 2015; Popova et al. 2017).

Besides, the correlation coefficients of the h-index with the modified and the “rigid” h-indices (the H′-index and the H″-index, respectively) have been calculated for the scientific microenvironments and turned out to be 0.56 and 0.48 respectively. It means that, owing to self-citations and cross-citations, it is very hard to define the real productivity of the scientific teams’ research activity based on the h-index. At the same time, the correlation coefficient between the H′-index and the H″-index equals 0.86, which means that the authors’ methodology for diminishing the role of self-citations and cross-citations (based on the theory of limits) makes it possible to counteract the artificial improvement of scientometric indicators effectively.
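Such a correlation between two indicator series across microenvironments is an ordinary Pearson coefficient; a minimal sketch with invented indicator values (not the study’s data):

```python
from math import sqrt

# Hedged sketch: Pearson correlation between two indicator series across
# departments (e.g. h-index vs. the "rigid" H''-index). Values are invented
# for illustration only.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

h_values = [12, 9, 7, 5, 4, 3, 2]        # hypothetical h-indices
h_rigid = [6, 5, 2, 3, 1, 2, 1]          # hypothetical H''-indices
print(round(pearson(h_values, h_rigid), 2))  # 0.91
```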

The authors also applied varied weight coefficients in the formula for calculating the publication citation rate (except for the highest coefficient, i.e. 1.0). In total, 20 combinations of weight coefficients were studied (one example combination being 0.9, 0.8, 0.7 and 0.6). The correlation coefficients of the resulting modified h-indices (for the scientific microenvironments) with the “rigid” h-index vary within the range from 0.82 to 0.89. So the modified h-index adequately reflects the scientific team’s research productivity.

At the same time, the comparison of the two authors’ indicators (the modified h-index and the rigid h-index) clearly shows that they do not differ too much. Therefore, it is more reasonable to use the rigid h-index for evaluating the real efficiency of a scientific team’s work; moreover, the volume of information processing required for its evaluation (calculation) is several times smaller than for the modified h-index, which makes the evaluation easier. The analysis of the information processes involved in calculating both of these authors’ indicators showed that for a qualified user (a user with a high level of information competence) the time of calculating the rigid h-index is on average 5.8 times shorter than for the modified h-index.

Discussing the results of this research (namely the scientometric indicators proposed by the authors), let us note their advantages and disadvantages. The most important advantage of the majority of the authors’ indicators is the difficulty of their artificial “improvement”, as they diminish the significance of the self-citation and cross-citation effects. In other words, applying the authors’ indicators (parameters) makes it possible to evaluate more clearly the true significance of the research activity of scientific teams for the scientific community. Such authors’ indicators as the index of the geographical latitude of references to the team members’ publications and the modified h-index (with the citation rate calculated on the basis of the theory of limits) are especially resistant to artificial improvement. It is most reasonable to use the authors’ criteria for analysing the dynamics of the research activity of scientific teams (following the principle “compare yourself with your yesterday’s self”, applied not to an individual scientific worker but to a scientific team). Like many other citation-based scientometric parameters, they can be applied to the analysis of research activity in practically any area of scientific knowledge.

However, the indicators proposed by the authors should be used with great caution for monitoring the research activity of scientific and educational environments (be they institutions of higher education or departments and faculties). Firstly, any measurement (and the evaluation of scientometric parameters is a measurement) is characterized by errors and their factors. The possible limitations of the indicators proposed by the authors are the inaccuracies in the procedures of obtaining them, the too small size of publication sets, the differences in the nature and requirements of different research areas, and so on. Secondly, any citation-based parameters have a number of fundamental disadvantages: it is not always possible to define whether a reference is justified; one and the same reference list of a publication may include references of different value (major or minor); it is impossible to define the role of the cited publications in the continuing development of scientific knowledge (are the citing publications, in their turn, significant for the scientific community?). Thirdly, the calculation of both the traditional and the authors’ parameters is rigidly “tied” to a scientometric database (i.e. the input information for the calculation of the indicators can be obtained only from a scientometric database). For example, only 798,970 scientific workers are registered in the Russian Science Citation Index, while there can be many more of them (not all scientific and educational workers in Russia are registered in the national scientometric database). Nevertheless, the complex application of scientometric indicators (both the universally recognized and the authors’ ones) makes it possible to conduct a more objective multidimensional monitoring of the research activity of educational institutions (Shevchenko 2015; Mukhin and Orlov 2014).

Conclusion

The objective evaluation of scientific teams’ research activity is one of the most topical and complicated metrological tasks of modern society (Maslak 2006; Romanov et al. 2015; Tsyganov 2013; Tolstova and Voronina 2015). The complexity of this task is conditioned both by the multidimensional character of research activity (and of its results) and by the complexity of scientific teams as social systems (Dudina 2015; Bayburtyan 2014; Covey 2016). The analysis and generalization of the research results have led to the following conclusions:

  1.

    A model for the objective evaluation of scientific teams’ research activity has been suggested; it is viewed as a necessary component of monitoring research activity in particular and the efficiency of a scientific organization in general. The model consists of eleven indicators aimed at measuring the social significance of research as well as at diminishing the effect of the artificial “improvement” of traditional indicators.

  2.

    The analysis of the research activity of the scientific organizations of Krasnodar Krai has shown that the recommended set of indicators, reflecting the significance of scientific teams’ research activity results, is universal across the levels of the social system (the scientific and educational micro-, meso-, or macroenvironment) and invariant to the profile of a scientific (scientific-educational) institution. In other words, the indicators proposed by the authors are applicable to the evaluation of the research activity of the scientific and educational macroenvironments of institutions of higher education and research institutes, the mesoenvironments of faculties, and the microenvironments of departments. Like many other citation-based indicators, the authors’ indicators are applicable to the analysis (evaluation) of the research activity of scientific teams in all areas of scientific knowledge.

  3.

    Modern information technologies make it possible to obtain the primary information about the results of the research activity performed by scientific teams. This information can be used for calculating (evaluating) all the scientometric indicators (both traditional and those introduced by the authors). The calculation (evaluation) of the authors’ indicators can be formalized and realized by means of computer software.

  4.

    The analysis of actual data has shown that the indicators suggested by the authors adequately reflect the significance of the scientific teams’ research activity results and possess the necessary differentiating ability; this ability appears, first of all, in the wide variety of values of the authors’ indicators for the scientific and educational microenvironments (i.e. the departments of the institutions of higher education). The analysis of actual data has also demonstrated the proper resistance to the artificial improvement of the h-index, especially in the case of the modified and “rigid” h-indices.

The prospects of the authors’ further research are as follows.

  1.

    The creation of the reference model of the scientific and educational team as a subject of research activity as well as the citation-based selection and justification of the criteria for evaluating the level of the scientific team’s integration into the scientific community.

  2.

    The selection and justification of the criteria for the interconnection between the research activity of scientific and educational workers on the one hand and students on the other hand, as well as the creation of the information and probabilistic models of such interconnection.

  3.

    The creation of the methodology for evaluating the new citation-based criterion—the index of the geographical latitude of references to the scientific workers’ and teams’ publications; in the calculation of this indicator the territorial area embraced by the reference (citation) sources and the number of settlements will be taken into account (i.e. the analytic geometry methods will be used).