The Power-weakness Ratios (PWR) as a Journal Indicator: Testing the “Tournaments” Metaphor in Citation Impact Studies

Introduction

Ramanujacharyulu (1964) provided a graph-theoretical algorithm for selecting the winner of a tournament on the basis of the total scores of all the matches, whereby both gains and losses are taken into consideration. Prathap & Nishy (under review) proposed to use this power-weakness ratio (PWR) for citation analysis and journal ranking. PWR has been advocated for measuring journal impact on the grounds that it handles the rows and columns of the asymmetrical citation matrix symmetrically, that its algorithm is recursive (a property it shares with other journal indicators), and that it is mathematically elegant. However, Ramanujacharyulu (1964) developed the algorithm for scoring tournaments (Prathap, 2014). Can journal competitions be compared to tournaments? In our opinion, journals compete in incomplete tournaments: in a round-robin tournament, all the teams are completely connected, and if one team wins, the other loses. This constraint is not valid for journals.

More recently, Prathap, Nishy, and Savithri (in press) claim to have shown that “the Power-weakness Ratio becomes arguably the best quantifiable size-independent network measure of quality of any journal which is a node in a journal citation network, taking into account the full information in the network.” Does PWR indeed improve on the influence weights proposed by Pinski and Narin (1976), the Eigenfactor and Article Influence Scores (Bergstrom, 2007; West, Bergstrom, & Bergstrom, 2010), PageRank (Brin & Page, 2001), and the Hubs-and-Authorities logic of the Web Hypertext Induced Topic Search (HITS; Kleinberg, 1999)? PWR shares with these algorithms the ambition to develop a size-independent metric based on recursion in the evaluation of accumulated advantages (Price, 1976). Unlike these other measures, however, PWR weighs the disadvantages equally with the advantages: the “power” (gains) is divided by the “weakness” (losses). In studies of sports tournaments (e.g. cricket), ranking by PWR was found to outperform other rankings (Prathap, 2014).

In this study, we respond to this proposal in detail by testing PWR empirically in the citation matrix of 83 journals assigned to the Web-of-Science (WoS) category “Library and Information Science” (LIS) in the Journal Citation Reports 2013 of Thomson Reuters. This set is known to be heterogeneous (Leydesdorff & Bornmann, 2016; Waltman, Yan, & van Eck, 2011a): in addition to a major divide between a set of LIS journals (e.g. JASIST, the Journal of the American Society for Information Science and Technology) and a somewhat smaller group of journals focusing on management information systems (e.g. MIS Quart), a number of journals are not firmly related to the set, and one can further distinguish a relatively small group of bibliometrics journals within this representation of the library and information sciences (Milojević & Leydesdorff, 2013).

We focus the discussion first on two sub-graphs of journals: (1) seven journals which cited JASIST at least 100 times during 2012, and (2) nine journals that cited MIS Quart 100 or more times. Furthermore, we study the effect of combining these two subsets into an obviously heterogeneous set of (7 + 9 =) 16 journals. The conclusion will be that the relatively homogeneous subsets converge quickly, whereas PWR converges more slowly in the heterogeneous set. At the level of the total set of 83 journals, convergence was reached, but the results were not interpretable.

In our opinion, one is not allowed to compare impact across the borders between homogeneous sets because citation impacts can be expected to mean something different in other systems of reference. More recently, Todeschini, Grisoni, and Nembri (2015) proposed a weighted variant of PWR (“wPWR”) for situations where the criteria can have different meanings and relevance. However, we have no instruments for weighting citations across disciplines, and the borders of specialties in terms of journal sets are fuzzy and not given (Leydesdorff, 2006).

In other words, scholarly publishing can perhaps be considered in terms of tournaments, but only within specific domains. Journals do not necessarily compete in terms of citations across domains. Citation can be considered as a non-zero-sum game: if one player wins, the other does not necessarily lose, and thus the problem is not constrained as it is in tournaments. Since there are no precise definitions of homogeneous sets, interdisciplinary research can be at risk, while the competition is intellectually organized mainly within core set(s) (Rafols et al., 2012).

Recursive and Size-independent Algorithms for Impact Measurement

The numbers of publications and citations are size-dependent: large journals (e.g. PNAS, the Proceedings of the National Academy of Sciences of the United States of America, or PLoS ONE) contain more publications and can therefore, ceteris paribus, be expected to contain more references and to be more frequently cited. Journal impact indicators have been developed to cope with this (e.g. Moed, 2010). Garfield and Sher (1963) first introduced the journal impact factor (JIF) as a size-independent measure of journal influence. In the case of the JIF, the number of citations (e.g. in year t) is divided by the number of publications (e.g. in the years t-1 and t-2). More generally, the ratio of citations over publications (C/P) is a size-independent indicator (Garfield, 1972).
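For reference, the two-year JIF can be written as follows (our notation; the article gives the definition only in words):

```latex
\mathrm{JIF}(t) \;=\;
  \frac{\text{citations in year } t \text{ to items published in } t-1 \text{ and } t-2}
       {\text{number of citable items published in } t-1 \text{ and } t-2}
```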

Pinski and Narin (1976; cf. Narin, 1976) proposed to improve on the JIF by normalizing citations not by the number of publications, but by the aggregated number of (“citing”) references in the articles during the publication window of the citation analysis. Yanovski (1981, at p. 229) called this quotient between citations and references the “citation factor.” The citation factor was further elaborated into the “Reference Return Ratio” by Nicolaisen and Frandsen (2008). In the numerator, however, Pinski & Narin (1976) used a recursive algorithm similar to the one used for the numerator and denominator of PWR. This example of an indicator based on a recursively converging algorithm was later followed, with modifications, by the above-mentioned authors of PageRank, HITS, Eigenfactor, and the SCImago Journal Rank (SJR; Guerrero-Bote & Moya-Anegón, 2012).

The “Eigenfactor,” for example, can be taken as a numerator and divided by the number of articles in the set in order to generate the so-called “article influence score” (West, Bergstrom, & Bergstrom, 2010; cf. Yan & Ding, 2010). Using Ramanujacharyulu’s (1964) PWR algorithm, however, the same recursive algorithm is applied in the cited direction to the numerator and in the citing direction to the denominator. “Being cited” is thus considered as contributing to “power,” whereas citing is considered as “weakness” in the sense of being influenced. Let us assume that these are cultural metaphors (we return to this in the discussion) and first continue to investigate the properties of the indicator empirically. For a mathematical elaboration, the reader is referred to Todeschini, Grisoni, and Nembri (2015).

In another context, Opthof and Leydesdorff (2010) noted that indicators based on the ratio between two numbers (such as “rates of averages”) are no longer amenable to statistical analysis such as significance testing of differences among the resulting values (Gingras & Larivière, 2011). More recently, other indicators based on comparing observed with expected values have also been introduced (e.g. MNCS by Waltman et al., 2011b; I3 by Leydesdorff et al., 2012; cf. Leydesdorff et al., 2011).

The Power-weakness Ratio (PWR)

Let Z be the cited-citing journal matrix. If the entries are read row-wise, for a journal in row i, an entry such as Zij denotes the citations from journal j in the citation window (2013) to articles published in journal i during the publication window (2011–2012); in social-network analysis these are considered the in-coming links. When the matrix is read column-wise, for a journal in column j, the same entries signify the references that journal j provides to articles published in the other journals; in social-network analysis these are considered the out-going links. (In social-network analysis, the matrix is usually transposed, so that the action of “citing” is considered as the row vector.)

Using graph theory, Z = [Zij] is the notation of the matrix associated with the graph. This matrix can be multiplied by itself. More generally, Z can be raised indefinitely to the kth power, i.e. Zk. The Eigenfactor, for example, is based on a recursive iteration that raises Z to an order where convergence is obtained for what is effectively the weighted value of the total citations (Yan & Ding, 2010). For each journal i, one can find a value pi(k); this can be called the iterated power of order k of the journal i “to be cited.”

For obtaining weakness, the same operations are carried out column-wise by first using the transposed matrix ZT and then proceeding row-wise among these transposed elements in the same recursive and iterative manner as above. Again, for each journal i one can find a value wi(k), which can be considered the iterated weakness of order k of the journal i “to be influenced by.” The empirical question remains whether both pi(k) and wi(k) converge for k → ∞. If convergence is attained, one obtains the converged power-weakness ratio ri(k) = pi(k)/wi(k).

In more formal terminology: the vector of power indexes is the solution to the equation p = Ap, where Zij is the number of times journal j cites journal i and where the matrix A is derived from the matrix Z by normalizing the columns to sum to one. The power pj of journal j is the sum over all i of the fraction of cites from journal i that go to journal j weighted by the power of journal i. Weakness is defined analogously, mutatis mutandis. As noted, a further elaboration in formal terms is provided by Todeschini, Grisoni, and Nembri (2015). The recursive procedure for formalizing the computation of pi(k) is given in graph-theoretical terms in Ramanujacharyulu (1964). An algorithmic implementation using the Stodola method of iteration is provided by Dong (1977). In the appendices, we provide routines for calculating PWR from a citation matrix using Pajek (Appendix 1) or Excel (Appendix 2).
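As an illustration of the procedure described above, the following is a minimal sketch in Python with NumPy (ours, not part of the original study, which used the Pajek and Excel routines of the appendices; the function name pwr is our choice). It follows the un-normalized matrix-power formulation: the vectors are rescaled by their totals at each step, in the spirit of the Stodola method, purely to avoid numerical overflow; since the entries of Zk and (ZT)k have equal totals, this rescaling leaves the ratios unchanged.

```python
import numpy as np

def pwr(Z: np.ndarray, k: int = 20) -> np.ndarray:
    """Power-weakness ratios after k iterations.

    Z[i, j] = citations from journal j ("citing") to journal i ("cited"),
    so that Z @ ones gives the row totals ("power") and Z.T @ ones the
    column totals ("weakness") at k = 1.
    """
    n = Z.shape[0]
    p = np.ones(n)  # iterated power:    p(k) = Z^k 1
    w = np.ones(n)  # iterated weakness: w(k) = (Z^T)^k 1
    for _ in range(k):
        p = Z @ p
        w = Z.T @ w
        s = p.sum()          # p.sum() equals w.sum() at every step,
        p, w = p / s, w / s  # so rescaling leaves p/w unchanged
    return p / w
```

Applied to the citation matrix of Table 3 below (rows “cited” by columns “citing”), pwr(Z, 1) returns the raw cited/citing ratios, and higher k should reproduce the convergence behavior reported in Table 4.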

Note that a journal is thus considered powerful when it is cited by other powerful journals, and weak when it cites other weak journals. This dual logic of PWR is similar to the Hubs-and-Authorities logic of the Web Hypertext Induced Topic Search (HITS), a ranking method for Web pages proposed by Kleinberg (1999), but with one major difference. In the HITS paradigm as applied to a bibliometric context, good authorities would be those journals that are cited by good hubs, and good hubs the journals that cite good authorities; among other things, the elite structure of science can thus be analyzed. Using PWR, however, good authorities are journals that are cited by good authorities, and weak hubs are journals that cite weak hubs. Using CheiRank (e.g. Zhirov, Zhirov, & Shepelyansky, 2010), the two dimensions of power and weakness can also be considered as x- and y-axes in the construction of two-dimensional rankings. A review of ranking techniques using PageRank-type recursive procedures is provided by Franceschet (2011).
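Schematically, with Zij the citations from journal j to journal i, the difference between the two recursions can be written as follows (our paraphrase):

```latex
\begin{aligned}
\text{HITS:} &\quad a^{(k+1)} \propto Z\,h^{(k)}, &\quad h^{(k+1)} &\propto Z^{\mathsf T}\,a^{(k+1)},\\
\text{PWR:}  &\quad p^{(k+1)} \propto Z\,p^{(k)}, &\quad w^{(k+1)} &\propto Z^{\mathsf T}\,w^{(k)}.
\end{aligned}
```

In HITS the two vectors are coupled through each other, whereas in PWR power and weakness are iterated independently and combined only in the final ratio ri = pi/wi.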

Data and Methods

We study the effectiveness of PWR as an indicator using journal ecosystems drawn from the LIS set of the WoS (83 journals) as an example. Two local ecosystems (sub-graphs) are isolated from this larger scientific network, and the cross-citation behavior within each sub-graph is analyzed. Can the indicator provide a measure of the standing of each journal in the cross-citation activity within a sub-graph that is more fine-grained than, for example, the journal impact factor or other indicators defined at the level of the total set? We also compare our results with the SCImago Journal Rank (SJR), because this indicator uses a recursive algorithm similar to PageRank.

One can raise a matrix to a power by recursive matrix multiplication in a spreadsheet program such as Excel. Excel 2010 provides the function MMult() for matrix multiplications, but this function operates on a maximum of 5,460 cells (or n ≤ 73). Matrix multiplications are computationally intensive. However, the network analysis and visualization program Pajek (de Nooy, Mrvar, & Batagelj, 2011) can also be used for matrix multiplication in the case of large sets. We used Pajek to compute PWRs for the full set of 83 journals within the LIS category, and Excel for the computation in the case of the two smaller sub-graphs: (1) JASIST+, the seven journals that cited JASIST 100 or more times in 2012; and (2) MIS Quart+, the nine journals citing this journal to the same extent.

A macro (PWR.MCR) for Pajek is specified in Appendix 1 and provided at http://www.leydesdorff.net/pwr/pwr.mcr. The macro generates PWR values for k = 1 to k = 20 as vectors from a one-mode (asymmetrical) citation matrix with an equal number of rows and columns. Similarly, the Excel file for the JASIST+ set can be retrieved from http://www.leydesdorff.net/pwr/jasist.xlsx. Using the function MMult() in Excel, one can replace cell J4 with “=MMULT($B4:$H4, I$4:I$10),” etc., mutatis mutandis (available at http://www.leydesdorff.net/pwr/mmult.xlsx). In Excel, we use the so-called Stodola method, which simplifies the computation (e.g. Dong, 1977); upon extension to the full set and k = 20, the results of the various methods are similar except for rounding errors caused by how one deals with the main diagonal.

The values on the main diagonal represent within-journal self-citations. One can argue that self-citations should not be included in subsets, since the number of self-citations is global: it remains the same in the total set and in subsets, and may therefore distort subsets (Pinski & Narin, 1976, at p. 302; cf. Price, 1981, at p. 62). In a second sheet of the Excel file, named “without self-citations,” we show that in this case the effects are only marginal. In Appendices 1 and 2, the procedures for using Pajek or Excel, respectively, are specified in more detail.
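In matrix terms, excluding self-citations amounts to zeroing the main diagonal before the analysis; a minimal NumPy illustration (ours; the toy matrix is hypothetical):

```python
import numpy as np

# Toy 3x3 cited-by-citing matrix; the diagonal holds self-citations.
Z = np.array([[10, 2, 0],
              [ 3, 8, 1],
              [ 0, 4, 6]])

Z_no_self = Z.copy()
np.fill_diagonal(Z_no_self, 0)  # drop within-journal self-citations
```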

Results
The LIS Set (83 Journals)

Among the 83 journals assigned to the journal category LIS by Thomson Reuters, one journal is not cited within this set, and four journals do not cite any of the other journals. The Annual Review of Information Science and Technology, for example, is no longer published but is still cited within this group; The Scientist, as a matter of editorial policy, does not provide references. Seventy-five of the 83 journals are part of a single strong component, so they are mutually reachable directly or indirectly; the remaining eight journals include journals that are only cited by other journals, only cite other journals, or are neither cited nor citing. Note that journals that are cited but not citing obtain (very) high PWR scores because their weakness score in the denominator is minimal: in this case, it is determined by the number of self-citations on the main diagonal and is otherwise zero. However, these journals do not affect the PWR scores of the other journals. One is probably well advised to limit the application of PWR to strong components.

Table 1 lists the PWR values for the 15 highest-ranked of the 75 journals in the central component after 20 iterations (after removing the four non-citing journals). JASIST, for example, follows at the 36th position with a much lower PWR value of 1.45. All PWR values were stable at k = 20. However, it is difficult at this stage to say whether this ranking provides a meaningful measure of journal impact. Our results can be considered as a test of this hypothesis. In our opinion, PWR failed as an indicator of overall journal standing, since we were unable to provide the results in Table 1 with an interpretation. Note that the Pajek macro can handle large network data (e.g. the complete JCR).

Table 1. Fifteen journals ranked highest on PWR among 83 LIS journals.

Abbreviation of journal name    PWR
Int J Comp-Supp Coll            59.52
MIS Q Exec                      15.62
Inform Syst Res                 11.31
Libr Quart                       8.84
MIS Quart                        6.96
J Manage Inform Syst             6.15
J Med Libr Assoc                 5.53
Inform Manage-Amster             5.01
J Am Med Inform Assn             4.40
Inform Organ-UK                  4.20
J Acad Libr                      3.57
J Inf Technol                    3.38
J Health Commun                  3.15
Inform Soc                       3.09
Aust Acad Res Libr               2.90

Decomposition of the LIS Set

As noted above, some journals never cite another journal in this set, and one journal never receives any citations from the other journals in the set. For analytical reasons, PWR would be zero in the latter case and may go to infinity in the former. However, a structural analysis of the LIS set shows that there are two main sub-graphs in this set. These can, for example, be visualized by using the cosine values between the citing patterns of 78 (of the 83) journals (Figure 1).

Figure 1. Two groups of journals within the WoS category LIS; cosine > 0.01; Q = 0.359; the algorithms of Blondel et al. (2008) and Kamada & Kawai (1989) were used for the decomposition and the layout, respectively.

Using the Louvain algorithm for the decomposition of this cosine-normalized matrix, 40 of these journals are assigned to partition 1 (LIS) and 38 to partition 2 (MIS: Management Information Systems; cf. Leydesdorff & Bornmann, 2016). From these two subsets, we further analyzed two ecosystems, which were selected because they are well-connected homogeneous sets.
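This decomposition step can be sketched as follows (our illustration, not the authors' code; assumes NumPy and NetworkX ≥ 2.8, where louvain_communities is available; the threshold follows Figure 1 and the seed is arbitrary):

```python
import numpy as np
import networkx as nx

def decompose(Z: np.ndarray, threshold: float = 0.01):
    """Partition journals by the cosine similarity of their citing patterns.

    Z[i, j] = citations from journal j to journal i, so column j is the
    "citing" profile of journal j (cf. Figure 1: cosine > 0.01).
    """
    norms = np.linalg.norm(Z, axis=0, keepdims=True)
    norms[norms == 0] = 1.0               # guard against non-citing journals
    profiles = Z / norms                  # unit-length citing profiles
    S = profiles.T @ profiles             # cosine similarities between journals
    np.fill_diagonal(S, 0.0)
    S[S <= threshold] = 0.0               # keep only cosine > threshold
    G = nx.from_numpy_array(S)            # weighted similarity graph
    return nx.community.louvain_communities(G, weight="weight", seed=42)
```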

Table 2 shows the two homogeneous journal ecosystems chosen for further study (using abbreviated journal names). The JASIST+ set comprises seven journals, all of which have cited JASIST at least 100 times and come from the same LIS partition. The MIS Quart+ set is similarly a set of nine journals strongly connected to one another within the MIS partition. (Unlike the JASIST+ set, the MIS Quart+ set is not a completely connected clique, since the International Journal of Information Management was not cited by articles in the Journal of Information Technology during 2013.) Finally, we shall combine the JASIST+ and MIS Quart+ sets into a set of 16 journals, so that inhomogeneity is built into this arrangement.

Table 2. The two homogeneous journal sub-graphs chosen for further analysis, with abbreviated journal names.

Sub-graph: JASIST+        Sub-graph: MIS Quart+
Inform Process Manag      Eur J Inform Syst
J Doc                     Inform Manage-Amster
J Am Soc Inf Sci Tec      J Assoc Inf Syst
J Inf Sci                 J Inf Technol
Scientometrics            J Manage Inform Syst
J Informetr               J Strategic Inf Syst
Inform Res                Mis Quart
                          Inform Syst Res
                          Int J Inform Manage

For each ecosystem, we take the year 2012 as the “citing” year and use “total cites” to all (preceding) years as the variable on the “cited” side. Since all journals are well connected within the sub-graphs, there are no dangling nodes (journals that are cited within the ecosystem but hardly cite any other journals in the same system). PWR uses no damping or normalization (as is used in the PageRank approach): one uses the cross-citation matrix without further tuning of parameters. In each case, when k = 1, one obtains the raw or non-recursive value of “impact” (Σ cited/Σ citing); when the iteration is continued to higher orders of k, the recursive power-weakness ratios converge in both sets as k → ∞.

Table 3 shows the citation matrix Z for the JASIST+ set of seven journals. The weakness matrix can be obtained by transposing this matrix, and the case without self-citations is obtained by replacing the entries on the main diagonal with zeroes.

Table 3. Citation matrix Z for the JASIST+ set of seven journals (rows: cited; columns: citing).

Cited \ Citing          Inform Process Manag   JASIST   J Inf Sci   Scientometrics   Inform Res   J Doc   J Informetr
Inform Process Manag    132                    165      49          86               68           46      23
JASIST                  120                    756      107         495              189          139     319
J Inf Sci               12                     66       89          72               26           26      30
Scientometrics          48                     320      34          1542             13           25      552
Inform Res              14                     43       29          8                93           39      4
J Doc                   26                     96       44          69               128          108     29
J Informetr             29                     91       2           269              4            3       302
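As a check on the first column of Table 4 (k = 1, with self-citations): at k = 1, PWR reduces to the row total (“times cited”) divided by the column total (“times citing”). For JASIST, from Table 3:

```latex
r_{\text{JASIST}}(1)
  \;=\; \frac{120+756+107+495+189+139+319}{165+756+66+320+43+96+91}
  \;=\; \frac{2125}{1537} \;\approx\; 1.38
```

which matches the value reported for JASIST in Table 4.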

In Table 4 we report the convergence of the size-independent power-weakness ratio r with iteration number k for the JASIST+ journals, for the cases with and without self-citations. In principle, this indicator could serve as a proxy for the relative qualities or specific impacts of the journals within this set. However, the main effect of the iteration is that the Journal of Documentation and JASIST change ranks after three iterations when self-citations are included. Scientometrics becomes “less powerful” than the Journal of Information Science after a single iteration.

Table 4. Convergence of PWR with iteration k for the JASIST+ journals, with and without self-citations.

With self-citations

PWR r for k =           1     2     3     4     5     6     7
Inform Process Manag    1.49  1.72  1.76  1.76  1.76  1.76  1.77
J Doc                   1.30  1.38  1.52  1.60  1.63  1.64  1.64
JASIST                  1.38  1.48  1.51  1.53  1.53  1.53  1.54
J Inf Sci               0.91  1.19  1.36  1.43  1.45  1.46  1.46
Scientometrics          1.00  0.98  0.98  0.98  0.98  0.98  0.97
J Informetr             0.56  0.48  0.47  0.47  0.47  0.47  0.47
Inform Res              0.44  0.37  0.39  0.40  0.41  0.41  0.41

Without self-citations

PWR r for k =           1     2     3     4     5     6     7
Inform Process Manag    1.76  1.93  1.75  1.80  1.78  1.79  1.79
JASIST                  1.75  1.52  1.61  1.57  1.59  1.58  1.58
J Doc                   1.41  1.46  1.46  1.48  1.47  1.48  1.48
J Inf Sci               0.88  1.23  1.23  1.25  1.25  1.25  1.25
Scientometrics          0.99  0.99  0.98  0.99  0.98  0.99  0.98
J Informetr             0.42  0.49  0.48  0.48  0.48  0.48  0.48
Inform Res              0.32  0.43  0.41  0.42  0.42  0.42  0.42

Table 4 shows, among other things, that the inclusion of self-citations affects PWR values in this case only in the second decimal.

Figure 2 graphically displays the convergence of PWR with iteration number k for the JASIST+ set without self-citations. As noted, it may be meaningful to proceed with the case where self-citations are not included. Analogously, Figure 3 shows the convergence of PWR for the MIS Quart+ set without self-citations. Again, within this homogeneous set rapid and stable convergence of the PWR values was found.

Figure 2. Convergence of PWR with iteration number k for the seven JASIST+ journals, for the case without self-citations.

Figure 3. Convergence of PWR with iteration number k for the nine MIS Quart+ journals, for the case without self-citations.

But can the converged values of PWR also be considered as impact indicators of the journals? In our opinion, one can envisage three different options to interpret, for example, the results in Table 4:

A first way of interpreting the results is to rely on our own expertise: since the authors of this paper are knowledgeable in information science (and scientometrics), the ranking of LIS journals can be interpreted on the basis of our professional experience. However, we were not able to provide the rank-ordering of LIS journals by PWR with an interpretation: one does not expect JASIST to be ranked at the 36th position and Scientometrics at the 48th among 83 journals in the LIS category.

Another way of interpreting the results is to compare PWR with a similarly designed journal metric. The SCImago Journal Rank (SJR), for example, uses an algorithm similar to PageRank; for the sake of comparison, the SJR values for these seven journals are included in Table 5.

The columns for PWR and SJR correlate negatively with r = −0.26 (n.s.); this coefficient does not indicate a relationship. Thus, the two metrics measure different types of journal impact, if they measure journal impact at all.

A third way of interpreting the results is to compare the metric with an external criterion. For example, we could ask a sample of information scientists to assess the journals. However, we did not expect other assessments to differ from our own, and therefore did not pursue this option.

Table 5. Seven strongly connected journals in LIS (JASIST+) ranked on their PWR within this group. For comparison, the SJR values for 2013 are included (see http://www.journalmetrics.com/values.php).

Journal                 PWR    SJR 2013
Inform Process Manag    1.79   0.751
JASIST                  1.58   1.745
J Doc                   1.48   0.876
J Inf Sci               1.25   1.008
Scientometrics          0.99   1.412
J Informetr             0.48   2.541
Inform Res              0.42   0.475

In sum, the indicator did not perform convincingly for journal ranking even in homogeneous sets.

An Inhomogeneous Set

Let us complete the analysis by combining the JASIST+ and MIS Quart+ sets into a single and arguably non-homogeneous set, since the one is from the LIS partition and the other from the MIS partition. Whereas journals in the former set tend to cite journals in the latter set, citations are not provided equally in the opposite direction.

Figure 4 shows the convergence of PWR for the JASIST+ subgroup of journals. For the MIS Quart+ journals in this non-homogeneous set, an initial divergence of PWR was noticeable at iteration number seven, and final convergence was found only after 20 iterations (Figure 5). The difference between the two sets is illustrated by the two figures.

Figure 4. Convergence of PWR with iteration number k for the seven JASIST+ journals within a heterogeneous environment (without self-citations).

Figure 5. Convergence of PWR with iteration number k for the nine MIS Quart+ journals within a heterogeneous environment (without self-citations).

In other words, Ramanujacharyulu’s PWR paradigm may offer a diagnostic tool for determining whether a journal set is homogeneous or not, but it may also fail to converge or to provide meaningful results in the case of heterogeneous sets. As noted, the application of PWR may have to be limited to strong components.

Discussion and Conclusion

We investigated whether Ramanujacharyulu’s (1964) power-weakness ratio could also be used as a meaningful indicator of journal status on the basis of the aggregated citation relations among journals. As noted, PWR was considered an attractive candidate for measuring journal impact because of its symmetrical handling of the rows and columns in the asymmetrical citation matrix, its recursive algorithm (which it shares with other journal indicators), and its mathematical elegance (Prathap & Nishy, in preparation). Ramanujacharyulu (1964), however, developed the algorithm for scoring tournaments (Prathap, 2014). Journals compete in incomplete tournaments: in a round-robin tournament, all the teams are completely connected, and if one team wins, the other loses; this constraint is not valid for journals.

In order to be able to appreciate the results, we experimented with a subset of the Journal Citation Reports 2013: the 83 journals assigned to the WoS category LIS. One advantage of this subset is our familiarity with these journals, so that we were able to interpret empirical results (Leydesdorff & Bornmann, 2011; 2016). Used as input into Pajek, the 83 × 83 citation matrix led to convergence, but not to interpretable results. Journals that are not represented on the “citing” dimension of the matrix—for example, because they no longer appear, but are still registered as “cited” (e.g. ARIST)—distort the PWR ranking because of zeros or very low values in the denominator. However, when the not-citing journals were excluded from the top-15 ranking, the ranking still did not match our intuition about relative journal standing.

In a further attempt to find interpretable results, we focused on two specific subsets, namely all the journals citing JASIST or MIS Quart 100 times or more. These two relatively homogeneous subsets converged easily, and each provided a rank order. However, the Pearson correlation between PWR and SJR was negative (r = −0.26; n.s.) in the case of the seven LIS journals.

The PWR model should also work in the extreme cases: journals cited by the group but not citing any journal of the group, and the opposite. However, the conceptual failure is clear in these cases: a journal that is frequently cited by the group but does not cite the group will occupy the first position in the final ranking. In addition to the examples mentioned above (the Annual Review of Information Science and Technology, which is no longer published, and The Scientist, which provides no references as a matter of policy), consider a journal E devoted to mathematical tools in statistics: the contents of its papers are not classical topics in information science, so papers in E do not cite any paper in the group; but researchers in information science are interested in the mathematical developments and may therefore cite papers in journal E. Such a journal should not be included in a single ranked list, since its obtaining the best rank would be an artifact: the final ranking would not be meaningful.

In summary, the indicator may be mathematically elegant, but it did not perform convincingly for journal ranking. This may also be due to the assumption of equal gain and loss when a citation is added on the cited or the citing side, respectively. Using PWR, journal i gains and journal j loses when a reference is added at location ij. However, as noted above, the association of “cited” with “power” and of “citing” with “weakness” may be cultural. In our opinion, referencing is an actor category and can be studied in terms of behavior, whereas “citedness” is a property of a document with an expected dynamics very different from that of “citing” (Wouters, 1999).

In other words, the citation to Ramanujacharyulu (1964) is interesting and historically relevant as an eigenvector-centrality approach that predates Pinski and Narin (1976). However, the PWR method was conceived in 1964 as a way to evaluate round-robin tournaments, and “wins” and “losses” do not translate into citations. Citations have to be normalized because of field-specificity, and the discussion of damping factors cannot be ignored, since the transitivity among citations is not unlimited (Brin & Page, 1998). With this study, we have wished to show how a newly proposed indicator can be critically assessed.
