
Discipline Impact Factor: Some of Its History, Some of the Author's Experience of Its Application, the Continuing Reasons for Its Use and… Next Beyond



Introduction

In 1978 a paper presenting a new bibliometric indicator called the “discipline impact factor” (DIF) was published (Hirst, 1978). A few months earlier, Hirst and Talent (1977) had published the results of a study performed with the use of this indicator.

As Hirst (1978) stated, “the discipline impact factor (DIF) is similar to the impact factor <…>, which measures the average number of times a paper in a given journal is cited, except that the DIF measures the number of times a paper in a journal is cited in the core literature of the given discipline”. Focusing on core journals as the source of citations does not seem to be a well-chosen part of this formulation: in fact, it was the specialized journals (not the core journals) that were implied as the sources of citations in the study by Hirst and Talent (1977), and the term “discipline impact factor” itself suggests the counting of citations in specialized journals. The difference between core and specialized journals is quite significant, because in some disciplines the core journals include journals of much wider specialization than the discipline itself (e.g. Lazarev et al., 2017); so, it is obvious that using core rather than specialized journals as a source of citations would result in the involvement of items cited in papers that are not related to the discipline in question. Therefore, the DIF ought to be defined as “similar to the impact factor <…>, which measures the average number of times a paper in a given serial publication is cited, except that the DIF measures the number of times a paper in a serial publication is cited in the journals specialized in the given discipline”.

In contrast to the classical Garfield impact factor, the DIF has been used relatively rarely. Was this caused by the DIF's objective features (shortcomings), or is it a case of underestimation of the indicator? An indicator that looks so similar to the famous impact factor seems to deserve special regard. The present paper therefore aims to consider the role of the DIF in the evaluation of serial publications.

Methodology

In accordance with the purpose of the work, the methods were the analytical interpretation of the scientific literature related to the problem and speculative explanation. The information base of the research consisted of bibliometric publications. The papers describing the use of the discipline impact factor were retrieved from the Web of Science™ Core Collection using the word combination “discipline impact factor” as the search query. Among the retrieved papers there were a number dealing with the use of the DIF for the selection of additional core journals (G. Hirst, indeed, paid much attention to such use), but not for the selection and evaluation of non-core, non-profile serials. Such papers were not taken into account (only one will be mentioned as an example), and neither were the papers in which the DIF technique was merely mentioned. Restrictions on the length of articles submitted to this special issue of the JDIS made me exclude even a number of completely relevant publications. At the same time, a number of papers by V.S. Lazarev (including some minor ones) were included, as he is believed to be the first to have used the DIF technique in the (ex-)USSR.

Due to the mentioned similarity of the DIF to the classical impact factor, the analysis also included the papers by Eugene Garfield introducing and interpreting the latter indicator. At the same time, it seemed that analytical interpretation and speculative explanation would be more successful if the issue were considered in the context of comprehending what impact itself is, as a generic concept in relation to the concept of “impact factor”; so some corresponding papers by Garfield were also analyzed, as well as the recent paper by Lazarev (2019a) examining the definitions and interpretations of impact in the scientometric literature. Finally, the papers presenting the “discipline susceptibility factor”, an indicator derived from the DIF, were reviewed.

Results
Impact

The term “impact” was introduced by Garfield (1955) long ago. Since then, the concept of “impact” has become widely regarded as one of the key concepts of scientometrics (Bornmann, 2014). For a better theoretical understanding of this notion, Lazarev (2019a) analyzed the definitions and perceptions of “impact” encountered in the scientometric literature. The analytical review of the scientific literature and dictionary definitions presented in the above-cited paper revealed the following relevant interpretations of the term “impact”: (1) a synonym of “influence” (Garfield, 1955; Waltman et al., 2013); (2) a notion that does not go beyond the concept of “strong impression” (Gove, 1993); (3) a notion that coincides with the meaning of the term “pertinence”, as can be seen from a comparison of the definition of “impact” given by Patton et al. (2016) with the definition of “pertinence” given in the terminological standard (ISO 5127:2017); (4) a meaning related to purely technical indicators rather than a concept. Also, (5) “impact” is treated as the consequence of a document's quality (Cole & Cole, 1967), and many authors simply see no difference between “impact” and “quality”. However, such viewpoints are based on the wrong assumption that the quality of the cited papers is guaranteed by the authors' conscious selection of cited references in accordance with some speculative standards and by citing activity that is always motivated by the desire to repay intellectual debts to the cited authors (Cole & Cole, 1967). Since this does not happen in real life (MacRoberts & MacRoberts, 1996; Nicolaisen, 2007), the identification of “impact” with quality is unpromising. It was therefore concluded that the notion of “impact” is not sufficiently defined (Lazarev, 2019a); no unified definition of “impact” is in operation.

Nonetheless, it is conventionally agreed that “impact” (I mean the classical impact pointed out by Eugene Garfield, not “social impact” or “impact on a society”) is something that is indicated by the fact of being cited and something that is associated with information use. Be that as it may, “impact” can be determined by the level of use of information reflected in bibliographic citations: “impact can be determined by utilizing information inherent in bibliographic citations” (Garfield & Malin, 1968). “Impact is primarily a measure of the use (value?) by the research community of the article in question. If an author or journal is cited significantly above the average then we say the author's work has been influential albeit sometimes controversial” (Garfield, 2003). It is exactly the use of cited information that is reflected, even documented, in the figures of citedness (Lazarev, 1996; 2019b; 2019c), though nowadays some authors oddly tend to think that “usage occurs when a user issues a request for a service pertaining to a particular scholarly resource to a particular information service” (Kurtz & Bollen, 2010).

Impact factor

In the article that first introduced the word combination “impact factor” (Garfield, 1955), “impact factor” was still a full synonym of the word “impact”; it did not yet relate to journal evaluation, as it came to do much later. “Garfield <…> was to change this meaning when he created a measure he called the “impact factor” to determine which journals should be covered by the SCI” (Bensman, 2007). According to Eugene Garfield's definition, “impact factor is the mean number of citations to a journal's articles by papers subsequently published. It is determined by dividing the number of times a journal is cited (R) by the number of source articles (S) it has published” (Garfield, 1970). In other words, the “impact factor” is calculated “by dividing the number of times a journal has been cited by the number of articles it has published during some specific period of time. The journal impact factor will thus reflect an average citation rate per published article” (Garfield, 1972). It might be noted here, first, that the publication window is not specified in the quoted definition and, second, that “an average citation rate per published article” refers to citations in all the natural and technical journals indexed in the Science Citation Index. Correspondingly, this rate indicates the average use of an article from the cited journal by all the journals representing the technical and natural sciences (those indexed by the Science Citation Index and, later, by Web of Science) and, accordingly, its value for the technical and natural sciences in general. The latter follows from the definitions of value itself, as shown by Lazarev (2019b, 2019c). Accordingly, the classical Garfield impact factor reflects the value of an average paper in a certain journal for all the journals representing the technical and natural sciences.
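In symbols, Garfield's definition and its now-conventional two-year form might be sketched as follows (the two-year window is the later Journal Citation Reports convention rather than part of the definition quoted above; the notation beyond R and S is mine):

\[
\mathrm{IF} = \frac{R}{S}, \qquad \mathrm{IF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{S_{y-1} + S_{y-2}},
\]

where \(C_{y}(x)\) is the number of citations received in year \(y\) by the items a journal published in year \(x\), and \(S_{x}\) is the number of source items it published in year \(x\).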

Discipline impact factor

In 1972 Eugene Garfield stated that “measures of <…> impact factor should be helpful in determining the optimum makeup of both special and general collections” (Garfield, 1972). However, in order to organize a sufficient information service for special groups of researchers, it seems far more important to know the level of use of an average paper of a certain journal (or other serial) not by all the journals representing the technical and natural sciences in toto (as reflected in the classical impact factor), but by the journals specialized in the concrete discipline or field of research whose specialists are going to receive the information services. (After all, providing information services to specialists in a particular field of research is the task of far more libraries than the information support of all the natural and technical sciences “in general”.)

The “discipline impact factor” (Hirst & Talent, 1977; Hirst, 1978), which counts the number of times an average paper in a cited serial publication is cited in the journals specialized in the given discipline (rather than in all the journals representing the technical and natural sciences), is the right tool for the evaluation and selection of serials to be used in the information service of researchers specialized in a specific discipline or research field.
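By analogy with the formula above, the DIF of a serial \(j\) with respect to a discipline \(d\) might be written as follows (the notation is mine, not Hirst's, and the windows are left unspecified, as in his definition):

\[
\mathrm{DIF}_{d}(j) = \frac{C_{d}(j)}{S(j)},
\]

where \(C_{d}(j)\) is the number of citations received, within the chosen citation window, by the papers that \(j\) published within the chosen publication window, counted only from the journals specialized in discipline \(d\), and \(S(j)\) is the number of those papers.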

As the DIF was aimed at solving practical problems that were relevant at the time for every research, university and college library, one would have expected it to be used very often. However, this never happened; in fact, it was used surprisingly seldom. Apparently, this was because calculating the DIF required quite time-consuming computations, while the classical impact factor was presented in the Science Citation Index in ready-made form.

Several papers, however, ought to be mentioned as specimens of the use of the discipline impact factor for determining appropriate lists of serials (e.g. Black, 1983; Gould, 1981; Kushkowski et al., 1998; Lazarev & Nikolaichik, 1979; Lazarev et al., 2017 etc.). There are also a few papers in which only some minor elements of Hirst's methodology were used, relating to the selection of a restricted number of additional core journals that were not known before the research, rather than to the application of the DIF for determining extensive lists of necessary serials of various specializations; e.g. the paper by Yan and Zhu (2015).

The experience of DIF application for serials evaluation and selection acquired by Lazarev and co-authors (Lazarev, 1980; Lazarev & Skalaban, 2016; Lazarev et al., 2019 etc.) demonstrated that quite a substantial portion of the journals included in the lists of serials determined in order to organize or amend information services for specialists in a certain discipline or research field could be selected exclusively by means of DIF computation. These articles contain a number of interesting methodological features, among which the most noteworthy is the length of the publication window used for calculating the DIF.

Although Eugene Garfield used a two-year publication window when calculating the impact factor, the publication window was not specified in his definition of the impact factor. Accordingly, Hirst and Talent (1977) and Hirst (1978) did not specify the duration of the publication window in their DIF definition: the two-year publication window figures there just as an example. Therefore, using a publication window of a different duration cannot, strictly speaking, be considered a modification of the indicator. However, we note that in the recent works of V.S. Lazarev with co-authors, the publication window was chosen to be “5+1” years, i.e. the five previous years plus the year in which the citations were given were taken into account. “The “plus one year” choice was grounded by the wish to include the most current citations into account. The choice was made with the understanding that the number of citations to the publications of the current year cannot be representative, but this applies equally to all cited journals and other serials. And as for the preceding 5 years, according to Price (1970), citations to the preceding 5-year period over the next few years have a much greater impact on the dynamics of citing than the natural growth of literature or its normal aging, so they are of utmost importance. We believe that 5-year aggregate of citations fairly comprehensively reflects already formed (but still current) trends” (Lazarev et al., 2019).
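A minimal sketch of how a DIF computation with such a “5+1” window might be arranged; the data, field names and function below are hypothetical illustrations of the window logic, not code from the cited studies:

from collections import namedtuple

# A citation record: a paper in the evaluated serial (identified here only
# by its publication year) cited in `citing_year`. The list is assumed to
# be pre-filtered so that it contains only references made by the
# discipline's specialized journals.
Citation = namedtuple("Citation", ["publication_year", "citing_year"])

def discipline_impact_factor(citations, papers_per_year, citing_year, window=5):
    """DIF with a '5+1' publication window: citations given in `citing_year`
    to papers published in the preceding `window` years or in `citing_year`
    itself, divided by the number of papers published in those years."""
    years = range(citing_year - window, citing_year + 1)  # the "5+1" years
    cited = sum(
        1 for c in citations
        if c.citing_year == citing_year and c.publication_year in years
    )
    published = sum(papers_per_year.get(y, 0) for y in years)
    return cited / published if published else 0.0

# Hypothetical figures for one evaluated serial:
citations = [Citation(2015, 2019), Citation(2018, 2019), Citation(2019, 2019)]
papers_per_year = {2014: 40, 2015: 42, 2016: 39, 2017: 45, 2018: 41, 2019: 44}
print(discipline_impact_factor(citations, papers_per_year, citing_year=2019))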

Another interesting feature of the recent papers by Lazarev et al. devoted to the DIF is the procedure for selecting the specialized journals that serve as the sources of citations. The selection took into account the descriptions of the journals' subject fields: first, according to the ULRICHSWEB™ database, and then according to the web sites or web pages of the journals themselves; the actual content of the latest available issues was also examined. Some of the journals might not be among the most authoritative periodicals in the world, but if their thematic content is the most consistent with the subject field in question, they are taken into account. Examples may be viewed in (Lazarev et al., 2018; Lazarev et al., 2019). The productivity of the specialized journals might also be taken into account (Lazarev et al., 2017).

Next beyond DIF: The discipline susceptibility factor

Counting citations given to specialized journals, alongside the more familiar counting of citations made by specialized journals, is not a new idea; it has been used for various scientometric purposes (e.g. Garfield, 1982a; 1982b; Markusova, 1973; Korennoi & Osetrov, 1981; Lazarev, 1988).

However, such papers are still much less common than those studying the serials cited by the specialized journals; nor were such papers aimed at the immediate improvement of information services. In this regard, it seems necessary to mention that, aiming at developing lists of the necessary serials, Lazarev and co-authors, in addition to selecting sources of information according to their citedness by specialized journals, also practiced the selection of the items that cite specialized journals. Of course, the cause-and-effect relationships between the cited and citing objects reflected in such citing data differ from those reflected in citedness data: the sources that cite specialized journals (selected in this case) are neither the most used by the specialists nor the most valuable for them. Nevertheless, it seems reasonable to believe that the citing items under analysis represent the external (non-profile) research fields fit for potential applications of the results of scientific activities obtained within the framework of the research field represented by the specialized journals (Lazarev & Skalaban, 2016; Lazarev et al., 2017 etc.). So, acquaintance with some of these citing items is likely to help researchers find external research areas to which their findings might potentially be applied.

The indicator of citations to specialized journals that is “symmetrical” to the “discipline impact factor” and intended for selecting the necessary serials was most likely first used by Lazarev in 1980 (Lazarev, 1980). Much later it was given the name “discipline susceptibility factor” (Lazarev & Skalaban, 2016). It is calculated in a way slightly different from the way the “discipline impact factor” is calculated. Since the number of citable articles published within the publication window in the specialized journals (the articles that can be cited by the serials under evaluation) is the same for every evaluated serial, dividing the number of citations given by an evaluated serial by this number would not change the meaning of the indicator as compared with the total citing level; besides, such a fractional indicator would be meaningless anyway, as it is the citing, not the cited, serials that are now being evaluated. Therefore, all references made within the citation window to the specialized journals by a citing item are counted, with an adjustment for the productivity of the citing item in the year of citation: it is the citing activity of an average article from the citing serial that is evaluated. Examples of the recent use of this indicator are presented in English in the papers by Lazarev et al. (2017), Lazarev et al. (2019) etc.
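One plausible formalization of the adjustment just described (the notation is mine, not the cited authors'): the “discipline susceptibility factor” of a citing serial \(s\) with respect to a discipline \(d\) could be sketched as

\[
\mathrm{DSF}_{d}(s) = \sum_{y \in W} \frac{R_{d}(s, y)}{N(s, y)},
\]

where \(W\) is the citation window, \(R_{d}(s, y)\) is the number of references made in year \(y\) by \(s\) to the journals specialized in discipline \(d\), and \(N(s, y)\) is the number of articles \(s\) published in year \(y\), so that each year's references are weighed against the citing serial's productivity in that year.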

Of course, due to the very nature of the information links it reflects, the indicator called the “discipline susceptibility factor” is rather an additional, auxiliary one. However, the “adventurous story” of the DIF would not be complete without a chapter (or an afterword) devoted to the “discipline susceptibility factor”.

Discussion & conclusions
Does the DIF regard the interdisciplinarity of modern research?

Twice I have presented at conferences some of the results promoting the application of the DIF for developing serials lists, and twice I was asked, quite pointedly, whether the DIF regards the interdisciplinarity of modern research. One of the formulations seems worth quoting verbatim: “In the current era interdisciplinarity is of particular importance and hence the uptake of research outside one's own field (whether for applications or otherwise) seems of particular relevance. The discipline impact factor unfortunately seems to disregard such impact across disciplines”.

In fact, the exact opposite is true. As can be seen, e.g. from the papers by Lazarev and co-authors published in English (Lazarev & Yurik, 2018; Lazarev et al., 2019), the DIF is applicable to the bibliometric evaluation and selection of non-profile serials to be used by researchers in a specific discipline (thus providing interdisciplinary information to specialized research) as well as to the comparative evaluation of specialized periodicals. It is true that Hirst paid much attention to using the DIF for adding to the list of specialized journals, but he never limited its use to that. Moreover, adding to the list of specialized journals with the use of the DIF was proposed in order to develop a further, extended list of periodicals, the number of which was measured in dozens (Hirst & Talent, 1977).

The classical impact factor gives a useful comparative assessment of journals specialized in the same discipline. But if we compare the possibilities of applying the DIF and the classical impact factor for identifying non-profile serials useful to the specialists in a specific discipline, we shall easily understand that the classical impact factor is of no use in this case. It points out the journals most valuable for all the technical and natural sciences; but which of these journals (I mean, which of the non-profile ones) would be the most valuable for the particular discipline under consideration? The classical impact factor will not tell us. Moreover, though everybody knows that Nature is one of the best multidisciplinary journals in the world, everybody also knows that there are disciplines whose scientific results are useless to submit to Nature. The classical impact factor would never give a researcher such a hint. But if we calculate the DIF of the journal Nature for the corresponding discipline, we will see whether the journal is useful or whether its DIF is zero or very close to it. Those who do not understand the interdisciplinary nature of the DIF seem to be confused simply by the name of the indicator, which includes the word “discipline”.
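A purely hypothetical arithmetic illustration of this contrast: suppose that within the chosen window Nature published 5,000 citable papers, of which the specialized journals of some narrow discipline cited only 10; then

\[
\mathrm{DIF} = \frac{10}{5000} = 0.002,
\]

a near-zero value, however high the classical impact factor of Nature may be in the same years.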

As for the discipline susceptibility factor, the situation seems even more obvious, as the indicator was designed especially for finding serial publications that, as stated above, represent the non-profile research fields fit for potential applications of the results of scientific activities obtained within the framework of the research field represented by the specialized journals.

But do we still need it for library service?

Nowadays libraries mostly buy access to huge databases (packages) and do not bother to determine the concrete necessary journals and other serials: indeed, publishers set prices so that it is cheaper to buy the whole package than to buy separate journals.

So, since the bibliometric evaluation and selection of non-profile serials to be used by researchers in a specific discipline were usually performed precisely in order to select separate serials for a specialized library stock, it might seem that there is no longer any need for bibliometric evaluation of the value of separate non-profile serials for researchers in a specific discipline (Lazarev, 1998).

However, the following question still arises: “Which databases (packages) ought to be purchased?” The answer might seem easy to a librarian who lives in a country where regular, sufficient financial support of university and research libraries is practiced. But in the case of restricted, meager financing for database subscriptions, we have to spend our small budget with certainty of the result. We need to choose the databases (subscription packages) that contain a sufficient number of useful serials and are cheap to purchase: as many relevant serials as possible ought to be accessible via these databases (packages) at the lowest financial cost. Thus, in order to arrange this, one has to check each subscription package for the presence of the maximum number of necessary serials, as sketched below. In turn, in order to do the latter, one has to know concretely which serials are needed. And therefore it is worth starting the same procedure that was practiced in the past for the selection of serials for immediate acquisition to the library stock. (As for Open Access journals, though they are freely available, they ought to be identified and evaluated as well.) So we, librarians from the countries that cannot afford sufficient financial support of academic, university and research libraries, still do need to determine the “best” journals and to have good instruments for it. One such efficient tool is the discipline impact factor.
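One simple way of operationalizing this check, assuming hypothetical package contents and prices (the coverage-per-cost ranking below is my illustration, not a method from the cited literature); `needed` stands for the serials already selected, e.g. by means of the DIF:

# Hypothetical subscription packages: the serials each contains and its price.
packages = {
    "PackageA": {"price": 12000, "serials": {"J1", "J2", "J3", "J5"}},
    "PackageB": {"price": 10000, "serials": {"J2", "J4"}},
}
needed = {"J1", "J2", "J4", "J5"}  # serials selected beforehand, e.g. via the DIF

def coverage_per_cost(pkg):
    """Number of needed serials the package covers per unit of price."""
    return len(pkg["serials"] & needed) / pkg["price"]

# Rank the packages so that the best coverage per unit of cost comes first.
ranked = sorted(packages.items(), key=lambda kv: coverage_per_cost(kv[1]), reverse=True)
for name, pkg in ranked:
    print(name, sorted(pkg["serials"] & needed), f"{coverage_per_cost(pkg):.6f}")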

Other possible applications?

Though conceived with library collection management in view, the classical impact factor now seems to be used primarily for finding the proper journal or other serial publication by (or for) the authors of scientific papers. The possible role of the DIF in such searches seems even more obvious, as it can point out not the journals valuable for all the technical and natural sciences, but the non-profile and multidisciplinary journals most valuable for the particular discipline under consideration.

As for the discipline susceptibility factor, I have already mentioned twice that this indicator was designed especially for finding serial publications that represent the non-profile research fields fit for potential applications of the results of scientific activities obtained within the research field represented by the specialized journals. It seems quite evident that such information is even more useful for authors than for librarians.

Conclusions

So, the discipline impact factor, a bibliometric indicator for the evaluation of serial publications invented in 1977, though used comparatively seldom, is still a useful indicator. It can be used both for the advancement of academic, university or research library services (especially when the financial support of libraries is insufficient) and by authors searching for a proper journal or other serial publication to which to submit a scientific paper. The discipline susceptibility factor, first applied in 1980 (Lazarev, 1980) but having received its name only in 2016, is also a helpful indicator for both purposes. Some examples of the application of both can be read in English in the papers by Lazarev et al. (2017), Lazarev & Yurik (2018), and Lazarev et al. (2019).
