The Norwegian model for performance-based resource allocation, first introduced in 2005, has now been adopted in a range of countries, including Denmark, Finland and Flanders in Belgium (Sivertsen, 2016). In the wake of its popularity, researchers interested in evaluation and performance measurement have taken a further interest in how this model might affect researchers and their practices (Aagaard, Bloch, & Schneider, 2015; Aagaard, 2015; Hammarfelt & de Rijcke, 2015). On a general level, findings point to an increase in productivity at the national level in Norway, while there are concerns regarding how the model is used in local contexts. Still, the effects of performance-based research funding systems (PRFS) remain largely an unresolved issue, and one of the main problems is to disentangle the effects of PRFS from surrounding factors (for an in-depth discussion, see the special issue of Journal of Informetrics, Waltman, 2017). In this paper, however, I focus on a different but related question: how and why has the Norwegian model become so popular? The present paper thus attempts to explain how the Norwegian model has become so widespread and influential.
Obviously, the Norwegian model has been successful in attracting the attention of stakeholders such as governments and university leaders, yet I suggest that its appeal among researchers is crucial for its success. Hence, the study focuses on the appeal of the model, and specifically on why it is attractive for department heads and (individual) researchers. The analysis builds on already gathered data from a Swedish context in conjunction with previous studies, and while the findings are country specific, it is suggested that the conclusions drawn are relevant for explaining the attractiveness of the model more generally. Importantly, this is not a study of diffusion, which would follow how the model has travelled into new contexts, nor should it be read as a review of the system as such. Strengths and weaknesses of the system have been listed elsewhere (see, for example, Schneider, 2009; Aagaard et al., 2015), yet how these should be weighed against each other remains an open question. Generally, however, it might be stated that the main strengths of the system, such as simplicity and inclusiveness, can also be seen as drawbacks depending on your perspective. Moreover, statements regarding the qualities of the Norwegian model are not easily generalizable, as national or local models often involve significant adjustments and adaptations, which is evident in the Swedish case. Hence, the usefulness and appropriateness of the Norwegian model should be understood in the context of where it is applied. The perception of the model will be rather different when it is used at the national level for allocating resources to institutions than when an individual researcher receives a bonus (or a pay raise) based on points in the model.
The appeal of the Norwegian model is analysed in two steps: first, a theoretical background is outlined in which the reduction of “uncertainty” is highlighted as a key feature of performance measurement systems. These theories are then drawn upon when revisiting previous studies of the Norwegian model, its use, and reactions to it, in Sweden. In a concluding part, key insights are highlighted and the implications for the future are discussed.
2 Uncertainty and assessment systems
Research is a very uncertain activity. Generally, uncertainty is a key part of any knowledge-making activity, and attempts to decrease the level of uncertainty may result in a loss of creativity and novelty. Academic life is likewise filled with uncertainty; researchers have a high degree of freedom in deciding what to spend their time on, yet the relative independence of many academics results in insecurities regarding career possibilities and employment. For scholars interested in what has been labelled the “audit society” (Power, 1997) or the “evaluation society” (Dahler-Larsen, 2011), audits, assessment procedures and evaluation systems are all designed to limit risk and reduce uncertainty. Following this line of argument, it is clear that one of the main purposes of performance measurement and evaluation systems is to reduce uncertainty: for governments and taxpayers such systems serve the purpose of ensuring that resources are spent effectively, while at the same time they provide individual researchers with yardsticks and benchmarks through which they can assess their own performance in comparison to others. An effective method for decreasing the level of uncertainty is the reduction of possible choices. By limiting the number of options and outcomes, assessment procedures and measures provide a level of assurance: “The tight, yet adjustable coupling between past, present and future behaviour with a numerical indicator is intended to eliminate uncertainty” (Nowotny, 2016). The greater the reduction of possibilities, the greater the reduction of uncertainty. This process, often called commensuration, involves “turning qualities into quantities on a shared metric” (Espeland & Sauder, 2016). Commensuration is needed for any assessment system to work effectively, as it is a prerequisite for comparison, and the Norwegian system effectively performs this task by turning publications into points.
Thus, the “publication indicator” is a key feature of the system, and as we shall see, it is foremost this feature that has travelled into new contexts.
Another important characteristic of performance assessment systems is that they provide stability and predictability. This means that assessment does not threaten to radically alter the balance of a system. Abrupt fluctuations will have negative consequences for confidence in a particular system, and many national systems for performance-based allocation of research funds are designed to ensure that large variations from year to year are avoided. “Predictability” suggests that if units or individuals perform according to stipulated criteria, they will be rewarded as promised. Systems that are unstable and unpredictable will likely foster distrust.
A key quality of an assessment system is the degree to which it is deemed fair to all those involved, which in the case of PRFS suggests that all researchers or units have the same opportunity to “score” well in the system. Finally, an important quality of a PRFS is its degree of transparency: how open and accessible are the mechanisms for resource allocation, and do the evaluated have a chance to influence the system? Evidently, the reduction of uncertainty is a central feature of assessment systems, and the features listed here—stability, predictability, transparency and fairness—are all important parts of a well-functioning system.
3 Adaptation and calibration: the Norwegian model(s) in Sweden
Unlike some neighbouring countries, Sweden has not adopted the Norwegian model as a nationwide system for allocating resources. The indicators for institutional funding are instead based on external funding and on publications and citations in Web of Science. Still, parts of the Norwegian model, and especially the system for allocating points, are increasingly used and discussed in Swedish academia. What is important to note is that only one of the three main components of the model is actually employed by Swedish universities. The three main components are: (1) a national and comprehensive database of publications, (2) a publication indicator, and (3) a performance-based funding model that reallocates resources (Sivertsen, 2016; Sivertsen, 2018). In principle, Swedish universities make use of the second component, the “publication indicator”. The indicator and the list of accredited journals and publishers allow publications to be turned into points that can then be weighed and compared to each other in various ways. It should be mentioned that Sweden has a CRIS system, SwePub, but this system is not yet fully developed for bibliometric analysis.
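The commensuration performed by the publication indicator can be made concrete with a small sketch. The code below is purely illustrative (the function name and the simple division by author count are my own assumptions, not part of any official implementation); the weights follow the original model as described by Sivertsen (2016), where points depend on publication type and the quality level (1 or 2) of the channel.

```python
# Illustrative sketch of how the Norwegian "publication indicator" turns
# publications into points: a weight is looked up for the publication type
# and channel level, and the result may be fractionalised among co-authors.
# Weights follow the original model (Sivertsen, 2016); local Swedish
# adaptations modify both the weights and the counting method.

WEIGHTS = {
    ("journal_article", 1): 1.0,
    ("journal_article", 2): 3.0,
    ("book_chapter", 1): 0.7,
    ("book_chapter", 2): 1.0,
    ("monograph", 1): 5.0,
    ("monograph", 2): 8.0,
}

def publication_points(pub_type: str, level: int, n_authors: int = 1,
                       fractionalise: bool = True) -> float:
    """Points credited for a single publication (hypothetical helper)."""
    weight = WEIGHTS[(pub_type, level)]
    # Fractionalised counting divides the points among co-authors; with
    # whole counting, as used at some Swedish universities, each
    # institution or author receives the full weight.
    return weight / n_authors if fractionalise else weight

# A level-2 journal article with four authors under fractionalised counting:
print(publication_points("journal_article", 2, n_authors=4))  # 0.75
```

Note that this linear fractionalisation is a simplification: revised versions of the Norwegian indicator instead credit the square root of the author share, and, as discussed below, Swedish adaptations vary considerably in how (or whether) they fractionalise at all.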
A survey conducted among bibliometricians at Swedish higher education institutions (HEIs) in 2014 found that 11 out of 27 universities used the “Norwegian model” or parts of it (Hammarfelt, Nelhans, Eklund, & Åström, 2016). The indicator is used for allocating resources either at the level of the university as a whole or in selected faculties. Often the Norwegian model is used in the social sciences and the humanities, but examples of other faculties using the indicator, such as the Faculty of Medicine at Umeå University, were also found (Table 1).
Table 1. The Norwegian model in Sweden (survey from 2014).

| HEI | Area of application | Adaptations |
| --- | --- | --- |
| Gothenburg University | Social sciences, humanities, computer science, pedagogy | 1. Whole counting |
| Linnaeus University | Whole university | 1. Points in the Norwegian system weighted against other universities. 2. Points for conference papers. 3. Points for journals in Web of Science which are not on the Norwegian list. 4. Fractionalised |
| Luleå Technical University | Whole university | 1. Both the Norwegian and the ‘Danish’ list. 2. Additional points for Web of Science indexed journals |
| Lund University (LU) | Economics | 1. Fractionalised |
| Mid Sweden University | Whole university | 1. Publications in Scopus and Web of Science are also given points |
| Stockholm University | Social sciences | 1. Fractionalised |
| Södertörn University | Whole university | 1. Fractionalised. 2. Local book series counts as level 1. 3. Conference papers receive points based on publishing house |
| University of Halmstad | Whole university | 1. Gives points to publications outside the Norwegian list. 2. Publications on the Norwegian list receive extra points |
| University of Skövde | Whole university | 1. Gives points to publications outside the Norwegian list |
| Umeå University | Medical sciences | 1. Fractionalised. 2. Departments can suggest changes in the list of level 1 and 2 journals. 3. Doctoral theses receive 1 point |
| Uppsala University | Social sciences and humanities | 1. Fractionalised |
Notably, none of the eleven universities using the Norwegian model does so without modifications. Many of the universities fractionalise the counts, as is done in the “original” model, yet they do not seem to use the same counting method. Others give points not only for journals on the Norwegian list, but also for journals indexed in Web of Science and Scopus, and for journals on the so-called Danish list (basically a Danish adaptation of the Norwegian list). Several universities work with more inclusive systems where doctoral theses (Umeå University), as well as conference papers and monographs in the local book series (Södertörn University), give points. University of Halmstad and University of Skövde give points to a wider array of publications, but their systems are clearly inspired by the Norwegian indicator. Linnaeus University, which has the most intricate system, weighs points against similar departments at other Swedish universities. At Linnaeus University individual researchers are rewarded, but only if their points are worth more than SEK 8,000; otherwise the points are given to their department. Moreover, the top 20% of researchers, in terms of earned points, receive an extra bonus. Generally, parts of the Norwegian system—or rather the publication indicator and the journal list—are used together with other assessment procedures and data sources. Several universities are more inclusive in terms of which publications receive points, and proceedings, dissertations, and local book series are among the channels being recognised.
Moreover, the level at which the Norwegian model is applied is important to consider. At some universities components from the Norwegian model are used to allocate funds across faculties, while at others they are employed within faculties to distribute resources to departments. At Luleå Technical University and Linnaeus University researchers are individually and directly compensated for having published in level two journals. Such individual use may have particularly visible consequences, as it directly affects working conditions and research priorities, and it is likely that such individual assessment has direct effects on publication practices.
Only at one institution, Umeå University, are researchers given the opportunity to influence the model by proposing journals and publishers that should be ranked. The other universities do not allow researchers to take part in the process of selecting and ranking publication outlets. This is an important difference in comparison to Norway, where researchers themselves are engaged in committees deciding on the inclusion of journals, a process which is also increasingly transparent. Thus, when only parts of the Norwegian model are imported, there is a risk that strengths of the system, such as local authority, engagement from researchers, and the degree of transparency, are reduced or lost. The possibility to influence the selection of ranked publication channels is of great importance for fields with a strong national orientation, in which local audiences and publication channels play a considerable role. This description fits well with many fields in the social sciences and humanities (SSH), and the next section will focus on how the Norwegian model has been received among Swedish scholars in these fields.
4 The Norwegian model among Swedish scholars
The Norwegian model is well known in Sweden and, as shown above, several faculties in the social sciences and humanities make use of it when allocating funds to departments or individuals. One of the first studies to look at how the model is used in the humanities was Hammarfelt and de Rijcke (2015), and the implementation of the model in a faculty of social sciences was studied by Edlund and Wedlin (2017). Here I will present a few insights from a recent survey of humanities scholars and social scientists in Sweden and Australia (Hammarfelt & Haddow, 2018; Haddow & Hammarfelt, to appear). Short free-text answers provided by Swedish respondents to this survey—which was about metric use and publication patterns in general—will be used to provide insight into how humanities scholars view and use the Norwegian model. The findings presented should be regarded as examples that can be used for discussion, and one should be careful not to interpret them as representative of broader sentiments and trends. Rather, these comments ought to be seen as a first indication of how the Norwegian model (or list) is used and discussed by Swedish scholars.
Several respondents mentioning the Norwegian system, or rather the Norwegian publication indicator, suggest that it influences their publication practices: “I’ve become more aware of the value attached to channels of publication and have adapted my publication practice accordingly, practically guidance given by the Norwegian list” (scholar in history and archaeology). The pressure to publish in ranked journals sometimes comes from above, from the faculty: “Thinking about classification of journal (Norwegian list) since this is very much stressed by the faculty” (art scholar), or from funding agencies: “The Norwegian list is often required when applying for research grants based on evaluation of earlier research” (educational researcher). More voluntary use, for example in job applications, is also mentioned: “I have used the Web of Science ranking as well as the Norwegian list ranking in my applications for associate professor”. In some contexts, points in the Norwegian system supposedly even affect salaries and career advancement: “The inclusion of the Norwegian list is closely attached to wage and promotion” (economist).
The Norwegian publication indicator is used in combination with other measures, such as impact factors or the h-index: “I have used Impact factor as a reason for internal funding (time to write the article); and have also used my articles published in the Norwegian list as a way of fulfilling departmental policy” (languages and literature scholar). In these remarks it becomes evident that Norwegian ‘points’ are part of a larger ecology of metrics and indicators: “Publication Points according to the Norwegian list and Google Scholars h-index (not accurate but I have too few ISI publications to use a ‘real’ h-index” (scholar in history and archaeology). Overall, then, it appears that the Norwegian list is used as one option among several that can be used to demonstrate ‘worth’. A feature of the Norwegian indicator that is especially important for scholars in the social sciences and humanities is that it values monographs and edited books: “I also prefer the Norwegian lists as it takes more in books” (economist).
When researchers express their views, it is often the negative consequences of assessment procedures that are in focus. However, changing publication practices, which many associate with the introduction of the Norwegian system, is welcomed by some researchers (Hammarfelt & Haddow, 2018). Thus, it is not uncommon that the use of the Norwegian list is associated with a more general awareness regarding publication strategies: “More focus on high impact peer reviewed international journals. Less focus on chapters in books, although still publishing chapters in books at publishers with high impact/high scores in the Norwegian system” (humanities scholar, other). A researcher in ‘computer and information science’ expresses similar sentiments: “Shift towards a preference to (highly) ranked journals (Impact factor, listed on the Norwegian list) and less focus on more marginal publication outlets.” In many ways the Norwegian list is seen to encourage an already ongoing transition in publication practices: “Institution policy is to reward publication in journals that are listed in Web of Science or at least on the Norwegian list, and although I mostly published in such journals before as well I probably focus on them to an even greater extent now” (economist).
For others, and perhaps especially for younger researchers, the list emphasises a development towards an international audience which may be at odds with their own views: “I am trying to combat the shift towards international publication by also writing in Swedish, but it is hard when you are at an early stage in your career: you need those Norwegian listed journal publications” (educational researcher). In some cases the ‘battle’ is already lost, as for this respondent, who presents the model as a ‘fait accompli’: “I have become aware of that I cannot escape the Norwegian system” (scholar in history and archaeology).
Overall, these glimpses from researchers in SSH show a rather ambivalent response to the introduction of the Norwegian publication indicator. For some it merely emphasises current trends within their field, while for others it comes to represent a more radical, and less welcome, shift in publication practices. Similarly, while several respondents prefer points in the Norwegian model—not least because it covers books—others are less approving.
5 Concluding discussion
As individuals and organizations we take comfort in points and numbers as concrete manifestations of ‘performance’, and the attractiveness of the Norwegian model can largely be attributed to its capacity to effectively reduce uncertainty. Crucial for this ability is the ‘publication indicator’, through which publications in various forms can be turned into points, which can then be transformed into recognition and resources. The indicator’s capacity to reduce uncertainty concerning the ‘value’ of publications in fields where few established bibliometric measures are applicable is especially important for understanding the appeal of the Norwegian model. In comparison with rival systems, such as indicators based on citation data, it also does well in terms of predictability and transparency (Ahlgren, Colliander & Persson, 2012). The possibility for researchers themselves to engage in the process of suggesting publication channels for level 1 or 2 is especially worth emphasizing, and this opportunity is likely to strengthen trust in the system. However, when only parts of the system—in most cases the ‘publication indicator’—are imported and adapted into another national context, crucial components of the model are lost. For example, if databases with complete and accurate coverage are missing, the system will work less well. Most problematic, however, is how the ‘publication indicator’ may start to live a life of its own when adapted and utilized in a context for which it was not designed. Such a development is evident in Sweden, where local models make use of distinct parts of the Norwegian model, yet with adaptations and additions that reflect their own needs. In many cases it would actually be more relevant to view these as ‘models inspired by the Norwegian one’ rather than as direct applications.
Moreover, it is evident that Swedish researchers, even at the individual level, are now evaluated based on points in this model, and it appears that they themselves make use of it when selecting publication channels, and when showcasing and assessing their work. Yet important parts of the model, such as fractionalisation of authorship and normalization between fields, are often not used when it is applied at the local level. Additionally, Swedish researchers, unlike their Norwegian colleagues, are not engaged in selecting the journals and book publishers on the Norwegian list, and they therefore have little influence on the grading of publication outlets.
Consequently, the popularity of the Norwegian model can also be positioned as a problem; when something is popular and widely diffused, it easily becomes simplified and distorted. In a Swedish context it is evident that the Norwegian model is used and adapted in ways and contexts for which it was not designed. In principle, the “Norwegian publication indicator” now lives a life of its own, separated from the national system that it was designed for. Similar to other indicators, like the Journal Impact Factor, the “Norwegian model” now exists as one measure among others, separated from its original context and operating well beyond the control of its inventors. Moreover, it is likely that the “publication indicator” will have considerable influence in various contexts even if it is formally abandoned in a performance-based allocation system. An example of this phenomenon, what we may call the “after-life of indicators”, is the ERA list in Australia, which was formally used only for a short period, yet this ranked list of journals still plays an important role among researchers (Hammarfelt & Haddow, 2018). Previous studies have pointed to the problem of the “Norwegian publication indicator” being used in unintended ways (Aagaard, 2015; Hammarfelt et al., 2016), and actions, such as “inter-institutional learning areas”, have been implemented in order to limit inappropriate use (MLE on Performance-based Funding of Public Research Organisations, 2018). Such efforts are commendable, yet there is a risk that the Norwegian publication indicator has reached a level of popularity, and visibility, among researchers such that actions at the institutional level will not be sufficient. Paradoxically, the “success” of the Norwegian model may very well be its greatest problem.
Acknowledgements
The author is grateful for the suggestions and encouragement provided by Ping Meng and Gunnar Sivertsen. The author would like to thank Gaby Haddow for generously permitting the use of data from our joint project on metric use in the humanities and social sciences. This study was supported by the Swedish Foundation for the Social Sciences and Humanities (Grant No. SGO14-1153:1).
References
Aagaard, K. (2015). How incentives trickle down: Local use of a national bibliometric indicator system. Science and Public Policy, 42(5), 725–737.
Aagaard, K., Bloch, C., & Schneider, J. W. (2015). Impacts of performance-based research funding systems: The case of the Norwegian Publication Indicator. Research Evaluation, 24(2), 106–117.
Ahlgren, P., Colliander, C., & Persson, O. (2012). Field normalized citation rates, field normalized journal impact and Norwegian weights for allocation of university research funds. Scientometrics, 92(3), 767–780.
Dahler-Larsen, P. (2011). The evaluation society. Stanford University Press.
Edlund, P., & Wedlin, L. (2017). Den kom flygande genom fönstret. Införandet av ett mätsystem för resursfördelning till forskning. In L. Wedlin & H. Pallas (Eds.), Det ostyrda universitetet: Perspektiv på styrning, autonomi och reform av svenska lärosäten (pp. 216–243). Göteborg: Makadam Förlag.
Espeland, W. N., & Sauder, M. (2016). Engines of anxiety: Academic rankings, reputation, and accountability. Russell Sage Foundation.
Haddow, G., & Hammarfelt, B. (to appear). Quality, impact and quantification: Indicators and metrics use by social scientists. Journal of the Association for Information Science and Technology.
Hammarfelt, B., & de Rijcke, S. (2015). Accountability in context: Effects of research evaluation systems on publication practices, disciplinary norms, and individual working routines in the faculty of Arts at Uppsala University. Research Evaluation, 24(1), 63–77.
Hammarfelt, B., & Haddow, G. (2018). Conflicting measures and values: How humanities scholars in Australia and Sweden use and react to bibliometric indicators. Journal of the Association for Information Science and Technology, 69(7), 924-935.
Hammarfelt, B., Nelhans, G., Eklund, P., & Åström, F. (2016). The heterogeneous landscape of bibliometric indicators: Evaluating models for allocating resources at Swedish universities. Research Evaluation, 25(3), 292–305.
MLE on Performance-based Funding of Public Research Organisations. European Commission. (2018). Retrieved August 24, 2018, from /en/policy-support-facility/mle-performance-based-funding-systems.
Power, M. (1997). The audit society: Rituals of verification. OUP Oxford.
Schneider, J. W. (2009). An Outline of the Bibliometric Indicator Used for Performance-Based Funding of Research Institutions in Norway. European Political Science, 8(3), 364–378.
Sivertsen, G. (2016). Publication-based funding: The Norwegian model. In M. Ochsner, S. E. Hug, & H. D. Daniel (Eds.), Research Assessment in the Humanities (pp. 79–90). Springer International Publishing.
Sivertsen, G. (2018). The Norwegian Model in Norway. Journal of Data and Information Science, 3(4), 1–17.
Waltman, L. (2017). Special section on performance-based research funding systems. Journal of Informetrics, 11(3), 904. Retrieved from https://doi.org/10.1016/j.joi.2017.05.015