Browse

You are looking at 1 - 10 of 330 items for :

  • Databases and Data Mining
Open access

Björn Hammarfelt

Abstract

Purpose

The “Norwegian model” has become widely used for assessment and resource allocation purposes. This paper investigates why the model has become so widespread and influential.

Approach

A theoretical background is outlined in which the reduction of “uncertainty” is highlighted as a key feature of performance measurement systems. These theories are then drawn upon when revisiting previous studies of the Norwegian model, its use, and reactions to it, in Sweden.

Findings

The empirical examples, which concern more formal use at the level of universities as well as responses from individual researchers, show how particular parts—especially the “publication indicator”—are employed in Swedish academia. The discussion posits that the attractiveness of the Norwegian model can largely be explained by its ability to reduce complexity and uncertainty, even in fields where traditional bibliometric measurement is less applicable.

Research limitations

The findings presented should be regarded as examples that can be used for discussion, but one should be cautious about interpreting them as representative of broader sentiments and trends.

Implications

The sheer popularity of the Norwegian model, leading to its application in contexts for which it was not designed, can be seen as a major challenge for the future.

Originality

This paper offers a novel perspective on the Norwegian model by focusing on its general “appeal”, rather than on its design, use, or (mis)use.

Open access

Emanuel Kulczycki and Przemysław Korytkowski

Abstract

Purpose

This study aims to present the key systemic changes in the Polish book evaluation model, focusing on the publisher list, as inspired by the Norwegian Model.

Design/methodology/approach

In this study, we reconstruct the frameworks of the 2010 and 2018 models of book evaluation in Poland within the performance-based research funding system.

Findings

For almost 20 years, the book evaluation system in Poland has been based on the verification of various technical criteria (e.g. the length of the book). The new 2018 model is based on the principle of prestige inheritance (a book is worth as much as its publisher is) and is inspired by the publisher list used in the Norwegian Model. In this paper, we argue that this solution may be a more balanced policy instrument than the previous 2010 model, in which neither the quality of the publisher nor the quality of the book played any role in the evaluation.

Research limitations

We work from the framework of the 2018 model of book evaluation specified in the law on higher education and science of 20 July 2018, as implementing acts are not yet available.

Practical implications

This study may provide a valuable point of reference on how structural reforms in the research evaluation model were implemented on a country level. The results of this study may be interesting to policy makers, stakeholders and researchers focused on science policy.

Originality/value

This is the first study to present the new framework of the Polish research evaluation model and its policy instruments for scholarly book evaluation. We describe what motivated policy makers to change the book evaluation model, and what arguments were explicitly raised in favour of the new solution.

Open access

Kaare Aagaard

Abstract

Purpose

The main goal of this study is to outline and analyze the Danish adoption and translation of the Norwegian Publication Indicator.

Design/methodology/approach

The study takes the form of a policy analysis mainly drawing on document analysis of policy papers, previously published studies and grey literature.

Findings

The study highlights a number of crucial factors relating both to the Danish process and to the final Danish result, underscoring that the Danish BFI model is a quite different system from its Norwegian counterpart. One consequence of these process and design differences is that the broader legitimacy of the Danish BFI today appears to be quite poor. Reasons for this include: unclear and shifting objectives throughout the process; limited willingness among stakeholders to take ownership of the model; a lack of communication throughout the implementation process; and an apparent underestimation of the challenges associated with the use of bibliometric indicators.

Research limitations

The conclusions of the study are based on the authors’ interpretation of a protracted and complex process involving many different stakeholders. The format of this article does not allow for detailed documentation of all elements, but further details can be provided upon request.

Practical implications

The analysis may feed into current policy discussions on the future of the Danish BFI.

Originality/value

Some elements of the present analysis have previously been published in Danish outlets, but this article represents the first publication on this issue targeting a broader international audience.

Open access

Gunnar Sivertsen

Abstract

The “Norwegian Model” attempts to comprehensively cover all the peer-reviewed scholarly literature in all areas of research in one single weighted indicator. Scientific production is thereby made comparable across departments and faculties, within and between research institutions, and the indicator may serve institutional evaluation and funding. This article describes the motivation for creating the model in Norway; how it was designed, organized and implemented; and the effects of and experiences with the model. The article ends with an overview of a new type of bibliometric study based on the kind of comprehensive national publication data that the Norwegian Model provides.

Open access

Liam Cleere and Lai Ma

Abstract

University College Dublin (UCD) has operated the Output-Based Research Support Scheme (OBRSS) since 2016. Adapted from the Norwegian model, the OBRSS rewards individual academic staff through a points system based on the number of publications and doctoral students. This article describes the design and implementation of the OBRSS, including the creation of the ranked publication list and points system, and the infrastructure requirements. Some results of the OBRSS will be presented, focusing on the coverage of publications reported in the OBRSS ranked publication list and in Scopus, as well as information about spending patterns. Challenges such as evaluating the OBRSS in terms of fairness, transparency, and effectiveness will also be discussed.

Open access

Tim C. E. Engels and Raf Guns

Abstract

The BOF-key is the performance-based research funding system used in Flanders, Belgium. In this paper we describe the historical background of the system, its current design and organization, and its effects on the Flemish higher education landscape. The BOF-key in its current form relies on three bibliometric parameters: publications in Web of Science, citations in Web of Science, and publications in a comprehensive regional database for SSH publications. Taken together, these parameters make the BOF-key a unique variant of the Norwegian model: while the system relies to a large extent on a commercial database, it avoids the problem of inadequate coverage of the SSH. Because the bibliometric parameters of the BOF-key are reused in other funding allocation schemes, their overall importance to the Flemish universities is substantial.

Open access

Janne Pölönen

Abstract

The purpose of this article is to describe the development, components and properties of the publication indicator that the Ministry of Education and Culture in Finland uses to allocate direct core funding to universities each year. Since 2013, 13% of the core funding has been allocated on the basis of a publication indicator that, like the Norwegian model, is based on comprehensive national-level publication data, currently provided by the VIRTA publication information service. In 2015, the publication indicator was complemented with other components of the Norwegian model, namely quality-weighted publication counts based on the national Publication Forum authority list of publication channels, with ratings established by experts in each field. The funding model allocates around 1.6 billion euros to universities annually, with the publication indicator distributing over 200 million euros of this sum. Besides its role in the funding model, the indicator provides comparable data for monitoring the research performance of Finnish universities, fields and subunits. The indicator may also be used in universities’ local funding models and research management systems, sometimes even for individual-level evaluation. The positive and negative effects of the indicator have been extensively discussed and speculated upon. Since 2011, the productivity of Finnish universities appears to have increased in terms of both the quantity and the quality of publications.

Open access

Andrija Varenina, Tomislav Malvić and Mate Režić

Abstract

The Ladislavci Field (oil and gas reservoirs) is located 40 km from the city of Osijek, Croatia. The oil reservoir lies in a structural–stratigraphic trap in Miocene rocks of the Vukovar formation (informally named EL, F1a and F1b). The shallower gas reservoir is of Pliocene age, i.e. part of the Osijek sandstones (informally named B). The oil reservoirs consist of limestones, breccias and conglomerates, and the gas is accumulated in sandstones. Using neural networks, it was possible to assess the applicability of neural algorithms in well-log analyses and, using the resulting neural model, to predict reservoir lithology from few or no log data. The neural networks were trained on data from two wells (A and B), collected from the interval starting at the Sarmatian/Lower Pannonian boundary (EL marker Rs7) and extending to the well bottom. The inputs were data from spontaneous potential (SP) and resistivity (R16 and R64) logs; they were used for training and validation as well as for the final prediction of the lithological composition of the analysed field. The multilayer perceptron (MLP) network was selected as the most appropriate.
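The approach described above can be illustrated with a minimal sketch of an MLP forward pass mapping the three well-log inputs (SP, R16, R64) to lithology class probabilities. The layer sizes, weights, class count, and the sample reading are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative MLP: 3 log inputs (SP, R16, R64) -> hidden tanh layer -> 4 lithology classes.
# Weights are random stand-ins; in the study they would be learned from wells A and B.
rng = np.random.default_rng(0)

n_inputs, n_hidden, n_classes = 3, 8, 4
W1 = rng.normal(size=(n_inputs, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_classes))
b2 = np.zeros(n_classes)

def mlp_forward(x):
    """One forward pass: hidden tanh layer, then softmax over lithology classes."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

sample = np.array([0.5, -1.2, 0.8])  # hypothetical normalized SP, R16, R64 readings
probs = mlp_forward(sample)
print(probs.shape, round(float(probs.sum()), 6))
```

In practice the weights would be fitted by backpropagation on the labelled intervals from the two training wells; the sketch only shows how a trained network turns log readings into class probabilities.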

Open access

György Csomós

Abstract

Purpose

Recently, a vast number of scientific publications have been produced in cities in emerging countries. It has long been observed that the publication output of Beijing has exceeded that of any other city in the world, including such leading centres of science as Boston, New York, London, Paris, and Tokyo. Researchers have suggested that, instead of focusing on cities’ total publication output, the quality of that output in terms of the number of highly cited papers should be examined. However, in the period from 2014 to 2016, Beijing produced as many highly cited papers as Boston, London, or New York. In this paper, another method is proposed to measure cities’ publishing performance: publishing efficiency, i.e., the ratio of highly cited articles to all articles produced in a city.
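The proposed indicator is a simple ratio, which a short sketch makes concrete. The city names and counts below are illustrative placeholders, not figures from the study.

```python
# Publishing efficiency: highly cited articles divided by all articles per city.
# Counts are made-up illustrations, not data from the paper.
cities = {
    "Boston": {"articles": 50_000, "highly_cited": 1_200},
    "Beijing": {"articles": 90_000, "highly_cited": 1_100},
    "London": {"articles": 60_000, "highly_cited": 1_300},
}

def publishing_efficiency(stats):
    """Ratio of highly cited articles to total articles for one city."""
    return stats["highly_cited"] / stats["articles"]

# Rank cities from most to least efficient, as the study does for 554 cities.
ranked = sorted(cities, key=lambda c: publishing_efficiency(cities[c]), reverse=True)
for city in ranked:
    print(f"{city}: {publishing_efficiency(cities[city]):.4f}")
```

Note how the ranking differs from one by raw output: the city with the most articles is not necessarily the most efficient, which is exactly the distinction the indicator is designed to capture.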

Design/methodology/approach

First, 554 cities are ranked based on their publishing efficiency, then some general factors influencing cities’ publishing efficiency are revealed. The general factors examined in this paper are as follows: the linguistic environment of cities, cities’ economic development level, the location of excellent organisations, cities’ international collaboration patterns, and their scientific field profile. Furthermore, the paper examines the fundamental differences between the general factors influencing the publishing efficiency of the top 100 most efficient cities and the bottom 100 least efficient cities.

Findings

Based on the research results, the conclusion can be drawn that a city’s publishing efficiency will be high if it meets the following general conditions: it is in a country in the Anglosphere–Core; it is in a high-income country; it is home to top-ranked universities and/or world-renowned research institutions; researchers affiliated with that city collaborate most intensely with researchers affiliated with cities in the United States, Germany, England, France, Canada, Australia, and Italy; and its highly cited articles are concentrated in high-impact multidisciplinary journals, in the health sciences (especially general internal medicine and oncology), and in the natural sciences (especially physics, astronomy, and astrophysics).

Research limitations

It is always problematic to demarcate the boundaries of cities (e.g., New York City vs. Greater New York), and there is no consensus among researchers on this issue. The Web of Science presents the names of cities in the addresses reported by the authors of publications. In this paper, cities correspond to the spatial units between the country/state level and the institution level as indicated in the Web of Science. Furthermore, it is necessary to highlight that the Web of Science is biased towards English-language journals and journals published in the field of biomedicine. These facts may influence the outcome of the research.

Practical implications

Publishing efficiency, as an indicator, shows how successful a city is at the production of science. Naturally, cities have limited opportunities to compete for components of the science establishment (e.g., universities, hospitals). However, cities can compete to attract innovation-oriented companies, high-tech firms, and the R&D facilities of multinational companies by, for example, establishing science parks. The positive effect of this process on a city’s performance in science can be observed in the example of Beijing, whose publishing efficiency has increased rapidly.

Originality/value

Previous scientometric studies have examined cities’ publication output in terms of the number of papers or the number of highly cited papers, which are largely size-dependent indicators; this paper, however, attempts to present a more quality-based approach.

Open access

Nees Jan van Eck and Ludo Waltman

Abstract

Purpose

To gain a better understanding of how university rankings are used.

Design/methodology/approach

A detailed analysis of the activities of visitors to the website of the CWTS Leiden Ranking.

Findings

Visitors to the Leiden Ranking website originate disproportionately from specific countries. They are more interested in impact indicators than in collaboration indicators, while they are about equally interested in size-dependent and size-independent indicators. Many visitors do not seem to realize that they should decide for themselves which criterion they consider most appropriate for ranking universities.

Research limitations

The analysis is restricted to the website of a single university ranking. Moreover, it does not provide any detailed insight into the motivations of visitors to university ranking websites.

Practical implications

The Leiden Ranking website may need to be improved to make it clearer to visitors that they should decide for themselves which criterion they want to use for ranking universities.

Originality/value

This is the first analysis of the activities of visitors to a university ranking website.