, and to investigate the arguments for redesigning the 2010 model, based mostly on formal criteria, and to implement the 2018 model, which emphasises the role of the publisher.
This study is structured as follows: first, we present how scholarly book publications are evaluated in various European research evaluation models. We then show the framework of book evaluation in the 2010 model and describe the reforms of the Polish PRFS. In the fifth section, we present the key systemic changes implemented in the new Polish PRFS and describe the framework of the publisher list
Loet Leydesdorff, Wouter de Nooy and Lutz Bornmann
Ramanujacharyulu (1964) provided a graph-theoretical algorithm to select the winner of a tournament on the basis of the total scores of all the matches, whereby both gains and losses are taken into consideration. Prathap & Nishy (under review) proposed to use this power-weakness ratio (PWR) for citation analysis and journal ranking. PWR has been proposed for measuring journal impact with the arguments that it handles the rows and columns in the asymmetrical citation matrix symmetrically, its recursive algorithm (which it shares with other
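The recursive scheme described above can be sketched in code. This is an illustrative reading of PWR under stated assumptions, not the authors' exact implementation: the citation matrix `C`, the iteration count, and the normalisation step are all hypothetical choices. "Power" accumulates recursively over citations received (gains), "weakness" over citations given (losses), and PWR is their component-wise ratio.

```python
import numpy as np

def power_weakness_ratio(C, iterations=100):
    """Sketch of Ramanujacharyulu's power-weakness ratio (PWR).

    C[i, j] = citations from unit i to unit j (hypothetical
    asymmetrical citation matrix). Rows and columns are treated
    symmetrically: power iterates the transposed matrix (gains),
    weakness iterates the matrix itself (losses).
    """
    n = C.shape[0]
    power = np.ones(n)
    weakness = np.ones(n)
    for _ in range(iterations):
        power = C.T @ power      # power accrues from being cited
        weakness = C @ weakness  # weakness accrues from citing
        power /= power.sum()     # normalise to keep iteration stable
        weakness /= weakness.sum()
    return power / weakness

# Toy example: unit 0 is cited most and cites least,
# so its PWR should be the largest of the three.
C = np.array([[0., 1., 0.],
              [2., 0., 1.],
              [3., 1., 0.]])
pwr = power_weakness_ratio(C)
```

With repeated normalised multiplication, power and weakness converge to the dominant eigenvectors of the citation matrix and its transpose, which is what makes the measure recursive in the same sense as other eigenvector-based indicators.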
Machine learning, including graph-based, tensor-based and deep learning approaches, active learning, merging heterogeneous textual and biological data, etc.;
Applying advanced Natural Language Processing (NLP) such as sentiment analysis, argumentation analysis, or recognizing humor and sarcasm;
Use of outside information to guide discovery, such as bibliometrics, distant supervision, or cross-corpus information;
Use of implicit information to guide discovery;
Evidence synthesis and summarization techniques to guide discovery;
Identifying findings that have low
communication) and gives examples of how large datasets are being used to make discoveries in those different knowledge domains. The title is based on Jim Gray’s argument that three existing scientific paradigms (empirical, theoretical, and computational) are now being augmented by the new paradigm of data exploration. Data exploration is fundamentally dependent on capture, curation, and analysis of data streams. The curatorial function falls squarely within the domain of information science and information schools have much to contribute to data science as it develops in the
Peiling Wang, Sukjin You, Rath Manasa and Dietmar Wolfram
reviews. The authors found there was no association between review quality and signing, but concluded that blinding improved the quality of reviews based on human judgments. This latter conclusion has changed over time. Proposals for OPR extend back at least to the early days of Web-based Open Access journals. Sumner and Shum (1996) proposed pre- and post-publication OPR (which they called computer-supported collaborative argumentation) for a newly created electronic OA journal, arguing that OPR was central to the journal’s operation and for opening up scholarly debate
Dangzhi Zhao, Alicia Cappello and Lucinda Johnston
there can be much argument with the premise that an author who is cited more than once in an article might have more relevance, and/or importance than an author who is cited only once in an article” (pp. 20–21). Herlach (1978) found that multi-citations are about 30% more topically relevant to the citing paper than uni-citations. Bonzi (1982) confirmed results from Herlach (1978) and Voos and Dagaev (1976) that multi-citations can be used as a good predictor of importance or relevance to the citing paper. Tang and Safer (2008) found that giving high
argument. Dr. Seneff provides plausible mechanisms.
Cell phone subscriptions have also risen dramatically since the mid-1990s, and, in a 2009 study, correlated quite well with the increase in autism (BA, 2009). A number of researchers have provided plausible mechanisms, one of the more compelling recent ones being that of Dr. Martin Pall (Pall, 2015). He also believes there could be synergy between EMFs and toxic chemical stimuli.
There may be other contributing factors that would have some degree of correlation with the rapid increase of autism we have seen over
such arguments in one direction or the other.
There seems to be a broad consensus, however, that the research performed at the Flemish universities is of high quality, and that this is partially thanks to a close monitoring of research activity and impact. Indeed, the datasets that have been set up in view of the bibliometric parameters of the BOF-key have become references for many other processes at the level of institutions and the government. Still, the conceptual definition and delimitation of research publications remains challenging given the constant
Xiaoling Liu, Mihai Păunescu, Viorel Proteasa and Jinshan Wu
of smaller variation and thus renders the comparison between populations more reliable. In this case, theoretical arguments for choosing a specific segment as being representative of the entire population are strongly required. We pursued this stream of research in Proteasa et al. (2017).
In this article, we apply the former analytical approach in an empirical context: we calculate the minimum representative size for six medicine departments in Romania in order to allow for reliable comparisons between them as collective units.
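The excerpt does not reproduce the formula for the "minimum representative size" here. As a rough illustration only, a standard normal-approximation sample-size bound (an assumed stand-in, not necessarily the authors' method) captures the underlying idea: the smaller the relative variation of the chosen segment, the fewer papers are needed for a reliable comparison between departments.

```python
import math

def min_representative_size(sigma, mu, rel_error=0.05, z=1.96):
    """Smallest n so the standard error of the sample mean stays within
    rel_error * mu at the confidence level implied by z (normal
    approximation). Hypothetical stand-in for a 'minimum representative
    size': n >= (z * sigma / (rel_error * mu))**2.
    """
    return math.ceil((z * sigma / (rel_error * mu)) ** 2)

# e.g. mean citation rate 50 with standard deviation 10, tolerating
# a 5% relative error at ~95% confidence:
n = min_representative_size(sigma=10, mu=50)  # → 62
```

The bound grows with the coefficient of variation sigma/mu, which is consistent with the text's point that a segment of smaller variation makes comparisons between populations more reliable.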
Let us denote a
as the “audit society” (Power, 1997) or “evaluation society” (Dahler-Larsen, 2011), audits, assessment procedures and evaluation systems are all designed to limit risk and reduce uncertainty. Following this line of argument, it is clear that one of the main purposes of performance measurement and evaluation systems is to reduce uncertainty: for governments and taxpayers such systems serve the purpose of ensuring that resources are spent effectively, while they at the same time provide individual researchers with yardsticks and benchmarks through which they can