A prevalent belief is that it is advantageous to have surname initials that come early in the alphabet (early surname initials) in academic fields in which authors are ordered alphabetically (alphabetic academic fields), because first authors are more visible. However, it is not certain that the advantage is strong enough to affect academic careers. In this paper, the advantage of having such early surname initials is analyzed using data from 1,345 course catalogs that span 100 years. We obtained the academic titles and surname initials of 19,353 faculty members who appeared 211,816 times in these course catalogs. Two alphabetic academic fields – economics and mathematics – and four other academic fields that are not alphabetic were analyzed. We found that in some years faculty members with early surname initials are more likely to be full professors; in many other years, however, they are less likely to be full professors. We also analyzed the career path of each faculty member. Economists with early surname initials are found to be more likely to become full professors, but this result is not significant and does not extend to mathematicians.
This paper reports the results of an international survey on research data management (RDM) services in libraries. More than 240 practicing librarians responded to the survey and outlined their roles and levels of preparedness in providing RDM services, the challenges their libraries face, and the knowledge and skills they deemed essential to advancing RDM practice. The findings revealed not only a number of locational and organizational differences in the RDM services and tools provided but also the impact of the level of preparedness and the degree of development of RDM roles on the types of RDM services provided. Respondents’ perceptions of both the current challenges and the future roles of RDM services were also examined. With a majority of respondents recognizing the importance of RDM and hoping to receive more training while expressing concerns about a lack of bandwidth or capacity in this area, it is clear that institutional commitment to resources and training opportunities is crucial to growing RDM services. Data librarianship is an emergent profession, and data librarians need to be nurtured, mentored, and further trained. The study makes a case for developing a global community of practice in which data librarians work together, exchange information, help one another grow, and strive to advance RDM practice around the world.
Internationalization is important for research quality and for specialization in new themes in the social sciences and humanities (SSH). Interaction with society, however, is just as important in these areas of research for realizing the ultimate aims of knowledge creation. This article demonstrates how the heterogeneous publishing patterns of the SSH may reflect and fulfill both purposes. The limited coverage of the SSH in Scopus and Web of Science is discussed along with ideas about how to achieve a more complete representation of all the languages and publication types that are actually used in the SSH. A dynamic and empirical concept of balanced multilingualism is introduced to support combined strategies for internationalization and societal interaction. The argument is that all the communication purposes in all different areas of research, and all the languages and publication types needed to fulfill these purposes, should be considered in a holistic manner, without exclusions or priorities, whenever research in the SSH is evaluated.
The Identifier Services (IDS) project researched and built a prototype for managing distributed genomics datasets remotely and over time. Inspired by archival concepts, IDS allows researchers to track dataset evolution through multiple copies, modifications, and derivatives, independent of where data are located – both symbolically, in the research lifecycle, and physically, in a repository or storage facility. The prototype implementation is based on a three-step data modeling process involving: a) understanding and recording different researcher workflows, b) mapping the workflows and data to a generic data model and identifying functions, and c) integrating the data model as architecture and interactive functions into cyberinfrastructure (CI). Identity functions are operationalized as continuous tracking of authenticity attributes, including data location, differences between seemingly identical datasets, metadata, data integrity, and the roles of the different types of local and global identifiers used during the research lifecycle. CI resources were used to conduct identity functions at scale, including scheduling content comparison tasks on high-performance computing resources. The prototype was developed and evaluated using six data test cases, and feedback was received through a focus-group activity. While there are some technical roadblocks to overcome, our project demonstrates that identity functions are innovative solutions for managing large distributed genomic datasets.
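One of the identity functions the abstract describes, detecting differences between seemingly identical dataset copies, is commonly implemented by comparing content checksums. The sketch below is a minimal illustration of that idea, not the IDS codebase; the function names are illustrative.

```python
import hashlib


def file_checksum(path, algo="sha256", chunk_size=1 << 20):
    """Stream a file through a hash so large genomic files need not fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def compare_copies(paths):
    """Group seemingly identical dataset copies by content hash.

    Copies in the same group are byte-identical; copies in different
    groups have diverged and may need reconciliation.
    """
    groups = {}
    for p in paths:
        groups.setdefault(file_checksum(p), []).append(p)
    return groups
```

At scale, each `file_checksum` call is independent, which is why such content comparison tasks parallelize naturally onto high-performance computing resources.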
With the increasing volume of digital journal submissions, there is a need to deploy new, scalable computational methods to improve information accessibility. One common task is to identify useful information and named entities in text documents such as journal article submissions. However, many technical challenges limit the applicability of general methods, and general-purpose tools are lacking. In this paper, we present the domain informational vocabulary extraction (DIVE) project, which aims to enrich digital publications through the detection of entities and key informational words and by adding additional annotations. In what is, to our knowledge, a first of its kind, our system engages the authors of peer-reviewed articles and the journal publishers by integrating the DIVE implementation into the manuscript proofing and publication process. The system implements multiple strategies for biological entity detection, including regular expression rules, an ontology, and a keyword dictionary. The extracted entities are then stored in a database and made accessible through an interactive web application for curation and evaluation by authors. Through the web interface, authors can make additional annotations and corrections to the current results. These updates can then be used to improve entity detection in subsequently processed articles. We describe our framework and deployment in detail. In a pilot program, we deployed the first phase of development as a service integrated with the journals Plant Physiology and The Plant Cell, published by the American Society of Plant Biologists (ASPB). We present usage statistics since the service entered production in April 2018. We compare automated recognition results from DIVE with results from author curation and show that the service achieves, on average, 80% recall and 70% precision per article.
In contrast, an existing biological entity extraction tool, a biomedical named entity recognizer (ABNER), achieves only 47% recall and returns a much larger candidate set.
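The combination of strategies the DIVE abstract names, regular expression rules plus a keyword dictionary, can be sketched as follows. The keyword set and the locus-identifier pattern are illustrative assumptions, not DIVE's actual vocabulary or rules.

```python
import re

# Illustrative keyword dictionary; DIVE's real vocabulary is far larger
# and is supplemented by an ontology.
GENE_KEYWORDS = {"FLC", "FT", "AGAMOUS"}

# Illustrative regex for Arabidopsis-style locus identifiers (e.g. AT1G65480).
LOCUS_RE = re.compile(r"\bAT[1-5MC]G\d{5}\b")

def detect_entities(text):
    """Combine a regex rule with dictionary lookup, mirroring how DIVE
    layers multiple detection strategies over the same text."""
    hits = set(LOCUS_RE.findall(text))
    for token in re.findall(r"\b\w+\b", text):
        if token in GENE_KEYWORDS:
            hits.add(token)
    return sorted(hits)
```

Entities found this way would then be stored and surfaced to authors for curation, whose corrections feed back into the dictionary for later articles.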
The emerging transdiscipline of Computational Archival Science (CAS) links frameworks such as Brown Dog and repository software such as Digital Repository At Scale To Invite Computation (DRAS-TIC) to develop an understanding of working with cultural digital collections at scale. The DRAS-TIC and Brown Dog projects serve here as the basis for an expandable, distributed storage/service architecture with on-demand, horizontally scalable, integrated digital preservation and analysis services.
As more and more users adopt different devices, such as personal computers, iPads, and smartphones, they can access OPAC (online public access catalog) services and other digital library services in different contexts. As a result, user behavior spans multiple devices, enriching and diversifying the behavioral data collected in digital libraries. These large volumes of user data challenge digital libraries to analyze user behavior, such as search preferences and borrowing habits. In this study, we examine users’ cross-device transition behavior when using OPAC. Based on a large-scale OPAC transaction log, we study the online activities surrounding device transitions. To predict the follow-up activities users may take, and the next device users may use, we extract features from several perspectives and analyze their importance. We find that the activity and the time interval on the first device are the most important features for predicting the user’s next activity and next device. In addition, operating-system features help to better predict the next device, and the next device used is in turn predictive of the next activity after the transition. This study examines cross-device transition prediction in library OPAC, which can help libraries provide smart services for users accessing OPAC on different devices.
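The prediction setup described above, features from the first device (last activity, time interval, operating system) used to predict the next device, can be sketched as a simple frequency-based classifier. This is a stand-in for whatever model the study actually used; the feature names and values are illustrative.

```python
from collections import Counter, defaultdict


class NextDevicePredictor:
    """Toy classifier: predicts the next device from categorical features
    (last activity, binned time interval, OS) by majority vote for each
    feature combination, falling back to the overall majority class."""

    def __init__(self):
        self.by_features = defaultdict(Counter)
        self.overall = Counter()

    def fit(self, rows):
        # rows: iterable of (features_tuple, next_device) pairs
        for features, device in rows:
            self.by_features[features][device] += 1
            self.overall[device] += 1
        return self

    def predict(self, features):
        counts = self.by_features.get(features) or self.overall
        return counts.most_common(1)[0][0]
```

Feature importance in such a setup can then be probed by dropping one feature at a time and measuring the loss in accuracy, consistent with the study's finding that first-device activity and time interval carry the most signal.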
Open science is prompting wide efforts to make research data available for broader use. However, sharing data is complicated by important protections on the data (e.g., protections of privacy and intellectual property). The spectrum of options between fully open-access data and data that simply cannot be shared at all is quite limited. This paper puts forth a generalized remote secure enclave as a socio-technical framework consisting of policies, human processes, and technologies that work hand in hand to enable controlled access to and use of restricted data. Based on our experience implementing the enclave for computational, analytical access to a massive collection of in-copyright texts, we discuss the synergies and trade-offs between software components and policy and process components in striking the right balance between safety of the data, ease of use, and efficiency.
The concept of Big Data is popular in a variety of domains. The purpose of this review was to summarize the features, applications, analysis approaches, and challenges of Big Data in health care. Big Data in health care has its own features, such as heterogeneity, incompleteness, timeliness and longevity, privacy, and ownership. These features bring a series of challenges for data storage, mining, and sharing that must be addressed to promote health-related research. To deal with these challenges, analysis approaches focusing on Big Data in health care need to be developed, and laws and regulations governing its use need to be enacted. From a patient perspective, the application of Big Data analysis could bring about improved treatment and lower costs. In addition to patients, governments, hospitals, and research institutions could also benefit from Big Data in health care.