SecureNN: 3-Party Secure Computation for Neural Network Training

Abstract

Neural Networks (NN) provide a powerful method for machine learning training and inference. To effectively train, it is desirable for multiple parties to combine their data – however, doing so conflicts with data privacy. In this work, we provide novel three-party secure computation protocols for various NN building blocks such as matrix multiplication, convolutions, Rectified Linear Units, Maxpool, normalization and so on. This enables us to construct three-party secure protocols for training and inference of several NN architectures such that no single party learns any information about the data. Experimentally, we implement our system over Amazon EC2 servers in different settings. Our work advances the state-of-the-art of secure computation for neural networks in three ways:

1. Scalability: We are the first work to provide neural network training on Convolutional Neural Networks (CNNs) that have an accuracy of > 99% on the MNIST dataset;

2. Performance: For secure inference, our system outperforms prior 2- and 3-server works (SecureML, MiniONN, Chameleon, Gazelle) by 6×–113× (with larger gains obtained in more complex networks). Our total execution times are 2–4× faster than even just the online times of these works. For secure training, compared to the only prior work (SecureML) that considered a much smaller fully connected network, our protocols are 79× and 7× faster than their 2- and 3-server protocols. In the WAN setting, these improvements are more dramatic, and we obtain an improvement of 553×!

3. Security: Our protocols provide two kinds of security: full security (privacy and correctness) against one semi-honest corruption and the notion of privacy against one malicious corruption [Araki et al. CCS’16]. All prior works only provide semi-honest security and ours is the first system to provide any security against malicious adversaries for the secure computation of complex algorithms such as neural network inference and training.

Our gains come from a significant improvement in communication through the elimination of expensive garbled circuits and oblivious transfer protocols.
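The building blocks above rest on additive secret sharing rather than garbled circuits. As a rough illustration only (not SecureNN's actual protocols, which work over 64-bit rings and use specialized sub-protocols for ReLU, Maxpool, and the other layers), the sketch below shows how two parties holding additive shares of two matrices can obtain shares of their product with help from a third party that deals a Beaver-style multiplication triple; the helper never sees the inputs.

```python
# A minimal sketch (NOT SecureNN's protocol) of secure matrix multiplication on
# additive secret shares, with a third party supplying a Beaver-style triple.
# All values live in Z_Q for an illustrative public modulus Q.
import random

Q = 2**61 - 1  # illustrative modulus; SecureNN itself works over Z_{2^64}

def share(x):
    """Split a matrix into two additive shares: x = x0 + x1 (mod Q)."""
    x0 = [[random.randrange(Q) for _ in row] for row in x]
    x1 = [[(v - r) % Q for v, r in zip(row, rrow)] for row, rrow in zip(x, x0)]
    return x0, x1

def reconstruct(x0, x1):
    return [[(a + b) % Q for a, b in zip(r0, r1)] for r0, r1 in zip(x0, x1)]

def matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) % Q for j in range(m)]
            for i in range(n)]

def matadd(a, b, sign=1):
    return [[(x + sign * y) % Q for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def beaver_triple(n, k, m):
    """Helper party samples random A, B and shares (A, B, C = A*B) to P0, P1."""
    A = [[random.randrange(Q) for _ in range(k)] for _ in range(n)]
    B = [[random.randrange(Q) for _ in range(m)] for _ in range(k)]
    return share(A), share(B), share(matmul(A, B))

def secure_matmul(x_sh, y_sh, triple):
    """P0 and P1 hold shares of X, Y; they end up with shares of X*Y."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    # The masked differences E = X - A and F = Y - B are opened publicly.
    e = reconstruct(matadd(x_sh[0], a0, -1), matadd(x_sh[1], a1, -1))
    f = reconstruct(matadd(y_sh[0], b0, -1), matadd(y_sh[1], b1, -1))
    # X*Y = C + E*B + A*F + E*F; the E*F term is added by one party only.
    z0 = matadd(matadd(c0, matmul(e, b0)), matadd(matmul(a0, f), matmul(e, f)))
    z1 = matadd(matadd(c1, matmul(e, b1)), matmul(a1, f))
    return z0, z1

X = [[1, 2], [3, 4]]
Y = [[5, 6], [7, 8]]
Z = secure_matmul(share(X), share(Y), beaver_triple(2, 2, 2))
assert reconstruct(*Z) == matmul(X, Y)
```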

Open access
Setup-Free Secure Search on Encrypted Data: Faster and Post-Processing Free

Abstract

We present a novel secure search protocol on data and queries encrypted with Fully Homomorphic Encryption (FHE). Our protocol enables an organization (the client) to (1) securely upload an unsorted data array x = (x[1], ..., x[n]) to an untrusted honest-but-curious server, where data may be uploaded over time and from multiple data-sources; and (2) securely issue repeated search queries q for retrieving the first element (i*, x[i*]) satisfying an agreed matching criterion i* = min { i ∈ [n] | IsMatch(x[i], q) = 1 }, as well as fetching the next matching elements with further interaction. For security, the client encrypts the data and queries with FHE prior to uploading, and the server processes the ciphertexts to produce the result ciphertext for the client to decrypt. Our secure search protocol improves over the prior state-of-the-art for secure search on FHE encrypted data (Akavia, Feldman, Shaul (AFS), CCS’2018) in achieving:

Post-processing free protocol where the server produces a ciphertext for the correct search outcome with overwhelming success probability. This is in contrast to returning a list of candidates for the client to postprocess, or suffering from a noticeable error probability, in AFS. Our post-processing freeness enables the server to use secure search as a sub-component in a larger computation without interaction with the client.

Faster protocol: (a) Client time and communication bandwidth are improved by a log² n / log log n factor. (b) The server evaluates a polynomial of degree linear in log n (compared to cubic in AFS), and the overall number of multiplications is improved by up to a log n factor. (c) Employing only GF(2) computations (compared to GF(p) for p ≫ in AFS) gains both further speedup and compatibility with all current FHE candidates.

Order-of-magnitude speedup, exhibited by extensive benchmarks we executed on identical hardware for our implementation versus AFS’s protocols. Additionally, like other FHE based solutions, our solution is setup-free: to outsource elements from the client to the server, no additional actions are performed on x except for encrypting it element by element (each element bit by bit) and uploading the resulting ciphertexts to the server.
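To make the search functionality concrete, the sketch below is a plaintext reference implementation of the agreed matching semantics i* = min { i ∈ [n] | IsMatch(x[i], q) = 1 }; in the actual protocol this computation is carried out by the server over FHE ciphertexts. The equality-based IsMatch is only an illustrative choice.

```python
# Plaintext reference for the functionality the protocol computes under FHE:
# return the first (i*, x[i*]) with IsMatch(x[i*], q) = 1, or None if no match.
from typing import Callable, Optional, Sequence, Tuple

def is_match(element: int, query: int) -> int:
    return 1 if element == query else 0          # illustrative matching criterion

def secure_search_plaintext(
    x: Sequence[int],
    q: int,
    match: Callable[[int, int], int] = is_match,
) -> Optional[Tuple[int, int]]:
    for i, xi in enumerate(x, start=1):          # 1-based indices, as in the abstract
        if match(xi, q) == 1:
            return i, xi                         # (i*, x[i*])
    return None                                  # no matching element

print(secure_search_plaintext([7, 3, 9, 3, 5], 3))   # -> (2, 3)
```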

Open access
Snapdoc: Authenticated snapshots with history privacy in peer-to-peer collaborative editing

Abstract

Document collaboration applications, such as Google Docs or Microsoft Office Online, need to ensure that all collaborators have a consistent view of the shared document, and usually achieve this by relying on a trusted server. Other existing approaches that do not rely on a trusted third party assume that all collaborating devices are trusted. In particular, when inviting a new collaborator to a group, one needs to choose between a) keeping past edits private and sending only the latest state (a snapshot) of the document; or b) allowing the new collaborator to verify that her view of the document is consistent with other honest devices by sending the full history of (signed) edits. We present a new protocol which allows an authenticated snapshot to be sent to new collaborators while both hiding the past editing history and allowing them to verify consistency. We evaluate the costs of the protocol by emulating the editing history of 270 Wikipedia pages; 99% of insert operations were processed within 11.0 ms, and 99% of delete operations within 64.9 ms. An additional benefit of authenticated snapshots is a median 84% reduction in the amount of data sent to a new collaborator compared to a basic protocol that transfers a full edit history.
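For contrast with the authenticated-snapshot approach, the sketch below illustrates the baseline option b) from the abstract: consistency is checked by folding the full edit history into a hash chain and comparing chain heads. It is not Snapdoc's protocol, which avoids revealing the individual past edits.

```python
# Baseline sketch (NOT Snapdoc): consistency via a hash chain over the full
# edit history. Two devices holding the same chain head have replayed the
# same sequence of edits.
import hashlib, json

GENESIS = b"\x00" * 32

def chain_head(edits):
    """Fold the ordered edit history into a single 32-byte head hash."""
    head = GENESIS
    for edit in edits:
        payload = json.dumps(edit, sort_keys=True).encode()
        head = hashlib.sha256(head + payload).digest()
    return head

history = [
    {"op": "insert", "pos": 0, "text": "Hello"},
    {"op": "insert", "pos": 5, "text": " world"},
    {"op": "delete", "pos": 0, "len": 1},
]
tampered = history[:-1] + [{"op": "delete", "pos": 1, "len": 1}]

# Honest devices replaying the same history agree; any divergence is detected
# by comparing heads (e.g., over an authenticated channel).
assert chain_head(history) == chain_head(list(history))
assert chain_head(history) != chain_head(tampered)
```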

Open access
StealthDB: A Scalable Encrypted Database with Full SQL Query Support

Abstract

Encrypted database systems provide a great method for protecting sensitive data in untrusted infrastructures. These systems are built either by using special-purpose cryptographic algorithms that support operations over encrypted data, or by leveraging trusted computing co-processors. Strong cryptographic algorithms (e.g., public-key encryption, garbled circuits) usually result in high performance overheads, while weaker algorithms (e.g., order-preserving encryption) result in large leakage profiles. On the other hand, some encrypted database systems (e.g., Cipherbase, TrustedDB) leverage non-standard trusted computing devices and are designed to work around the architectural limitations of the specific devices used.

In this work we build StealthDB, an encrypted database system based on Intel SGX. Our system can run on any newer-generation Intel CPU. StealthDB has a very small trusted computing base, scales to large transactional workloads, requires minor DBMS changes, and provides relatively strong security guarantees at steady state and during query execution. Our prototype on top of Postgres supports the full TPC-C benchmark with a 30% decrease in average throughput over an unmodified version of Postgres operating on a 2GB unencrypted dataset.

Open access
Tracking Anonymized Bluetooth Devices

Abstract

Bluetooth Low Energy (BLE) devices use public (non-encrypted) advertising channels to announce their presence to other devices. To prevent tracking on these public channels, devices may use a periodically changing, randomized address instead of their permanent Media Access Control (MAC) address. In this work we show that many state-of-the-art devices that implement such anonymization measures are vulnerable to passive tracking that extends well beyond their address randomization cycles. We show that it is possible to extract identifying tokens from the payload of advertising messages for tracking purposes. We present an address-carryover algorithm which exploits the asynchronous nature of payload and address changes to achieve tracking beyond the address randomization of a device. We furthermore identify an identity-exposing attack via a device accessory that allows permanent, non-continuous tracking, as well as an iOS side-channel which allows insights into user activity. Finally, we provide countermeasures against the presented algorithm and other privacy flaws in BLE advertising.
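The core linking idea can be illustrated with a small sketch: because identifying payload tokens and randomized addresses rotate asynchronously, a token that persists across an address change ties the old and new addresses to the same device. This is a simplified reconstruction for illustration, not the paper's exact address-carryover algorithm.

```python
# Simplified illustration of address-carryover style linking: a payload token
# observed under an old address that reappears under a new address links the
# two addresses to one device. Not the paper's exact algorithm.
from collections import defaultdict

def link_addresses(observations):
    """observations: list of (timestamp, mac_address, payload_token)."""
    token_to_macs = defaultdict(list)
    for ts, mac, token in sorted(observations):
        if not token_to_macs[token] or token_to_macs[token][-1] != mac:
            token_to_macs[token].append(mac)
    # Any token seen under more than one address chains those addresses together.
    return {token: macs for token, macs in token_to_macs.items() if len(macs) > 1}

obs = [
    (1, "AA:11", "tok42"), (2, "AA:11", "tok42"),
    (3, "BB:22", "tok42"),            # address rotated, token carried over
    (4, "BB:22", "tok77"),            # token rotated later, under the new address
]
print(link_addresses(obs))            # {'tok42': ['AA:11', 'BB:22']}
```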

Open access
Does a Country/Region’s Economic Status Affect Its Universities’ Presence in International Rankings?

Abstract

Purpose

To study how economic parameters affect the positions of countries/regions with listed higher education institutions in the top 500 of the Academic Ranking of World Universities (ARWU), published by the Shanghai Jiao Tong University Graduate School of Education.

Design/methodology/approach

The methodology capitalises on the multivariate characteristics of the data analysed. The multicollinearity problem is addressed by running principal component analysis prior to regression analysis, using both classical (OLS) and robust (Huber and Tukey) methods.
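A minimal sketch of this pipeline, using synthetic placeholder data rather than the study's indicators, might look as follows: standardize the variables, extract principal components to sidestep multicollinearity, and regress the outcome on the component scores with OLS and with robust Huber and Tukey estimators.

```python
# Hedged sketch of the described pipeline on synthetic placeholder data
# (not the study's dataset or variable set).
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))   # placeholder economic indicators per country/region
y = X @ [1.5, -0.8, 0.3, 0.0, 0.2] + rng.normal(scale=0.5, size=60)  # e.g. ARWU presence

# Principal components of the standardized indicators remove multicollinearity.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
design = sm.add_constant(scores)

ols = sm.OLS(y, design).fit()                                       # classical
huber = sm.RLM(y, design, M=sm.robust.norms.HuberT()).fit()         # robust (Huber)
tukey = sm.RLM(y, design, M=sm.robust.norms.TukeyBiweight()).fit()  # robust (Tukey)
print(ols.params, huber.params, tukey.params)
```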

Findings

Our results revealed that countries/regions with long ranking traditions are highly competitive. Findings also showed that some countries/regions, such as Germany, the United Kingdom, Canada, and Italy, had a larger number of universities in the top positions than predicted by the regression model. In contrast, for Japan, a country whose social and economic performance is high, the number of ARWU universities projected by the model was much larger than the actual figure. In much the same vein, countries/regions that invest heavily in education, such as Japan and Denmark, had lower than expected results.

Research limitations

Using data from only one ranking is a limitation of this study, but the methodology used could be applied to other global rankings.

Practical implications

The results provide good insights for policy makers. They indicate the existence of a relationship between research output and the number of universities per million inhabitants. Countries/regions which have historically prioritised higher education exhibited the highest values for the indicators that compose the ranking methodology; furthermore, even a minimal increase in welfare indicators could produce significant rises in the presence of their universities in the rankings.

Originality/value

This study is well defined, and its results answer important questions about the characteristics of countries/regions and their higher education systems.

Open access
Evolution of the Socio-cognitive Structure of Knowledge Management (1986–2015): An Author Co-citation Analysis

Abstract

Purpose

The evolution of the socio-cognitive structure of the field of knowledge management (KM) during the period 1986–2015 is described.

Design/methodology/approach

Records retrieved from Web of Science were submitted to author co-citation analysis (ACA) from a longitudinal perspective, using the following time slices: 1986–1996, 1997–2006, and 2007–2015. The top 10% most cited first authors in each sub-period were mapped in bibliometric networks in order to interpret the communities formed and their relationships.
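As an illustration of the underlying data structure (not the authors' exact workflow or tooling), an author co-citation network can be assembled by linking two cited first authors whenever a retrieved record cites both, weighting each edge by how often that happens:

```python
# Illustrative sketch of building an author co-citation network: edge weights
# count how many retrieved records cite both first authors. Sample data is
# hypothetical; community detection and mapping would follow from here.
from itertools import combinations
import networkx as nx

# Each retrieved record reduced to the set of first authors it cites.
records = [
    {"NONAKA I", "GRANT RM", "DAVENPORT TH"},
    {"NONAKA I", "GRANT RM"},
    {"NONAKA I", "POLANYI M"},
]

G = nx.Graph()
for cited_authors in records:
    for a, b in combinations(sorted(cited_authors), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(G.edges(data=True))   # co-citation counts between first authors
```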

Findings

KM is a homogeneous field, as indicated by the network results. Nine classical authors are identified, since they are highly co-cited in each sub-period, with Ikujiro Nonaka standing out as the most influential author in the field. The most significant communities in KM are devoted to strategic management, KM foundations, organisational learning and behaviour, and organisational theories. Major trends in the evolution of the intellectual structure of KM show a technological influence in 1986–1996, a strategic influence in 1997–2006, and finally a sociological influence in 2007–2015.

Research limitations

Describing a field from a single database can introduce biases in terms of output coverage. Likewise, conference proceedings and books were not used, and the analysis was based only on first authors. However, the results obtained can be very useful for understanding the evolution of KM research.

Practical implications

These results might be useful for managers and academics to understand the evolution of the KM field and to (re)define research activities and organisational projects.

Originality/value

The novelty of this paper lies in considering ACA as a bibliometric technique to study KM research. In addition, our investigation has a wider time coverage than earlier articles.

Open access
A Multi-match Approach to the Author Uncertainty Problem

Abstract

Purpose

The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases. Variations in the name spelling of individual scholars further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and the first initial of their first name. We illustrate our approach using three case studies.

Design/methodology/approach

The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net—i.e., a Web of Science (WOS) search for a given author’s last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for ‘John Doe’ would assume the form: ‘Doe, J’). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point—e.g., if ‘Doe, J’ and ‘Doe, John’ share the same author identifier, this would be sufficient for us to conclude these are one and the same individual. We find email addresses similarly adequate—e.g., if two author names which share the same surname and same first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data is not always available, however. When this occurs, other fields are used to address the author uncertainty problem.

Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if ‘Doe, John’ and ‘Doe, J’ have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; affiliations have employed two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination.

Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification. To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with the bibliometric software VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we found most efficient in this study. The script we developed implements our name disambiguation procedure in a way that significantly reduces manual effort on the user’s part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that the manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets.

Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author’s ORCID iD or email address attached to it).

The script proceeds to identify and combine all author names sharing the primary author’s surname and first initial of his or her first name who share commonalities in the WOS field on which the user was prompted to consolidate author names. This typically results in a significant reduction of the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to).
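A minimal sketch of the consolidation logic described above, independent of VantagePoint, is shown below. The sample records and field names are hypothetical, and the sketch covers only the single-field merging step, not the full iterative procedure with manual verification.

```python
# Sketch of single-field name consolidation (not the VantagePoint script):
# author-name variants that share a value in the chosen match field are merged.
from collections import defaultdict

def consolidate(records, match_field):
    """records: list of dicts with 'author' and match_field keys (value may be None)."""
    value_to_names = defaultdict(set)
    for rec in records:
        if rec.get(match_field):
            value_to_names[rec[match_field]].add(rec["author"])

    # Union-find: merge name variants that co-occur with the same field value.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for names in value_to_names.values():
        names = sorted(names)
        for other in names[1:]:
            parent[find(other)] = find(names[0])

    groups = defaultdict(set)
    for rec in records:
        groups[find(rec["author"])].add(rec["author"])
    return list(groups.values())

# Hypothetical records sharing the surname 'Doe' and first initial 'J'.
records = [
    {"author": "Doe, John", "email": "jdoe@uni.edu"},
    {"author": "Doe, J",    "email": "jdoe@uni.edu"},
    {"author": "Doe, Jane", "email": "jane.doe@lab.org"},
]
print(consolidate(records, "email"))   # merges 'Doe, John' and 'Doe, J'
```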

Research limitations

Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user’s part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.

Practical implications

The procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.

Originality/value

Once again, the procedure we advance has the ability to save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting with more recent approaches, harnessing the benefits of both.

Findings

Our study applies the name disambiguation procedure we advance to three case studies. The ideal match fields are not the same for each case study, and we find that match field effectiveness is in large part a function of field coverage. The case studies also differ in original dataset size, in the timeframe analyzed, and in the subject areas in which the authors publish. Our procedure is most effective when applied to our third case study, both in terms of list reduction and in achieving 100% retention of true positives. We attribute this to excellent match field coverage, especially in the more specific match fields, as well as to a more modest/manageable number of publications.

While machine learning is considered authoritative by many, we do not see it as practical or replicable. The procedure advanced herein is practical, replicable, and relatively user friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured, or easy to work with. The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user’s part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.

Open access
Node2vec Representation for Clustering Journals and as a Possible Measure of Diversity

Abstract

Purpose

To investigate the effectiveness of using node2vec on journal citation networks to represent journals as vectors for tasks such as clustering, science mapping, and measuring journal diversity.

Design/methodology/approach

Node2vec is used in a journal citation network to generate journal vector representations.

Findings

1. Journals are clustered based on the node2vec-trained vectors to form a science map.

2. The norm of a journal’s vector can be seen as an indicator of the diversity of the journal.

3. Using node2vec-trained journal vectors to determine the Rao-Stirling diversity measure leads to a better measure of diversity than that of direct citation vectors (illustrated in the sketch below).
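A hedged sketch of such a pipeline is shown below: embed the nodes of a citation graph with node2vec, cluster the vectors for a science map, and plug cosine distances between the vectors into a Rao-Stirling-style diversity score. The node2vec package and its API, the toy graph (a stand-in, not a journal citation network), and the uniform citing shares are all assumptions for illustration.

```python
# Hedged sketch: node2vec embeddings for clustering and for a Rao-Stirling-style
# diversity score, where d_ij is one minus cosine similarity of node vectors.
import numpy as np
import networkx as nx
from node2vec import Node2Vec                 # pip install node2vec (assumed API)
from sklearn.cluster import KMeans

G = nx.les_miserables_graph()                 # stand-in for a journal citation network
model = Node2Vec(G, dimensions=32, walk_length=20, num_walks=50,
                 workers=1).fit(window=5, min_count=1)
vectors = {n: model.wv[str(n)] for n in G.nodes}

# Science map: cluster the node vectors.
labels = KMeans(n_clusters=5, n_init=10).fit_predict(np.array(list(vectors.values())))
print(np.bincount(labels))                    # cluster sizes

def rao_stirling(citing_shares, vecs):
    """citing_shares: {node: proportion of references going to that node}."""
    names = list(citing_shares)
    p = np.array([citing_shares[n] for n in names])
    V = np.array([vecs[n] for n in names])
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    d = 1.0 - V @ V.T                         # cosine distance between cited nodes
    return float(p @ d @ p)                   # sum_{i,j} p_i p_j d_ij

print(rao_stirling({n: 1 / len(vectors) for n in vectors}, vectors))
```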

Research limitations

All analyses use citation data and only focus on the journal level.

Practical implications

Node2vec-trained journal vectors embed rich information about journals, can be used to form a science map, and may generate better values for journal diversity measures.

Originality/value

The effectiveness of node2vec in scientometric analysis is tested. Possible indicators for measuring journal diversity are presented.

Open access
Normalizing Book Citations in Google Scholar: A Hybrid Cited-side Citing-side Method

Abstract

Purpose

To design and test a method for normalizing book citations in Google Scholar.

Design/methodology/approach

A hybrid citing-side, cited-side normalization method was developed and this was tested on a sample of 285 research monographs. The results were analyzed and conclusions drawn.
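As a generic illustration of the two directions being combined (assumptions for exposition, not necessarily the paper's exact formulas), cited-side normalization compares a book's citation count to its field's average, while citing-side normalization weights each incoming citation by the length of the citing document's reference list:

```python
# Generic illustration of the two normalization directions (assumed formulas,
# not necessarily the paper's method).
def cited_side(citations, field_average):
    """Citations of the book relative to the average book in its field."""
    return citations / field_average

def citing_side(reference_counts, expected_references=30.0):
    """Each citation counts 1/r_i, rescaled by an expected reference-list length."""
    return sum(expected_references / r for r in reference_counts if r > 0)

# Hypothetical book: 12 citing documents with these reference-list lengths,
# in a field where books receive 8 citations on average.
refs = [15, 40, 22, 35, 60, 18, 25, 30, 45, 20, 28, 33]
print(cited_side(len(refs), field_average=8.0))
print(citing_side(refs))
```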

Findings

The method was technically feasible but required extensive manual intervention because of the poor quality of the Google Scholar data.

Research limitations

The sample of books was limited, and all were from one discipline: business and management. Also, the method has only been tested on Google Scholar; it would be useful to test it on Web of Science or Scopus.

Practical implications

Google Scholar is a poor source of data, although it does cover a much wider range of citation sources than other databases.

Originality/value

This is the first method developed specifically for normalizing book citations, which so far could not be normalized.

Open access