Browse


Open access

Robert A. Beeler, Teresa W. Haynes and Kyle Murphy

Abstract

Let G be a graph with vertex set V and a distribution of pebbles on the vertices of V. A pebbling move consists of removing two pebbles from a vertex and placing one pebble on a neighboring vertex, and a rubbling move consists of removing a pebble from each of two neighbors of a vertex v and placing a pebble on v. We seek an initial placement of a minimum total number of pebbles on the vertices in V, so that no vertex receives more than one pebble and, for any given vertex v ∈ V, it is possible, by a sequence of pebbling and rubbling moves, to move at least one pebble to v. This minimum number of pebbles is the 1-restricted optimal rubbling number. We determine the 1-restricted optimal rubbling numbers for Cartesian products. We also present bounds on the 1-restricted optimal rubbling number.
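For readers unfamiliar with the two move types defined above, the following minimal Python sketch (ours, not the authors') encodes a pebbling move and a rubbling move on a graph given as an adjacency list; the 4-cycle example, vertex names, and pebble distribution are illustrative assumptions, not taken from the paper.

```python
# Sketch of the two move types on a graph stored as an adjacency list.
# "pebbles" maps each vertex to its current pebble count.

def pebbling_move(pebbles, adjacency, u, w):
    """Remove two pebbles from u and place one on its neighbor w."""
    assert w in adjacency[u] and pebbles[u] >= 2
    pebbles[u] -= 2
    pebbles[w] += 1

def rubbling_move(pebbles, adjacency, u, w, v):
    """Remove one pebble from each of two distinct neighbors u, w of v
    and place one pebble on v."""
    assert u != w and v in adjacency[u] and v in adjacency[w]
    assert pebbles[u] >= 1 and pebbles[w] >= 1
    pebbles[u] -= 1
    pebbles[w] -= 1
    pebbles[v] += 1

# Example on the 4-cycle C4: an initial distribution with at most one pebble
# per vertex, from which a rubbling move delivers a pebble to vertex 'b'.
adjacency = {'a': ['b', 'd'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['a', 'c']}
pebbles = {'a': 1, 'b': 0, 'c': 1, 'd': 0}
rubbling_move(pebbles, adjacency, 'a', 'c', 'b')
print(pebbles)  # {'a': 0, 'b': 1, 'c': 0, 'd': 0}
```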

Open access

Yuval: Studies in Jewish Music

Open access

M. Javaid, M. Abbas, Jia-Bao Liu, W. C. Teh and Jinde Cao

Abstract

A topological property or index of a network is a numerical quantity that characterises the whole structure of the underlying network. It is used to predict certain changes in the biological, chemical, and physical activities of the network. The 4-layered probabilistic neural networks are more general than the 3-layered probabilistic neural networks. Javaid and Cao [Neural Comput. and Applic., DOI 10.1007/s00521-017-2972-1] and Liu et al. [Journal of Artificial Intelligence and Soft Computing Research, 8(2018), 225-266] studied certain degree- and distance-based topological indices (TIs) of the 3-layered probabilistic neural networks. In this paper, we extend this study to the 4-layered probabilistic neural networks and compute certain degree-based TIs. In the end, a comparison between all the computed indices is included, and it is also proved that the TIs of the 4-layered probabilistic neural networks are strictly greater than those of the 3-layered probabilistic neural networks.
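The abstract does not name the specific degree-based indices it computes, so as a point of reference the sketch below (ours, not the authors') evaluates two common degree-based topological indices, the first and second Zagreb indices, on a small hypothetical graph given as an adjacency list.

```python
# First Zagreb index: M1(G) = sum over vertices v of deg(v)^2.
# Second Zagreb index: M2(G) = sum over edges uv of deg(u) * deg(v).

def first_zagreb(adjacency):
    return sum(len(neighbors) ** 2 for neighbors in adjacency.values())

def second_zagreb(adjacency):
    total = 0
    for u, neighbors in adjacency.items():
        for v in neighbors:
            if u < v:  # count each undirected edge once
                total += len(adjacency[u]) * len(adjacency[v])
    return total

# Example: the star K_{1,3} (hub 'a' joined to 'b', 'c', 'd').
adjacency = {'a': ['b', 'c', 'd'], 'b': ['a'], 'c': ['a'], 'd': ['a']}
print(first_zagreb(adjacency))   # 9 + 1 + 1 + 1 = 12
print(second_zagreb(adjacency))  # 3*1 + 3*1 + 3*1 = 9
```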

Open access

Ryotaro Kamimura

Abstract

The present paper aims to propose a new type of information-theoretic method to maximize mutual information between inputs and outputs. The importance of mutual information in neural networks is well known, but the actual implementation of mutual information maximization has been quite difficult to undertake. In addition, mutual information has not been used extensively in neural networks, so its applicability has remained very limited. To overcome this shortcoming, we present mutual information maximization here in a greatly simplified manner by supposing that mutual information is already maximized before learning, or at least at the beginning of learning. The method was applied to three data sets (the crab, wholesale, and human resources data sets) and examined in terms of generalization performance and connection weights. The results showed that, by disentangling connection weights, maximizing mutual information made it possible to explicitly interpret the relations between inputs and outputs.
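The abstract does not give the method's implementation, but as a reference point the following sketch (ours, not the author's) shows how the quantity being maximized, the mutual information between discretized inputs and outputs, can be estimated from a joint count table; the toy counts are made up.

```python
import numpy as np

def mutual_information(joint_counts):
    """I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), in nats."""
    p_xy = joint_counts / joint_counts.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over inputs
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over outputs
    mask = p_xy > 0                         # skip empty cells to avoid log(0)
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

# Example: inputs and outputs are perfectly aligned, so I(X;Y) = log(2) nats.
counts = np.array([[50, 0],
                   [0, 50]])
print(mutual_information(counts))  # ~0.693
```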