Search Results

1 - 10 of 78 items for "model comparison"

Editorial: Cognitive Architectures, Model Comparison and AGI

Cognitive Science and Artificial Intelligence share the compatible goals of understanding and possibly generating broadly intelligent behavior. To determine whether progress is being made, it is essential to be able to evaluate the behavior of complex computational models, especially those built on general cognitive architectures, and to compare it to benchmarks of intelligent behavior such as human performance. Significant methodological challenges arise, however, when approaches used to compare model and human performance on tightly controlled laboratory tasks are extended to complex tasks involving more open-ended behavior. This paper describes a model comparison challenge built around a dynamic control task, Dynamic Stocks and Flows. We present and discuss distinct approaches to evaluating performance and comparing models. Lessons drawn from this challenge are discussed in light of the broader goal of using cognitive architectures to achieve Artificial General Intelligence.

Exploration for Understanding in Cognitive Modeling

The cognitive modeling and artificial general intelligence research communities may reap a greater scientific return on research investments, in the form of an improved understanding of architectures and models, if more emphasis is placed on systematic sensitivity and necessity analyses during model development, evaluation, and comparison. We demonstrate this methodological prescription with two of the models submitted to the Dynamic Stocks and Flows (DSF) Model Comparison Challenge, exploring the complex interactions among architectural mechanisms, knowledge-level strategy variants, and task conditions. To cope with the computational demands of these analyses, we use a predictive analytics approach similar to regression trees, combined with parallelization on high-performance computing clusters, to enable large-scale, simultaneous search and exploration.
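
As a rough illustration of this style of analysis (not the authors' actual pipeline), the sketch below screens a toy model's parameters with a regression tree to see which ones drive the output; the simulate() function and its parameter names are invented stand-ins.

```python
# Hypothetical sketch: screen which parameters drive a model's output
# with a regression tree, in the spirit of the predictive-analytics
# approach described above. simulate() is a stand-in for a model run.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def simulate(decay, noise, strategy):
    # Stand-in for an architecture/model run; returns a scalar score.
    return np.exp(-decay) + 0.5 * noise * strategy + rng.normal(0, 0.01)

# Sample the joint parameter space and run the model at each point.
X = rng.uniform(0, 1, size=(5000, 3))
y = np.array([simulate(*row) for row in X])

# Fit a shallow tree and read off which parameters matter most.
tree = DecisionTreeRegressor(max_depth=4).fit(X, y)
for name, imp in zip(["decay", "noise", "strategy"], tree.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```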

Structural Differences and Asymmetric Shocks between the Czech Economy and the Euro Area 12

The goal of this paper is to determine whether there are asymmetric shocks and structural differences between the Czech economy and the Euro Area 12. A New Keynesian DSGE model of a small open economy is used for this purpose. Asymmetric shocks and structural differences are examined in two ways. First, I examine the asymmetry of shocks and the sources of structural differences using model comparison based on the Bayes factor. I do not find substantial evidence in favor of heterogeneity in household preferences. I find slight differences in price and wage formation and a substantial difference in interest rate smoothing. The main differences, however, are in the timing, persistence, and volatility of structural shocks. I also investigate the impact of structural differences, and of differences in the persistence and volatility of structural shocks, on the behavior of both economies using impulse-response analysis. I find no substantial differences in the responses of the main variables to preference shocks. On the other hand, domestic technology shocks are much more volatile and persistent, so the responses of domestic variables to technology shocks are much larger and display a more gradual, hump-shaped pattern than the responses of foreign variables. Responses of foreign variables to labour supply shocks are also much more gradual and sluggish than those of domestic variables. As regards monetary shocks, foreign inflation barely responds to a foreign monetary shock, while domestic inflation responds to a domestic monetary shock with a substantial decline followed by a gradual recovery. Responses of foreign variables to cost-push shocks are larger and more volatile than those of domestic variables.
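
For readers unfamiliar with the comparison criterion, the following minimal sketch shows how a Bayes factor is formed from two log marginal likelihoods. The numbers are purely illustrative and not taken from the paper.

```python
# Illustrative only: compare two model variants via the Bayes factor,
# computed from log marginal likelihoods (as reported by a Bayesian
# estimation run). The numbers below are made up.
import math

log_ml_symmetric = -1312.4   # hypothetical log marginal likelihood, model A
log_ml_asymmetric = -1305.9  # hypothetical log marginal likelihood, model B

log_bf = log_ml_asymmetric - log_ml_symmetric
print(f"log Bayes factor (B vs A): {log_bf:.1f}")
print(f"Bayes factor: {math.exp(log_bf):.1f}")
# On the Kass-Raftery scale, 2*log(BF) > 10 counts as very strong evidence.
print(f"2*log(BF) = {2 * log_bf:.1f}")
```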

Abstract

Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges, as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems by using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the ‘speciated non-dominated sorting genetic algorithm’ for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine datasets and models drawn from four different theories. We find that the evolutionary algorithms generate high-quality models, adapted to provide a good fit to all available data.
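
The sketch below is not the proposed speciated algorithm, only a minimal mutation-based Pareto search in the same spirit: one parameter vector is scored against several datasets at once, and the non-dominated set is carried forward each generation. The toy model and datasets are invented.

```python
# Minimal multi-objective evolutionary sketch (mutation only, no
# speciation): keep the Pareto front of per-dataset errors.
import numpy as np

rng = np.random.default_rng(1)

def model(params, x):
    a, b = params
    return a * np.exp(-b * x)  # toy stand-in for a categorisation model

# Three hypothetical datasets the same model must fit simultaneously.
xs = [np.linspace(0, 5, 20)] * 3
ys = [0.9 * np.exp(-0.5 * x) + rng.normal(0, 0.03, x.size) for x in xs]

def errors(params):
    return np.array([np.mean((model(params, x) - y) ** 2)
                     for x, y in zip(xs, ys)])

def dominates(e1, e2):
    return np.all(e1 <= e2) and np.any(e1 < e2)

pop = rng.uniform(0, 2, size=(60, 2))
for _ in range(100):
    errs = [errors(p) for p in pop]
    # Keep non-dominated individuals, then refill with mutated copies.
    front = np.array([p for i, p in enumerate(pop)
                      if not any(dominates(errs[j], errs[i])
                                 for j in range(len(pop)) if j != i)])
    n_new = 60 - len(front)
    parents = front[rng.integers(0, len(front), n_new)]
    children = parents + rng.normal(0, 0.05, size=(n_new, 2))
    pop = np.vstack([front, children])

print("sample Pareto-front parameters:", np.round(front[:5], 2))
```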

Keep it simple: A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims to compare computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings in the training data. In-depth analysis of the dataset prior to model development led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence-matching techniques is proposed instead. This measurement, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
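
As a rough illustration of the general approach (the paper's actual rank-order measurement differs), the sketch below scores model-human agreement with a Spearman rank correlation and checks it against a permutation null; all data are invented.

```python
# Hedged sketch: rank-order fit statistic plus a permutation test, on
# invented trajectories. The paper's own measurement is more elaborate.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

human = rng.normal(size=50).cumsum()        # stand-in control trajectory
model_run = human + rng.normal(0, 0.5, 50)  # a model tracking it loosely

observed, _ = spearmanr(human, model_run)

# Null distribution: agreement expected when the model output is
# randomly re-ordered, destroying any real correspondence.
null = np.array([spearmanr(human, rng.permutation(model_run))[0]
                 for _ in range(2000)])
p_value = np.mean(null >= observed)
print(f"observed rho = {observed:.2f}, permutation p = {p_value:.4f}")
```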

Abstract

Knowledge of hydrological processes and water balance elements is important for climate-adaptive water management as well as for introducing mitigation measures aimed at improving surface water quality. Mathematical models have the potential to estimate changes in hydrological processes under changing climatic or land use conditions. Such models, however, need careful calibration and testing before being applied in decision making. The aim of this study was to compare the capability of five different hydrological models to predict the runoff and the soil water balance elements of a small catchment in Norway. The models were harmonised and calibrated against the same data set. Overall, good agreement between the measured and simulated runoff was obtained for the different models when integrating the results over weekly or longer periods. Model simulations indicate that forest appears to be very important for the water balance in the catchment, and that there is a lack of information on land-use-specific water balance elements. We conclude that the joint application of hydrological models serves as a good basis for ensemble modelling of water transport processes within a catchment and can highlight the uncertainty of model forecasts.
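
A minimal sketch of this kind of comparison, assuming invented runoff data and model names: aggregate daily simulated and measured runoff to weekly totals, then score each model with the Nash-Sutcliffe efficiency.

```python
# Illustrative sketch: compare several models' simulated runoff against
# measurements after weekly aggregation. Data and names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = pd.date_range("2020-01-01", periods=365, freq="D")
observed = pd.Series(np.abs(rng.normal(2.0, 1.0, 365)), index=days)

simulations = {
    "model_A": observed * rng.normal(1.0, 0.3, 365),
    "model_B": observed + rng.normal(0.0, 0.8, 365),
}

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean.
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs_weekly = observed.resample("W").sum()
for name, sim in simulations.items():
    print(f"{name}: weekly NSE = {nse(obs_weekly, sim.resample('W').sum()):.2f}")
```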

Habit Formation, Price Indexation and Wage Indexation in the DSGE Model: Specification, Estimation and Model Fit

This paper presents several specifications of a closed-economy DSGE model with nominal rigidities in order to determine which provides the best fit to the data. The goal is to find out whether some characteristics widely used in New Keynesian DSGE models, such as habit formation in consumption, price indexation and wage indexation, provide a better fit to the macroeconomic data. Model specifications are estimated on data for the US economy and the Euro Area 12 economy using Bayesian techniques, in particular the Metropolis-Hastings algorithm (via the Dynare toolbox for Matlab). The data-fit measure is the Bayes factor, calculated from the marginal likelihoods obtained from Bayesian estimation. Results suggest that including habit formation in consumption significantly improves the empirical fit of the model, whereas including partial price indexation or partial wage indexation does not. Variants with full price indexation and full wage indexation fit the data worst.
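
The following toy random-walk Metropolis-Hastings sampler illustrates the estimation machinery the abstract refers to (Dynare wraps the same idea for full DSGE models); the single-parameter "posterior" here is a stand-in, not a DSGE model.

```python
# Toy random-walk Metropolis-Hastings sampler over one parameter.
import numpy as np

rng = np.random.default_rng(4)

def log_posterior(theta):
    # Stand-in: standard normal log-density (e.g. a transformed
    # habit-formation coefficient in a real application).
    return -0.5 * theta ** 2

theta = 0.0
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0, 0.5)  # random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

draws = np.array(samples[5000:])  # discard burn-in
print(f"posterior mean ~ {draws.mean():.2f}, sd ~ {draws.std():.2f}")
```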

Probabilities of discrepancy between minima of cross-validation, Vapnik bounds and true risks

Two known approaches to complexity selection are considered: n-fold cross-validation and structural risk minimization. In either approach, a discrepancy is possible between the indicated optimal complexity (the minimum of a generalization error estimate or bound) and the genuine minimum of the unknown true risk. In this paper, the problem is posed in a novel quantitative way. We state and prove theorems demonstrating how one can calculate pessimistic probabilities of discrepancy between these minima for given experimental conditions. The probabilities are calculated in terms of all relevant constants: the sample size, the number of cross-validation folds, the capacity of the set of approximating functions, and the bounds on this set. We report experiments carried out to validate the results.
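
On synthetic data the discrepancy in question is easy to exhibit: the complexity minimizing the cross-validated error need not minimize the true risk. The sketch below uses polynomial degree as the complexity index and a dense noise-free grid as a proxy for the true risk; everything is simulated.

```python
# Synthetic illustration: the degree with minimal CV error may differ
# from the degree with minimal (proxy) true risk.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)

def f(x):
    return np.sin(2 * x)  # ground-truth regression function

x = rng.uniform(-2, 2, 40)
y = f(x) + rng.normal(0, 0.3, 40)
x_big = np.linspace(-2, 2, 2000)  # dense grid as a true-risk proxy

for degree in range(1, 9):
    pipe = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    cv_mse = -cross_val_score(pipe, x[:, None], y, cv=5,
                              scoring="neg_mean_squared_error").mean()
    pipe.fit(x[:, None], y)
    true_mse = mean_squared_error(f(x_big), pipe.predict(x_big[:, None]))
    print(f"degree {degree}: CV MSE {cv_mse:.3f}, true MSE {true_mse:.3f}")
```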

Abstract

Non-spatial and spatial analyses were carried out to study the effects on genetic parameters of ten-year height growth data across two series of 10 large second-generation full-sib progeny trials of western hemlock [Tsuga heterophylla (Raf.) Sarg.] in British Columbia. To account for different and complex patterns of environmental heterogeneity, spatial single-trial analyses were conducted using an individual-tree mixed model with a two-dimensional smoothing surface based on a tensor product of B-spline bases. The spatial single-trial analysis showed sizeably lower Deviance Information Criterion values than the non-spatial analysis in all cases. Fitting a surface also consistently reduced the posterior mean and standard deviation of the error variance, produced no appreciable change in the additive variance, and increased individual narrow-sense heritability and the accuracy of breeding values. The tensor product of cubic B-spline basis functions within a mixed model framework provides a useful new alternative for modelling different and complex patterns of spatial variability within sites in forest genetic trials. Individual narrow-sense heritability estimates from the spatial single-trial analyses were low (average of 0.06), but typical of this species. Estimates of dominance variance relative to additive variance were unstable across sites (ratios from 0.00 to 1.59). The implications of these estimates are discussed with respect to the western hemlock genetic improvement program in British Columbia.
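
As a hedged sketch of the smoothing ingredient (not the authors' full mixed model), the code below builds a tensor product of cubic B-spline bases over invented tree row/column coordinates; the resulting design matrix could enter a mixed model as a smooth spatial surface.

```python
# Sketch: tensor-product cubic B-spline basis over 2-D trial coordinates.
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_knots=8, degree=3):
    # Clamped knot vector over a slightly padded data range.
    inner = np.linspace(x.min() - 1e-9, x.max() + 1e-9, n_knots)
    t = np.concatenate([[inner[0]] * degree, inner, [inner[-1]] * degree])
    n_basis = len(t) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[j] = 1.0  # evaluate the j-th basis function alone
        B[:, j] = BSpline(t, coeffs, degree)(x)
    return B

rng = np.random.default_rng(6)
rows = rng.uniform(0, 50, 500)  # hypothetical tree row positions
cols = rng.uniform(0, 30, 500)  # hypothetical tree column positions

Br = bspline_basis(rows)
Bc = bspline_basis(cols)
# Row-wise Kronecker product: one regressor per pair of basis functions.
Z = np.einsum("ij,ik->ijk", Br, Bc).reshape(len(rows), -1)
print("smooth-surface design matrix:", Z.shape)  # (500, 100)
```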