Open Access

Testing for Equivalence: A Methodology for Computational Cognitive Modelling

Journal of Artificial General Intelligence
Cognitive Architectures, Model Comparison, and AGI, Editors: Christian Lebiere, Cleotilde Gonzalez and Walter Warwick

The equivalence test (Stewart and West, 2007; Stewart, 2007) is a statistical measure for evaluating the similarity between a model and the system being modelled. It is designed to avoid over-fitting and to generate an easily interpretable summary of the quality of a model. We apply the equivalence test to two tasks: Repeated Binary Choice (Erev et al., 2010) and Dynamic Stocks and Flows (Gonzalez and Dutt, 2007). In the first case, we find a broad range of statistically equivalent models (and win a prediction competition) while identifying particular aspects of the task that are not yet adequately captured. In the second case, we re-evaluate results from the Dynamic Stocks and Flows challenge, demonstrating how our method emphasizes the breadth of coverage of a model and how it can be used for comparing different models. We argue that the explanatory power of models hinges on numerical similarity to empirical data over a broad set of measures.
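The specific formulation of the equivalence test used in the paper is given in Stewart and West (2007); as a rough illustration of the general idea behind equivalence testing, the sketch below implements a generic two one-sided tests (TOST) check on the differences between model predictions and empirical data. The function name, equivalence bound, and the large-sample normal approximation are all illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: a generic two one-sided tests (TOST) equivalence
# check, NOT the specific test of Stewart and West (2007).
import math
from statistics import mean, stdev, NormalDist

def tost_equivalence(diffs, delta, alpha=0.05):
    """Return True if the mean of `diffs` (model minus data) is
    statistically within the equivalence bounds [-delta, +delta].

    Uses a large-sample normal approximation to the t distribution,
    so it is only a sketch for reasonably large samples.
    """
    n = len(diffs)
    m = mean(diffs)
    se = stdev(diffs) / math.sqrt(n)
    # One-sided test against the lower bound: H0 is mean <= -delta,
    # rejected when the statistic is large and positive.
    t_lower = (m + delta) / se
    p_lower = 1.0 - NormalDist().cdf(t_lower)
    # One-sided test against the upper bound: H0 is mean >= +delta,
    # rejected when the statistic is large and negative.
    t_upper = (m - delta) / se
    p_upper = NormalDist().cdf(t_upper)
    # Equivalence is concluded only if BOTH one-sided nulls are rejected.
    return max(p_lower, p_upper) < alpha

# Small model-minus-data differences: consistent with equivalence.
close = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, -0.05, 0.1, -0.15, 0.05]
# Large systematic differences: not equivalent within the same bound.
far = [2.0, 2.1, 1.9, 2.05, 1.95]

print(tost_equivalence(close, delta=0.5))  # True
print(tost_equivalence(far, delta=0.5))    # False
```

The key contrast with a conventional significance test is the direction of the burden of proof: here the model is declared similar to the data only when both one-sided nulls of "too different" are rejected, which is what discourages over-fitting to a single favourable measure.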

eISSN: 1946-0163
Language: English
Publication timeframe: 2 times per year
Journal Subjects: Computer Sciences, Artificial Intelligence