Artificial models of cognition serve different purposes, and their use determines how they should be evaluated. Some models, however, do not represent any particular biological agent, and there is controversy over how they should be assessed. At the same time, modelers do evaluate such models as better or worse, and there is a widespread tendency to call for publicly available standards of replicability and benchmarking for them. In this paper, I argue that the proper evaluation of a model does not depend on whether it targets a real biological agent; instead, the standards of evaluation depend on the use of the model rather than on the reality of its target. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.
In this paper, I argue that even if the Hard Problem of Content, as identified by Hutto and Myin, is important, it has already been solved within naturalized semantics, and satisfactory solutions to the problem do not rely merely on the notion of information as covariance. I point out that Hutto and Myin apply double standards to linguistic and mental representation, which leads to a peculiar inconsistency. Were they to apply the same standards to basic and linguistic minds, they would either have to embrace representationalism or turn to semantic nihilism, which, as I argue, is an unstable and unattractive position. Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it forcefully and elegantly.
Cognitive science is an interdisciplinary conglomerate of various research fields and disciplines, which increases the risk that cognitive theories will become fragmented. While most previous work has focused on theoretical integration as a remedy, some kinds of integration may turn out to be monstrous, resulting in superficially lumped but unrelated bodies of knowledge. In this paper, I distinguish theoretical integration from theoretical unification and propose an analysis of the dimensions along which theories can be unified. Moreover, I analyze two research strategies that are supposed to lead to unification in terms of the mechanistic account of explanation. Finally, I argue that theoretical unification is not an absolute requirement from the mechanistic perspective, and that strategies aiming at unification may be premature in fields where there are multiple conflicting explanatory models.
The paper proposes an empirical method to investigate linguistic prescriptions as inherent corrective behaviors. The behaviors in question may, but need not, be supported by explicit knowledge of rules. It is possible to gain insight into them, for example, by extracting information about corrections from the revision histories of texts (or by analyzing speech corpora in which speakers correct themselves or one another). One easily available source of such information is the revision history of Wikipedia. As is shown, the most frequent short corrections are limited to linguistic errors such as typos (and to editorial conventions adopted on Wikipedia). By perusing an automatically generated revision corpus, one gains empirical insight into the prescriptive nature of language. At the same time, the prescriptions offered are not reducible to descriptions of the most frequent linguistic use.
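For illustration, here is a minimal sketch of the kind of extraction such a method relies on: it retrieves recent revisions of a Wikipedia page through the public MediaWiki API and flags short word-level replacements between consecutive revisions as candidate corrections. The page title, span limits, and length thresholds are illustrative assumptions, not parameters reported in the paper.

```python
import difflib
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_revisions(title, limit=10):
    """Fetch the wikitext of the most recent revisions of a page,
    returned in chronological order (oldest first)."""
    params = {
        "action": "query",
        "format": "json",
        "prop": "revisions",
        "titles": title,
        "rvprop": "content|comment|timestamp",
        "rvslots": "main",
        "rvlimit": limit,
    }
    # A descriptive User-Agent is polite per Wikipedia's API guidelines.
    resp = requests.get(API, params=params,
                        headers={"User-Agent": "revision-corpus-sketch/0.1"})
    page = next(iter(resp.json()["query"]["pages"].values()))
    # The API returns revisions newest first; reverse for chronology.
    return [r["slots"]["main"]["*"] for r in reversed(page["revisions"])]

def short_corrections(old, new, max_span=3, max_len=20):
    """Yield small word-level replacements between two revisions --
    candidate corrections such as fixed typos, as opposed to content edits."""
    old_w, new_w = old.split(), new.split()
    matcher = difflib.SequenceMatcher(None, old_w, new_w)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace" and i2 - i1 <= max_span and j2 - j1 <= max_span:
            before = " ".join(old_w[i1:i2])
            after = " ".join(new_w[j1:j2])
            if len(before) <= max_len and len(after) <= max_len:
                yield before, after

# Compare each revision with its successor and print candidate corrections.
revs = fetch_revisions("Philosophy of mind")
for old, new in zip(revs, revs[1:]):
    for before, after in short_corrections(old, new):
        print(f"{before!r} -> {after!r}")
```

In practice, candidates produced this way would still need filtering to separate genuinely linguistic corrections (typos, grammar) from content edits and Wikipedia-specific editorial conventions, in line with the distinction drawn above between prescriptions and mere descriptions of frequent use.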