Student-generated video creation assessments are an innovative and emerging form of assessment in higher education. Academic staff may be understandably reluctant to transform assessment practices without robust evidence of the benefits, a clear rationale for doing so, and guidance on how to do so successfully. A systematic approach to searching the literature was used to identify relevant resources, generating key documents, authors and internet sources that were thematically analysed. This comprehensive critical synthesis of the literature is presented here under the headings of findings from the literature, relevance of digital capabilities, understanding the influence of local context and resources, and pedagogical considerations. Student-generated video creation for assessment is shown to have several benefits, notably in supporting the development of digital and communication skills relevant to today’s world and in enhancing learning. As an emerging innovation within assessment, it requires an intentionally planned and supported change management process involving both students and staff. For video creation to be used successfully in assessment, it must be aligned with learning outcomes, context and resources, and the choice of video format must be matched to the desired skills development and to relevance beyond graduation. Video creation for assessment is likely to grow in popularity, and it is hoped that the evidence of benefits, rationale and guidance presented here will support this transformation. Further research considering video creation for assessment by individuals rather than in collaborative groups, and establishing academic rigour and equivalence, would be beneficial.
Vagueness is a linguistic phenomenon as well as a property of physical objects. Fuzzy set theory is a mathematical model of vagueness that has been used to define vague models of computation. The most prominent model of vague computation is the fuzzy Turing machine. This conceptual computing device gives an idea of what computing under vagueness means; nevertheless, it is not the most natural model. Based on the properties of this and other models of vague computing, an attempt is made to formulate a basis for a philosophy of a theory of fuzzy computation.
The conventionality of simultaneity thesis, as established by Reichenbach and Grünbaum, is related to the partial freedom in the definition of simultaneity in an inertial reference frame. An apparently altogether different issue is that of the conventionality of spatial geometry, or more generally the conventionality of chronogeometry when the conventionality of the uniformity of time is also taken into account. Here we consider Einstein’s version of the conventionality of (chrono)geometry, according to which we might adopt a different spatial geometry and a particular definition of the equality of successive time intervals. The choice of a particular chronogeometry would not imply any change in a theory, since its “physical part” can be changed in such a way that, regarding experimental results, the theory remains the same. We make the case that the conventionality of simultaneity is closely related to Einstein’s conventionality of chronogeometry, as another conventional element leading to it.
The use of idealized scientific theories in explanations of empirical facts and regularities is problematic in two ways: they don’t satisfy the condition that the explanans is true, and they may fail to entail the explanandum. An attempt to deal with the latter problem was proposed by Hempel and Popper with their notion of approximate explanation. A more systematic perspective on idealized explanations was developed with the method of idealization and concretization by the Poznan school (Nowak, Krajewski) in the 1970s. If idealizational laws are treated as counterfactual conditionals, they can be true or truthlike, and the concretizations of such laws may increase their degree of truthlikeness. By replacing Hempel’s truth requirement with the condition that an explanatory theory is truthlike one can distinguish several important types of approximate, corrective, and contrastive explanations by idealized theories. The conclusions have important consequences for the debates about scientific realism and anti-realism.
According to an influential epistemological tradition, science explains phenomena on the basis of laws, but the last two decades have witnessed a neo-mechanistic movement that emphasizes the fundamental role of mechanism-based explanations in science, which have the virtue of opening the “black box” of correlations and of providing a genuine understanding of the phenomena. Mechanisms enrich the empirical content of a theory by introducing a new set of variables, helping us to make causal inferences that are not possible on the basis of macro-level correlations alone (due to well-known problems regarding the underdetermination of causation by correlation). However, the appeal to mechanisms also has a methodological price. Mechanisms are vulnerable to interference effects; they face underdetermination problems, because the available evidence often allows different interpretations of the underlying structure of a correlation; they are strongly context-dependent, and their individuation as causal patterns can be controversial; they present specific testability problems; finally, mechanism-based extrapolations can be misleading due to the local character of mechanisms. At any rate, the study of mechanisms is an indispensable part of the human sciences, and the problems that they raise can be controlled by quantitative and qualitative methods and by an epistemologically informed exercise of critical thinking.
Cécilia Bognon-Küss, Bohang Chen and Charles T. Wolfe
Vitalism was long viewed as the most grotesque view in biological theory: appeals to a mysterious life-force, Romantic insistence on the autonomy of life, or worse, a metaphysics of an entirely living universe. In the early twentieth century, attempts were made to present a revised, lighter version that was not weighted down by revisionary metaphysics: “organicism”. And mainstream philosophers of science criticized Driesch and Bergson’s “neovitalism” as a too-strong ontological commitment to the existence of certain entities or “forces”, over and above the system of causal relations studied by mechanistic science, rejecting the weaker form, organicism, as well. But there has been some significant scholarly “push-back” against this orthodox attitude, notably pointing to the 18th-century Montpellier vitalists to show that there are different historical forms of vitalism, including how they relate to mainstream scientific practice (Wolfe and Normandin, eds. 2013). Additionally, some trends in recent biology that run counter to genetic reductionism and the informational model of the gene present themselves as organicist (Gilbert and Sarkar 2000, Moreno and Mossio 2015). Here, we examine some cases of vitalism in the twentieth century and today, not just as a historical form but as a significant metaphysical and scientific model. We argue for vitalism’s conceptual originality without either reducing it to mainstream models of science or presenting it as an alternate model of science, by focusing on historical forms of vitalism, logical empiricist critiques thereof and the impact of synthetic biology on current (re-)theorizing of vitalism.
The assumption that natural selection alone is sufficient to explain not only which traits get fixed in a population/species, but also how they develop, has been questioned since Darwin’s times, and increasingly in the last decades. Alternative theories, linked to genetic and phenotypic processes, or to the theory of complex systems, have been proposed to explain the rise of the phenotypic variety upon which natural selection acts. In this article, we illustrate the current state of the issue and we propose a logical space based on phenotypic robustness that allows a classification of evolutionary phenomena and can provide a framework for unifying all these accounts.
In 2000, a draft note by David Hilbert was found in his Nachlass concerning a 24th problem he had considered including in his famous problem list for his talk at the International Congress of Mathematicians in Paris in 1900. This problem concerns the simplicity of proofs. In this paper we review the (very few) traces of this problem that can be found in the work of Hilbert and his school, as well as the modern research that began on it after its publication. We stress, in particular, the mathematical nature of the problem.
This paper seeks to understand the phenomenon that humans are able to empathize with robots, and the intuition that there might be something wrong with “abusing” robots, by discussing the question of the moral standing of robots. After a review of relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made that questions distant and uncritical moral reasoning about entities’ properties. It recommends instead first trying to understand the issue by means of philosophical and artistic work that shows how ethics is always relational and historical, and that highlights the importance of language and appearance in moral reasoning and moral psychology. It is concluded that attention to relationality and to verbal and non-verbal languages of suffering is key to understanding the phenomenon under investigation, and that in robot ethics we need less certainty and more caution and patience when it comes to thinking about moral standing.