Before proceeding with a few comments regarding directions that the field might take, I should note two provisos.
While the situation I will discuss is largely global in its operations, and while it will be approached in predictable ways by the dominant national “schools”, I very much look forward to what will emerge from the work within the Nordic tradition. As has already happened with television studies, that work will be generative for those of us inhabiting other thought systems and national academies. To be clear, rather than a place or set of regional dynamics, I wish in this article to invoke the Nordic as a set of analytic perspectives.
With this Nordic “stance” in mind, I’ll turn to a few developments I think will increasingly demand attention in the coming years – but not before mentioning a second proviso: Media specialists in the early twenty-first century – like the larger human species – face an extraordinary number of interrelated challenges in which media will play a central role. These include climate change; economic and political polarisation; globalised ownership, labour, peoples, and cultures; and the decline of long-established governance structures (religion, civic society, and the nation-state). Rather than taking on these deserving topics, I’d like to focus my comments on what I see as a fundamental and even epistemic challenge to media and the social relationships bound up in media enactments and explore a few of the ensuing implications, including a reconfiguration of the subject's position, the recursive production of audiences and texts, the changing condition of narratives, and in short, what might be termed an epistemic crisis in media. This new situation, I will argue, recasts the knowledge accumulated over the years in media studies, and marks the space where the Nordic approach to the field can make an important intervention. This is admittedly the tip of the iceberg, but one has to start somewhere!
The history of modern media in the commonplace sense might be located in Mainz, with Gutenberg's mid-fifteenth century printing press. I usually hew to Raymond Williams’ notion of a medium as a technology and cultural form, or Lisa Gitelman's formulation, as a technological platform and social protocol expressed in a particular cultural-historical configuration, but for the purpose of this argument I take a smaller view. Media obviously precede the modern in the sense that I am using it, and certainly have a robust history outside the West, the domain of the “commonplace” to which I am referring.
As I’ve argued in various essays over the past few years, this is now changing (e.g., Uricchio, 2011). To put it boldly: the contours of an epistemic shift, the likes of which Western culture has not experienced for five and a half centuries, are emerging. This is a big claim, of course, and I make it fully aware of the recurrence of apocalyptic cries heralding the appearance of each new media form: the printing press, photography, telegraphy, telephony, film, and the broadcasting media each provoked fears of radical change. But apocalyptic claims aside, all of these media shared a pattern of deployment that essentially served to amplify the producing subject. All worked to maintain the coherence of the binary relationship that Heidegger saw as defining the (long) modern.
For the first time in a half millennium, something different is happening. The reassuring binary so fundamental to the modern era's representation systems is steadily giving way to a new epistemological order that is recursive in nature, that actively parses the subject and shapes access to the world in ways that are neither evident nor perhaps even knowable. Now as with the dawn of the modern era, the shift is bound up in deployments of technology – in today's case, the algorithm, which I mean in the broad sense to include the constellation of data, modelling, training sets, and the rest of the operations that undergird much of so-called new media. (I use this term as a synecdoche, echoing analysis of the term's use by Tarleton Gillespie, 2016.) Whether curating search results, book recommendations, or music selections; enabling social media platforms and shaping news feeds; recognising faces, behaviours, and potentially acting upon those determinations; constructing immersive and responsive worlds in virtual reality; and more, an algorithmic regime not only stands between the subject and the world apprehended, but actively shapes access to the world in recursive conversation with the subject and whatever biases the system carries by design. This is a radical new condition, and ruptures precisely the subject-object relationship that defines the modern and is embedded in five hundred plus years of media use. To be clear, what's “radically new” about today's social media networks, for example, is not the network, which has precedents in telegraphy and telephony; nor is it the blurring of the familiar binary of producer and consumer, which has occurred historically across media forms in various co-creation configurations (for a closer look at co-creative practices, see Cizek & Uricchio, 2019). What is new and disruptive of the modern's defining subject-object relationship is the recursive character of these new media forms. 
Rather than simply extending, connecting, or multiplying the textual utterances of the subject, this emergent class of media is responsive, learning the subject's preferences and shaping the textual world to which the subject has access according to the system's optimisation algorithms. Users don’t simply “see” one another's feeds on social media; rather, they see a constantly evolving curation of those feeds that reflects an assessment of what will best serve the system's operating imperatives together with an ongoing assessment of the subject's responses.
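The recursive loop just described can be caricatured in a few lines of code. The sketch below is purely illustrative (the item fields, scoring weights, `engagement` signal, and update rule are all hypothetical, not drawn from any actual platform); it shows only the structural point: each round of display is conditioned on the subject's reactions to the previous round, blended with the system's own imperatives.

```python
# Illustrative sketch of a recursive feed-curation loop (all names hypothetical).
# Items are scored partly by a platform imperative and partly by a per-user
# preference profile that is itself updated from the user's observed responses.

def curate(items, profile, platform_weight=0.5):
    """Rank items by a blend of platform priority and learned user preference."""
    def score(item):
        return (platform_weight * item["platform_priority"]
                + (1 - platform_weight) * profile.get(item["topic"], 0.0))
    return sorted(items, key=score, reverse=True)

def update_profile(profile, shown, engaged_topics, rate=0.3):
    """Nudge the preference profile toward topics the user engaged with."""
    for item in shown:
        target = 1.0 if item["topic"] in engaged_topics else 0.0
        old = profile.get(item["topic"], 0.0)
        profile[item["topic"]] = old + rate * (target - old)
    return profile

# Two rounds: the second feed already reflects the user's prior responses.
items = [{"topic": "sport", "platform_priority": 0.9},
         {"topic": "news", "platform_priority": 0.2},
         {"topic": "music", "platform_priority": 0.4}]
profile = {}
feed = curate(items, profile)                               # round 1: platform logic only
profile = update_profile(profile, feed, engaged_topics={"news"})
feed = curate(items, profile)                               # round 2: now tilted toward "news"
```

The point of the toy is the feedback structure, not the arithmetic: what the user "sees" in round two is already a function of what the system inferred from round one, and the weighting between platform imperative and inferred preference is invisible to the user.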
If the “radically new” in this formulation is the repositioning of agency in the guise of the recursive and “personalised”, why do media scholars and the press focus on networks, the blurring of the production-consumption binary, and the organisational logics behind Facebook and Google? I think it reflects the role of precedent; these are well-trodden categories from earlier media experience. They have a familiar salience, but habituation encourages a cultural stance in which the new can easily be retrofitted into the familiar contours of the old, enabling radical potentials to be occluded. The relations implicit in the modern (and again, Heidegger is as eloquent as efficient in mapping them) have acquired a taken-for-grantedness that infuses knowledge and informs ways of navigating the new, even at the risk of missing what is truly radical.
This broad shift from the old certainties to a new recursive order gives me great sympathy for those who witnessed the social and epistemological upheaval associated with the printing press and three-point perspective in the fifteenth century, the previous “disruption” and the one that emblematises the notion of the modern that I have been invoking and that is now at risk. Adrian Johns's wonderful work on the history of the book shows how destabilising the experience was, and how ill-prepared the cultural frame was for those “backing into” the Early Modern from the Medieval Era (Johns, 1998).
What does this mean for the future of media and their embeddedness in daily life? And what does it mean for the field, from teaching and research agendas to the concepts and methods we bring to bear?
These fast-changing and recursive relationships raise profound questions about human agency, autonomy, and privacy – will these developments enhance, diminish, or otherwise modify the subject's responsibilities and range of action? Their crafting of an individuated and responsive world challenges existing social defaults such as inherited value systems and hierarchies of authority. Will we see continuity or breaks in the social order and notions of justice, human rights, privacy, and authenticity – and with what implications? They pose questions about human singularity: how entangled is the notion of subjectivity with technology, and what is the nature of the subject's relationship with these algorithmic systems? And – perhaps most importantly – what new language will enable these questions to be asked and who will be empowered to interrogate and critically assess the new relationships enabled by this emergent order and its mediations of the world? The last point is perhaps the most important if this new condition is to be assessed critically, rather than being forced back into the categories and logics of the past.
Continuities mask ruptures, perhaps inevitably. Although I am arguing for the need to consider a new situation, I am also aware that inherited linguistic and theoretical frames necessarily impinge upon all that is radical in that newness. It is perhaps a good moment to return to the likes of Thomas Kuhn and others who have analysed the dynamics of paradigm shifts, and charted historical responses.
Over the years, media studies broadly writ have cycled through theories of the audience-text relationship, moving from pliable audiences (media effects) through active audiences (from uses and gratifications to various interventions of cultural studies) to what might be termed productive audiences (as users navigate through textual environments like games or even contribute textual elements in settings like YouTube). Given what I’ve just argued about an emerging recursive regime, it is worth speculating about an audience form that, for want of a better term, I’ll call algorithmic. Algorithmic audiences, to play this out, take two forms: as individually targeted receivers of texts; and as indirect generators of texts. In the first case, visible in the operations of companies such as Facebook and Cambridge Analytica, particular texts are vetted and directed to particular users based on assessments of the user's profile. One might say that the audience is algorithmically curated on a level approaching the individual. The textual composite the user sees – the mix of advertisements, news articles, postings from friends, and so on – bears little resemblance to what is “transmitted” (if one can even speak in these terms given the overabundance and non-sequential nature of the feed). Rather, past is prologue in a system in which accreted behaviours inform the textual selection process, and responses to that selection are incorporated in real time adjustments by a recursive system that privileges factors unknown to the user. To the extent that the system is responsive, strategic, and adaptive, it might be deemed intelligent. Alas, that intelligence serves another master, even if it exists in a constitutive relationship to its audience.
The second notion of this possible algorithmic audience goes a step further and generates texts-on-demand for the user. It is related in a way to the just-mentioned productive audience, particularly as manifest in the navigational work of interactive texts (or, in the case of my lab at MIT, documentaries; for a curated compendium of examples, see the MIT Open Documentary Lab). Karen Palmer's RIOT (2017+) is a case in point.
The curation of the audience-text relationship is already manifest in small and insidious ways, whether in the form of Google searches (curated in terms of language, geographical location, and doubtless many other markers), Facebook feeds, music prediction systems (Spotify), and literary, film, and television tastes (Amazon, Netflix) (Uricchio, 2017). This “personalisation” (to put it in the innocuous terms of the past) continues apace in the domain of more traditionally defined “texts”. In the realm of print, companies such as Narrative Science, Yseop, and Automated Insights mine and analyse data, using natural language processing to deliver it to the user (potentially on an individualised basis) as story. Although still primarily deployed in business settings and by the press for structured data sets (sports and finance), these systems are perfectly capable of reporting on sporting events with stories uniquely configured for each participant. Video-based story systems are fast catching up. This technology is developing quickly, as evidenced by the Stanford-Adobe “automated” video editing system, and in ways that require a critical stance, including advances in image and sound synthesis manifest in DeepFake.
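The underlying logic of such systems – structured data in, individually configured story out – can be suggested with a toy template generator. The sketch below is hypothetical and vastly simpler than commercial natural language generation (the match data, function names, and phrasing rules are my own invention); it shows only how one data set can yield a different story for each participant.

```python
# Toy data-to-story generator (hypothetical; real NLG systems are far richer).
# The same match record produces a story slanted toward whichever reader we target.

match = {"home": "Lions", "away": "Bears", "home_goals": 3, "away_goals": 1,
         "scorers": {"Lions": ["Kari", "Jonas", "Kari"], "Bears": ["Mia"]}}

def story_for(reader_team, data):
    """Render the match from one participant's perspective."""
    us_home = data["home"] == reader_team
    us, them = ((data["home"], data["away"]) if us_home
                else (data["away"], data["home"]))
    our_goals = data["home_goals"] if us_home else data["away_goals"]
    their_goals = data["away_goals"] if us_home else data["home_goals"]
    # Ties are treated as losses in this toy; a real system has richer framing.
    verdict = "beat" if our_goals > their_goals else "lost to"
    heroes = ", ".join(sorted(set(data["scorers"].get(us, []))))
    story = f"{us} {verdict} {them} {our_goals}-{their_goals}"
    return story + (f", with goals from {heroes}." if heroes else ".")

lions_story = story_for("Lions", match)  # framed for a Lions reader
bears_story = story_for("Bears", match)  # same data, framed for a Bears reader
```

The design point is that the “text” no longer precedes its audience: it is assembled per reader from a shared substrate of data, which is precisely the inversion of fixity discussed above.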
Responsive textual systems with their algorithmic mediation of the audience pose a new order of questions across media forms. Even if – to the algorithm – the audience appears as both a highly individuated data set and a responsive rule set for textual construction, it nevertheless yields flesh-and-blood audiences and recursively produced real-world texts. It breaks the audience-text (and subject-world) binary by introducing an intermediary element that determines both – and in so doing, marks a new dynamic.
What might this mean for narrative? Audiences and producers of texts have drawn on narrative as a cultural coding protocol for a very long time, as Vladimir Propp demonstrated. Bibles, Korans, and the other great books suggest a deep history of narrative as a bearer of cultural operating systems, encoding norms and values into textual systems that have survived centuries. It's no surprise, then, that media studies have long been interested in narrative; and that interest in turn has benefitted from well-established theories used in literature. Film and television, like the printed word, share a similar condition: the narrative is by default fixed to a physical medium, its sequence of events “hard wired” to a platform. Of course, it is possible to subvert that fixity, as demonstrated by examples such as Cortázar's Hopscotch.
These scenarios – whether algorithmically generated linear narratives for individual users or environments encouraging multiple narrative encounters – subvert narrative's historical role as a means of sharing the experience of others and as an experience to be shared in common with others. To the extent that societies store and transmit their operating systems in the form of stories across multiple generations and even millennia, what implications will responsive customisation and individualised texts have for the social cohesion of the future?
Despite Donald Trump's tiresome invocations of “fake news” when confronted with unflattering reports, legacy American print and broadcast news operations are generally accurate and empirically grounded, even if ideologically skewed. And no wonder, considering that from the sixteenth century onwards, religious, governance, and ultimately civic groups slowly accreted strategies for information verification and stabilisation (e.g., the imprimatur). But as with the previously mentioned issue of subject position and the algorithmic construction of audiences, texts, and stories, these inherited strategies are struggling to keep up with a new situation in which the provenance of online information is uncertain and correlations to events in the world are anyone's guess. Sure, fewer channels and centre-to-periphery structures are easier to assess, characterise, and of course control than a networked free-for-all. But as we move ahead into this uncharted terrain, uncertainty seems all the more certain if we factor in the efforts of malevolent social actors and the steady advances of DeepFake technologies.
This condition of uncertainty takes three basic forms: uncertainty regarding what is real (an utterance's epistemological status in the age of digital manipulation); uncertainty regarding how users might assess this status (the source and logics behind the appearance of a particular utterance in a particular context); and uncertainty regarding implication (does the system “learn” from a user's pattern of likes, rejections, forwards, and contacts? And if so, with whom and to what end is that information shared, and how might it feed back into the flow of utterances that constitute the user's world? And does anyone else know what the user knows in an increasingly personalised world?). The good old days of Stuart Hall's dominant, negotiated, and oppositional reading positions, from this perspective, have a nostalgic glow.
The epistemological status of the emergent media system, the audiences they create, and the texts and stories that they trade in, is up for grabs. And this is the reason I have privileged a discussion of this recursive condition, rather than addressing crucially important issues having to do with the growing pressures of multicultural audiences, globalisation and post-coloniality, environmental concerns, and more. To the extent that media are part of the equation, these basic epistemic issues impinge upon the representation of all these problems.
What now? Media and communication studies are approaching late-middle age. At least from an American perspective, Film Studies penetrated the university in a systematic way in the 1960s, and the shift from the study of a medium to media studies (television) occurred in the late 1970s. Communication studies are significantly older, with roots at the turn of the century and institution-building in the 1920s and 1930s.
The close and critical reading of texts has a well-established tradition in film and television studies, and it's been interesting to observe the challenges faced by game scholars working with the variable textual utterances that emerge from the interaction of rule sets, assets, and player actions. The larger field needs to be attentive to what's working and what's not, as well as to which strategies might carry over to the domain of code and what I have referred to as the algorithmic systems in today's media. These elements and their operations need a nuanced descriptive apparatus, and it will require a radically expanded literacy if they are to be apprehended (see Montfort, 2016).
Part of this expanded literacy will require a shift from the familiar terrain of representation (where much work remains to be done, particularly in parsing modes like parody, satire, and “fakes”) to include the domain of recognition, that is, to the systems and protocols that selectively capture, correlate, and “apprehend” the world. What, precisely, do they “see”? Who designs these systems, and with what remit? How are these systems trained? What are system limits, and how can the field develop a descriptive vocabulary to take this on? Joy Buolamwini's work on racial discrimination in facial recognition algorithms offers an inspiring example of what is possible. See in particular her Algorithmic Justice League.
Reception research and a better understanding of how audiences are constructed, interpellated, and ultimately how they see themselves is essential as “personalised” media texts and delivery systems pervade the culture. The deeply qualitative audience research traditions associated with media studies offer an advantage with groups who are, as Sherry Turkle characterised them, “alone together”. User-experience designers, who assess a broader range of activities than semiotic engagements alone, will also be particularly interesting to watch and learn from.
Studies of the political economy of the media are urgently needed as concentration continues to grow in the legacy sector (where cross-ownership of media forms as well as vertical integration of content producers and distribution channels has been accelerating since the 1980s). But these massive (and frankly worrisome) mergers are outpaced by the market value of Microsoft, Apple, Amazon, and Alphabet, which top the charts. Globalisation adds a curious twist, not only in the form of competitive entities such as China's Tencent and Alibaba, but in terms of control over content if producers seek access to large markets (Chinese box office revenues significantly outpaced American returns for the first time in 2018 and the trend has only been growing, as has the state's insistence on information control).
Given these concerns, at least some elements of continuity will be useful as the field develops new ways to address profound conditional changes. And it is here that the Nordic approach offers a way to transcend the divides (humanities and social science; media and communication studies; theoretical and empirical approaches) that characterise too much of the field. Of course, a journal devoted to Nordic media studies will rightly address the concerns of a region, reflect a cultural perspective, and even embody an ethos. But the demands of the new situation I’ve sketched – the paradigm shift of an algorithmically-enabled recursive regime evident in social media and emerging narrative operations – can benefit significantly from the generative and more holistic approach that distinguishes the Nordic from many of its Anglo-American and European peers. I see the