Paul O’Connor
Since ChatGPT launched in November 2022, artificial intelligence has moved to the centre of public attention. From being a speculative technology whose impacts might be felt at some distant point in the future, it has become an accelerating force with the potential to upend social, economic, and political life in the near term.
Yet despite the avalanche of words spilled on the topic over the past two years, there is still huge uncertainty over the future trajectory of A.I. and its impacts. Informed commentators remain divided on the likely pace of developments. They range from those who predict the achievement of artificial general intelligence (A.G.I.) within as little as two years (see for example this detailed analysis by Leopold Aschenbrenner: https://situational-awareness.ai/), to those who believe inherent limitations of large language models mean improvements in A.I. will slow down dramatically (like the cognitive scientist Professor Gary Marcus: https://garymarcus.substack.com/). Does A.I. represent an existential threat to humanity, with the potential to subordinate or replace us, as several industry insiders seem to believe? Or will it inaugurate a golden age of enhanced productivity, solve the environmental crisis and supercharge medical advances, as its most enthusiastic advocates promise? Will A.I. replace huge swathes of the current labour market and ultimately make most humans redundant? Or is the current A.I. boom in part the product of spin designed to inflate the share prices of the companies involved and their access to investment?
What can a cultural sociologist add to these discussions? I don’t possess the technical knowledge to make an informed judgement on the future potentialities of A.I. But the technology did not come out of a void, nor does it operate within one. In this post, I want to discuss certain aspects of contemporary cultural and intellectual life which, I believe, are likely to magnify the impact of A.I., irrespective of whether or not A.G.I. is achieved in the near future. My argument is that for decades the requirements of managerialism and the market have promoted an algorithmic type of thinking, which aims to make human behaviour – including intellectual and cultural activity – standardised, predictable and replicable. It will be simply the logical next step if most of our thinking is outsourced to machines.
Algorithmic Logic in Education
An algorithm is a methodical set of steps that can be used to make calculations, resolve problems or reach decisions. Algorithms are the building blocks of computer programming. But one of the defining features of hypermodern societies is the increasing subordination of social action to algorithmic logic – rules and procedures designed to render outcomes predictable, replicable, and conformable to bureaucratic-managerial goals and targets.
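To make the definition concrete, here is a minimal sketch in Python of an algorithm in this sense: a fixed sequence of rules that turns an input into a decision. The function and thresholds are invented for illustration, but they capture the point that the same steps, applied in the same order, always yield the same predictable outcome, regardless of who (or what) executes them.

```python
def assign_grade(score: float) -> str:
    """Map a numeric score to a grade by a fixed sequence of rules.

    Every input is processed identically: the same checks, in the
    same order, always produce the same, replicable decision.
    """
    if score >= 70:
        return "pass with merit"
    if score >= 50:
        return "pass"
    return "fail"

# Identical inputs always yield identical outcomes.
print(assign_grade(72.5))
print(assign_grade(49.0))
```

The person running the procedure contributes nothing to the result; that interchangeability is precisely what makes algorithmic logic attractive to bureaucratic management.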
Let us take as an example a field readers of this blog will be familiar with – university teaching. Not so long ago, the expectation was that not only would a university lecture be delivered by a professor who was an authority in the field, but that it would bear the distinctive imprint of their individual research, reading, and thinking about the topic. Professors wrote their own lectures, which accordingly bore the stamp of an individual intellect and the interests and perspectives of a concrete person. Such lectures, in turn, were meant to be the starting point for a student’s own reading and exploration of a topic, leading them to develop a personal, yet informed and empirically-grounded, perspective. This remained the model when I did my undergraduate studies at University College Cork, just over twenty years ago.
However, since the 1990s – and greatly accelerating in recent decades – there has been growing pressure on universities to define and measure learning outcomes. This has been driven by national and regional accreditation bodies, and in Europe by the Bologna Process (1999). Standardised assessment frameworks require each course to have a series of ‘course learning outcomes’ (CLOs), a numbered set of skills or attainments which students are to achieve. These course learning outcomes in turn should be aligned with the ‘program-level learning outcomes’ (PLOs) of the degree program, and the ‘institutional learning goals’ of the university. Each individual lecture should contribute to one or more CLOs, which should be specified at its beginning. Each assessment, likewise, should measure students’ attainment of specific CLOs. At the conclusion of a semester, instructors are required to conduct a ‘CLO assessment’, using the results of assignments to generate a quantitative measure of the degree to which students attained the course learning outcomes. In my institution, for example, it is required that at least 70% of students score a minimum of 70% for each CLO; if this is not achieved, the instructor must provide an explanation and suggest remedial measures to ensure a better performance in the next semester.
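Indeed, the 70%-of-students-at-70% rule is so mechanical that it reduces to a few lines of Python. The function name, thresholds and scores below are purely illustrative – a sketch of the logic described above, not any real institution’s system:

```python
def clo_attained(scores: list[float], pass_mark: float = 70.0,
                 required_share: float = 0.70) -> bool:
    """Check whether enough students reached the pass mark on a CLO.

    Illustrative version of the rule: at least 70% of students
    must score a minimum of 70% for the CLO to count as attained.
    """
    passing = sum(1 for s in scores if s >= pass_mark)
    return passing / len(scores) >= required_share

# Hypothetical scores for one CLO, across ten students:
clo1_scores = [85, 72, 68, 90, 74, 55, 71, 77, 80, 69]
print(clo_attained(clo1_scores))  # 7 of 10 pass, so the CLO is 'attained'
```

That an entire semester of teaching can be audited by such a trivial computation is itself a symptom of the logic I am describing.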
This is a perfect example of what I call algorithmic logic, in that it represents an effort to mechanise education – acting as if it involved a fixed number of components which can be ordered within a sequence of prescribed steps, to achieve a pre-ordained outcome with maximal efficiency, measured in quantitative terms. The human persons involved – the professor and students – are treated as interchangeable parts of an assembly line, no different from a widget or a string of computer code. Such an approach might work for the transmission of a narrowly technical set of skills; it is entirely incompatible with any type of education which aims to cultivate critical thinking, understanding of oneself and one’s society, curiosity, creativity, a pleasure in learning, or any non-technical attribute in students. Yet even humanistic and social scientific disciplines are increasingly forced to assimilate themselves to this managerial model of education. This is all driven by university administrators, who are required to justify expenditure and ‘sell’ third-level education, whether on the open market or to government and corporate funders. It helps greatly if they can claim their education programs provide definite, ideally ‘practical’ or ‘applied’, outcomes, and offer quantitative data to demonstrate that these outcomes are being achieved.
Once third level education is governed by this algorithmic logic, the individual personality and attributes of the professor become obstacles to the smooth attainment of the CLOs. For if a professor ‘wastes’ class time exploring aspects of a topic not mandated by the CLOs, offering a personal perspective, or engaging in discussion and debate outside a curricular framework which outlines, in advance, a limited number of positions, this may inhibit attainment of the CLOs. To avoid such a catastrophe, there is a tendency to replace lectures written by individual professors with textbooks, standardised course notes, assessments drawn from standardised ‘question banks’, or online lectures recorded by professors from prestigious international institutions.
If, in future, course delivery and assessment were delivered entirely by an A.I. – with the professor downgraded to a kind of invigilator to monitor and oversee students’ engagement with the material – this would simply represent a logical progression from current trends. From a managerial-bureaucratic perspective, the benefits would be significant. The university administration would have complete control of the material delivered; every minute of ‘learning time’ would be exclusively dedicated to the achievement of the CLOs; and they would be rigorously assessed, with the results fed back to the administration in real time, enabling detailed oversight and, where necessary, managerial intervention.
Of course, this would be a disaster for education. But arguably, the disaster has already happened. The deadening hand of managerialism stops the springs of life in everything it touches, leaving only a desiccated skeleton behind, and it has been busy about our universities for many decades. As a result, university teaching has been transformed into something that is ripe for substitution by artificial intelligence – in a way that would not previously have been the case, no matter how advanced the technology. For example, Foucault’s Collège de France lectures could no more have been replaced by an A.I. than they could have been confined within the framework of 4 to 5 CLOs. They are transcripts of the live unfolding of the thought of an individual personality, Michel Foucault. Even as he sketched the outline of his lectures at the beginning of a term, he could not fully know what novel directions his thought might take, what new insights he would achieve in the process of delivering them. Not every professor, obviously, is Foucault: but this was the model followed for centuries in every genuine lecture by every genuine professor, from the birth of the European university onwards. It is because the majority of universities have broken with this tradition, in little more than a generation, becoming subordinated to the algorithmic logic of managerialism, that third-level education is ripe for colonisation by A.I. Conversely, it is only to the degree that individual universities or academic departments resist this logic that they have the potential to survive being supplanted by A.I.
Algorithmic Logic Everywhere
The algorithmic logic encapsulated by learning outcome assessment has penetrated most areas of intellectual and cultural life – making them ripe for substitution by A.I.
One may briefly mention academic research, now almost entirely shaped by funding agencies which require it to be aligned with the stated priorities of the institutional, governmental and intergovernmental bodies which provide the funding. This means researchers are effectively required to determine the results of their research before it has begun, for how else can they demonstrate that the outcomes of their work will meet the goals of the funding agencies? This in turn fuels the obsession with ‘methodology’, now the most important section in any research proposal, since a detailed step-by-step specification of methods and procedures is the most effective reassurance which can be provided that a particular piece of research will achieve its pre-determined goals. Instead of open-ended, creative intellectual enquiry, pursued by concrete human beings in light of their own individual experience of being in the world, research is reduced to the increasingly mechanical following of standardised procedures to achieve preset outcomes. In other words, it obeys an algorithmic logic. Such research, and the equally mechanical ‘research papers’ thereby produced, would be better done by an A.I. than a human being – and soon will be.
Book publishing was once dominated by the individual judgement and gut instinct of editors about what made a good story, alloyed in many cases with their aesthetic preferences and a sense of cultural obligation. In recent decades, this has given way to decision-making dominated by marketing and accounts departments. Marketing departments analyse the potential sales of books based on factors such as their target demographic, market segment, the performance of comparable titles, and market trends in the popularity of different genres and themes, as well as an author’s past sales, media or online platform, and marketing potential. Accounts departments balance sales potential against production costs and marketing budgets to gauge potential profits and the risk of loss. These trends preceded digitalisation, but it has turbo-charged them, providing new streams of data from online retailers on what readers are searching for and buying, reviewing and commenting on. Editorial decision-making today occupies a diminishing space, hedged around by metrics derived from marketing or financial data, which increasingly determine what gets published.
Again, this provides the context in which the use of A.I. to help select books for publication (and indeed to write them) makes sense. Publishers are already using A.I. to analyse market trends; to evaluate manuscripts based on readability, narrative structure and predicted market appeal by comparing them to successful comparable titles; to assess the track record of literary agents; to predict a title’s performance based on pre-orders and early reader interest; and to determine pricing strategies, publishing formats and marketing approaches for books. As A.I. grows more powerful, it is safe to assume that its use in all these areas will expand. If publishing is no longer about the cultural worth or artistic value of a book, but simply about maximising sales through the prediction of market trends, this makes sense; A.I. can process a much greater quantity of data, much faster than humans, and pick out patterns a human editor would miss.
The logical next step – once the technology has developed further – is for A.I. to replace human authors. One can envisage that the same A.I. will both track market trends, based on real-time flows of data from online retailers like Amazon and review sites like Goodreads, and produce books which match those trends exactly. The A.I. will even be capable of tailoring manuscripts to the tastes of an individual reader, as revealed by their reading history on Kindle, Amazon purchases, online reviews and ratings. The A.I. will not be distracted from its task – the perfect matching of content with market trends – by such irrelevancies as artistic inspiration, a need for self-expression, a desire for creativity or originality, authorial interests and obsessions, political or moral convictions, or the random visitations of a muse. It will therefore be infinitely superior – from a marketing and management perspective – to any conceivable human author. Even Homer nods, but A.I. never sleeps.
The considerations that apply to book publishing apply also to film and TV, as well as to music production, so I won’t go into them here. But it is worth briefly considering what was formerly known as ‘journalism’. I say ‘formerly’, because here – as in academia – the disaster had already happened before A.I. appeared on the scene. In the case of journalism, the disaster was caused not only by managerialism, but by the collapse of the established business model of journalism due to the internet and social media. Advertisers abandoned print newspapers and magazines in favour of online ads, while readers stopped buying them since they could access news and commentary for free online. With their revenue drastically reduced, traditional news organisations no longer had the money to fund in-depth investigative reporting, or employ teams of journalists with specialist knowledge of different fields. Consequently, a lot of reporting today consists of little more than copying and pasting the press releases and marketing materials sent out by politicians and companies, while news organisations fill time and space with shallow and predictable commentary from a range of talking heads.
Even as old-fashioned journalism has been hollowed out, there has been an explosion in demand for news, opinion, commentary and features – not to mention marketing copy – to feed a vastly expanded online mediascape. As a result, journalism has been largely replaced by the production of content. One can even talk of a broader ‘contentification of culture’ as a result of digitalisation. This captures a situation where all forms of cultural production are reshaped by the requirements of digital platforms and algorithms.
In the first place, the traditional boundaries between news, opinion, information, entertainment, marketing, educational material, and even literary and intellectual discourse are increasingly blurred as a result of their migration online. Whereas previously different types of material appeared in different publications, with distinct formats and conventions which were rigorously policed by editors and other gatekeepers, today everything is simply online, and often there are no editors, since anyone can publish anything through a blog, YouTube channel or on social media. In the second place, the internet generates a constant demand for new written and recorded material, produced rapidly, for instant publication and dissemination, most of which will be merely glanced at or skimmed over quickly rather than deeply read – and this further reinforces the trend towards a dissolution of traditional formats and boundaries. In the third place, cultural production online is shaped by the desire for virality, and is optimised for search algorithms and engagement through clicks, likes and shares. This logic operates most powerfully on platforms like YouTube, TikTok or Spotify, where content is monetised and virality equates to profit. But even where cultural production is not for-profit, digital content is still subject to metrics such as likes, views and shares. Consequently, while the commodification of culture is nothing new, being a feature of the modern public arena from its origins, digital platforms impose a whole new set of pressures which shift cultural production in the direction of optimisation for engagement and virality. In summary, ‘content’ can be described as material produced to meet the needs of a digital marketplace, to fill virtual space, to occupy time, as opposed to an act of communication between a writer and an audience.
As such, content is inevitably standardised. Intellectual enquiry and artistic expression are subordinated to a variety of tricks designed to maximise content’s prospects for virality and accumulate likes and shares. This is the case whether the content in question is a news article or TikTok, YouTube video or Tweet, book or podcast, film or song. In other words, like third-level teaching and book publishing, content was already algorithmised before A.I. came on the scene. As a result, content provision too is ripe for automation and replacement by A.I. Indeed, this is already happening, with A.I. automating the generation of financial reports, sports results, and election updates; A.I.-generated fiction and poetry; A.I. tools for screenwriting; and A.I. providing automated fact-checking services for news agencies.
Conclusion
In the next few years it is likely that A.I. will revolutionise almost every field of cultural and intellectual life, from teaching to academic research, publishing to online content production, music to T.V. scriptwriting. At first this may be experienced as an increase in ‘productivity’, as researchers, writers and teachers use A.I. tools to get more done, faster. Rather quickly, however, we can expect to see workers in all these fields either replaced by A.I. or deprofessionalised and downgraded to auxiliary roles in the production and delivery of content.
My argument is that these consequences of A.I. are not just the result of the affordances of the technology itself, but of the increasingly algorithmic character of cultural and intellectual life. The algorithmisation of culture is the product of its marketisation, and of managerial types of organisation, where every project and activity must be explicitly aligned with formalised institutional goals, and reporting and performance measurement are used to ensure conformity with them. This institutionalisation of ‘trickster logic’ is in turn a corollary of the liminal character of the global market and public arena.
Conversely, it is possible that the explosion of A.I.-generated content will expose the absurdity of the dominant contemporary models in academia and culture, and thereby undermine them. Genuinely authentic, human and individual writing and thinking may become more valued in a world where most cultural expression is an algorithmic hallucination. Authors with unique voices and the ability to tell a gripping story, journalists offering genuine insight into public affairs, or professors able to lecture compellingly based on their own reading, research and reflection, will likely still find audiences and carve out space for themselves. Culture may become increasingly divided, with ‘high culture’ produced by humans for a niche audience set off against a ‘mass culture’ which is almost entirely automated. Likewise, people may seek to escape the mechanisation and standardisation of culture promoted by A.I. by cultivating unmediated cultural experiences, revitalising localised traditions and ways of life, and engaging in face-to-face communities.