Algorithm politics & framing
– 7 June 2022 –
The biggest and most politically influential algorithm-based tech platforms (Facebook, Google, Twitter, YouTube, etc) have implemented measures designed to reduce things such as hate speech, “fake news”, bot-driven influence campaigns, etc – measures which have drawn much criticism, since, in many cases, they seem heavy-handed and counterproductive. Some prominent critics argue that it amounts to “censorship”, citing, for example, the banning of Donald Trump from Twitter.
The framing of this debate quite often seems conceptually backward to me – with the now-prominent issues of “free speech” and “censorship” framed in ways still suggestive of legacy media gatekeeping “centres” (eg of publishing and broadcasting) – even though the debates concern mostly “decentralised” online media with algorithm gatekeeping – often featuring “censored” celebrities with access to multiple alternative platforms. Perhaps we have less of a “free speech problem”, and more of a “swamped by noise and disinformation problem”?
The viral spread of socially destructive content (the aforementioned hate speech, engineered fakery, botswarming, etc) has been blamed, by some, on the favoured business models of the big platforms. See, for example, my previous post describing Jaron Lanier’s critique of how these business models produce certain directives for the algorithms, which in turn blindly amplify the very worst aspects of humanity. Lanier argues that humanity will not survive the destructive social and political transformations being wrought – making replacement of the now-dominant B.U.M.M.E.R. business model as urgent an issue as climate breakdown.
But the owners of the big platforms love this business model because of its colossal profits – among other things. And so the main problem (according to Lanier and many others – eg see The Social Dilemma), ie the business model itself, isn’t tackled. Instead we get these stop-gap measures – bannings, suspensions, etc – which, to many people, look like clumsy, iron-booted, politically-biased “censorship”. The irony, for me, is that, in most cases, the owners of these platforms seemed motivated by libertarian notions of business. As Lanier wrote, “there was a libertarian wind blowing… We figured it would be wiser to let entrepreneurs fill in the blanks than to leave that task to government”.
‘At YouTube, I was working on YouTube recommendations. It worries me that an algorithm that I worked on is actually increasing polarization in society… The flat-Earth conspiracy theory was recommended hundreds of millions of times by the algorithm. It’s easy to think that it’s just a few stupid people who get convinced, but the algorithm is getting smarter and smarter every day.’
Guillaume Chaslot, The Social Dilemma
A.I. algorithms & McLuhan’s tetrad
Amid the recent noise about Elon Musk was his announced intention to make Twitter algorithms “open source” (ie available to public scrutiny, critique and improvement). If true, that seems pretty “huge” (the big media companies’ algorithms are apparently among the most tightly guarded secrets on the planet).
But Musk’s description of how this algorithm transparency would work sounds very much like the process of editing Wikipedia pages. I hope Elon reads Stephen Wolfram’s testimony to Congress on the subject, as Wolfram explains that what Musk proposes can effectively be considered “impossible”, due to the nature of current machine-learning systems: “For a well-optimized computation, there’s not likely to be a human-understandable narrative about how it works inside”. Wolfram proposes a different kind of solution to problems inherent in “monolithic AI” platforms: “Third Party Ranking Providers” and “Third Party Constraint Providers”.
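To make Wolfram’s proposal a little more concrete – a toy sketch only, with entirely hypothetical provider names and scoring rules, not anything Wolfram specified in code – the idea is that the platform holds the content, but each user chooses an external provider whose ranking function orders their feed:

```python
from typing import Callable, Dict, List

# Toy sketch of the "third-party ranking provider" idea: the platform
# stores the items; the user's chosen provider decides the ordering.
Item = Dict[str, int]  # e.g. {"id": ..., "timestamp": ..., "likes": ...}
RankingProvider = Callable[[List[Item]], List[Item]]

def chronological_provider(items: List[Item]) -> List[Item]:
    """A provider that simply sorts newest-first."""
    return sorted(items, key=lambda i: i["timestamp"], reverse=True)

def popularity_provider(items: List[Item]) -> List[Item]:
    """A provider that ranks by raw like-count."""
    return sorted(items, key=lambda i: i["likes"], reverse=True)

# Hypothetical registry of competing third-party providers.
PROVIDERS: Dict[str, RankingProvider] = {
    "chrono": chronological_provider,
    "popular": popularity_provider,
}

def build_feed(items: List[Item], user_choice: str) -> List[Item]:
    # The platform applies whichever provider the user selected.
    return PROVIDERS[user_choice](items)

items = [
    {"id": 1, "timestamp": 100, "likes": 5},
    {"id": 2, "timestamp": 200, "likes": 1},
]
print([i["id"] for i in build_feed(items, "chrono")])   # [2, 1]
print([i["id"] for i in build_feed(items, "popular")])  # [1, 2]
```

The point of the architecture, as I understand it, is that no single opaque algorithm decides for everyone – ranking becomes a competitive, user-selectable layer.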
Amid my own mental noise on the “negative” effects of those A.I. platforms (political chaos, nudged states of mind, etc) appears the notion that I should rethink more “positively” and “globally”. Or at least try to see different aspects of media evolution from various different perspectives – not just the ones which appear to have socially destructive trajectories.
Recall David Lynch’s scathing assessment of the new mobile media: “It’s such a sadness that you think you’ve seen a film on your fucking telephone. Get real”. But also note the simultaneously emerging “New Golden Age of TV” – an aspect of the same media evolution (eg streaming, on-demand, binge-watching). Those “critically acclaimed box-sets” – the quality and the depth of engagement seem off-the-scale.
Lynch’s own co-creation, Twin Peaks, heralded this Golden Age (David Chase, the creator of The Sopranos, cited Lynch as a major influence/inspiration). I picture – probably inaccurately – reactionary TV execs (circa 1990), when faced with the success of Twin Peaks, thinking, “Do people really go for this ambiguous, depraved weirdo liberal bullshit? I thought they liked sensible stuff like Fox News!”
Meanwhile, what happened to Fox News? As a UK resident, I don’t see it on TV – I just see clips on social media – mostly of folks like Tulsi Gabbard, Glenn Greenwald and Michael Tracey guesting on Tucker Carlson’s show. It looks like the only “mainstream” TV in the western world in which Putin gets consistently better publicity than the US president. The latest clips I saw were of Carlson presenting a Fox show called ‘The End of Men’, in which he discusses “testicle tanning”. It almost makes Twin Peaks look mundane.
Some media mutations appear visible and obvious – eg from radio to television. Others not so much – especially more recent transformations. A.I. algorithm-driven mobile apps, and their dominant business models, can be considered something “other” than “the internet” – in many ways replacing original conceptions of “the web” (ie web pages on browsers running on desktops or laptops). The medium is the message, and if media mutation follows the rate of technological advance, how do we better understand the effects, social, political and otherwise, soon enough?
‘Photoshop didn’t have 1,000 engineers on the other side of the screen, using notifications, using your friends, using AI to predict what’s gonna perfectly addict you, or hook you, or manipulate you, or allow advertisers to test 60,000 variations of text or colors to figure out what’s the perfect manipulation of your mind. This is a totally new species of power and influence.’
Tristan Harris, The Social Dilemma
For one thing, the role of “user”/”audience”, ie YOU, has mutated – no longer the customer, more the raw material forming the product – but that seems one of the more obvious changes. Do we need improved ways of seeing to apprehend the changes in question? The old constructs for apprehending “let us down” – unless we first recognise them as such (Ye Olde Metaphorical Constructions), and then perhaps re-perceive as kitsch or art. (Or, in Steve Bannon’s case, as networked political warfare – see below).
Marshall McLuhan’s tetrad seems a good starting point, as it yields a more “meta-” view of media, among other things. For an insightful guide to the tetrad and the current relevance of McLuhan (which also has a lot of fun, up-to-date examples), I recommend Paul Levinson’s ‘McLuhan in an Age of Social Media’ – a self-contained update to Paul’s ‘Digital McLuhan’.
‘The tetrad, in a nutshell, is a way of mapping the impact and interconnections of technologies across time. It asks four questions of every medium or technology: What does it enhance or amplify? What does it obsolesce or push out of the sunlight and into the shade? What does it retrieve or bring back into central attention and focus – something which itself had previously been obsolesced? And what does the new medium, when pushed to its limits, reverse or flip into?’
– Paul Levinson, ‘McLuhan in an Age of Social Media’
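As a rough aide-mémoire – and with the caveat that the example values for the smartphone are my own illustrative guesses, not McLuhan’s or Levinson’s – the four tetrad questions can be captured in a small structure:

```python
from dataclasses import dataclass

@dataclass
class Tetrad:
    """McLuhan's four questions, applied to a single medium."""
    medium: str
    enhances: str       # what does it amplify?
    obsolesces: str     # what does it push out of the sunlight?
    retrieves: str      # what previously obsolesced thing returns?
    reverses_into: str  # what does it flip into when pushed to its limits?

# Illustrative (guessed) values for the smartphone:
smartphone = Tetrad(
    medium="smartphone",
    enhances="constant connection and portable access",
    obsolesces="the desktop web, landlines",
    retrieves="oral, conversational immediacy",
    reverses_into="distraction and continuous surveillance",
)
print(smartphone.medium)
```

Nothing about the tetrad requires code, of course – the structure just makes explicit that every analysis should answer all four questions, not cherry-pick one or two.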
Steve Bannon’s project & the tetrad
We perhaps forget that as well as being White House strategist and Trump’s advisor, Bannon helped run Breitbart News and Cambridge Analytica, and has spent his time networking his “far-right” political cause with a wide array of global influencers (eg Nigel Farage and George Galloway, to give two examples in the UK). Bannon made a fortune from investing fairly early in the successful US comedy show, Seinfeld. And as The Guardian put it, “Bannon’s wealth smoothed his path from finance to media and politics”.
To speculate on Bannon’s activities in terms of McLuhan’s tetrad, I refer to what I’ve previously documented – that Bannon adopted some old “leftwing” tropes, which he sheared of specifics (making them “cooler” in McLuhan’s terms), for appealing to a younger audience. He regarded Fox’s audience at the time as “geriatric”. Bannon had studied the output of people such as Michael Moore to see what “worked”, and he’d recognised the power that could be wielded by the huge online communities of alienated young people (audiences of sites such as Breitbart).
To quote Devil’s Bargain (by Joshua Green), Bannon “envisioned a great fusion between the masses of alienated gamers, so powerful in the online world, and the right-wing outsiders drawn to Breitbart by its radical politics and fuck-you attitude”.
Using McLuhan’s four-part tetrad “probe”, we can consider what Bannon’s project Enhances, Obsolesces, Retrieves and Reverses, in political-media terms. One obvious retrieval is what I describe above – Bannon’s project retrieved old ‘left’ tropes – binary political frames/categories, such as:
- Anti-establishment vs Establishment
- Ordinary folk vs Elite
- Outsiders vs Corporate Media
- Unjustly maligned “official enemies” vs Malign US Deep State
Tied to their original, left-ideological “hot” specificity, these tropes might seem inadequate for making sense of the fast-moving fractal-like chaos and complexity of 21st century political culture. But Bannon et al, I think, realised their “cool” effectiveness when used in non-specific populist expression – the kind tweeted by Trump, for example. (Prof Levinson writes that this non-specificity in Trump’s tweets – inviting people to interact and “fill in the gaps” – makes Trump’s communication “cool” in McLuhan’s jargon).
In terms of the tetrad, this enhances the revolutionary fervour of, say, anti-establishment protests (or, alternatively, you can see it as enhancing the angry rabble-rousing of demagogues). It obsolesces the “geriatric” aspects of the conservative right that Bannon saw as an impediment. It reverses certain traditional conservative moral associations with conventional “authority”, which perhaps flips into adoration of “strong” “maverick” types. (Frank Luntz has also worked on this reversal – with his advice to conservatives to always blame everything on “Washington” “D.C.” “establishment” authority).
More tetrad speculation: much (but not all) of the above seemed, for Bannon, about getting a younger online demographic into his “alt-right” vision. It also appears to enhance sweeping generalisations and either/or thinking – due to the binary nature of the original tropes, now shorn of specifics, and presented in “cool” (but ironically demagogic) soundbites. Social media algorithms, designed to maximise engagement, appear to promote content with the type of characteristics that happen to be enhanced by Bannon’s media strategy.
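To illustrate that last point – a toy sketch, emphatically not any real platform’s code, with weights invented purely for illustration – a ranker that maximises predicted engagement will surface whatever provokes the strongest reactions, regardless of its social effects:

```python
# Toy illustration (hypothetical weights, not any platform's algorithm):
# if outrage-style reactions predict engagement most strongly, an
# engagement-maximising ranker pushes divisive content to the top.

def predicted_engagement(post):
    return (1.0 * post["likes"]
            + 3.0 * post["angry_reactions"]
            + 2.0 * post["shares"])

def rank_feed(posts):
    # Sort the feed purely by predicted engagement, highest first.
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "calm-explainer", "likes": 120, "angry_reactions": 2,  "shares": 10},
    {"id": "outrage-bait",   "likes": 40,  "angry_reactions": 90, "shares": 60},
]
print([p["id"] for p in rank_feed(posts)])  # ['outrage-bait', 'calm-explainer']
```

The less-liked but more inflammatory post wins – not because anyone programmed a preference for division, but because the optimisation target happens to reward it.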
Sub-optimal political framing
Old-school (pre-social media) framing of “censorship”, “silencing”, “suppression” (of free speech), etc, seemed prominent in anti-war critiques of media coverage of the 2003 Iraq war – with good reason. But this framing doesn’t work so well when applied to newer “horizontal” networked media structures. War coverage has mutated significantly as a result of social media. In the current conflict in Ukraine, every Russian tank, truck and troop movement gets tracked and added to open source databases, through smartphones, etc – a country of 44 million people recording every move of the invaders, with real-time data (source: Guardian article).
When the framing used in anti-war critiques of earlier (eg 2003) media gets re-applied to the current Ukraine conflict – as if the lessons of earlier media failures can be re-applied without much modification – we end up with strange distortions and misperceptions. For a case in point, take this article by Matt Taibbi. It begins with the intelligent point that Russia’s invasion of Ukraine has presented us with a terrible dilemma, but then, halfway through, Taibbi performs a curious mis-framing with “war critic”, “anti-war” and “anti-interventionist” labels, regarding media coverage:
‘Before “de-platforming” was even a term in the American consciousness, our corporate press perfected it with regard to war critics… [Matt then gives detail on exclusion of “Anti-war voices” from 2003 Iraq War coverage]
‘Since then, we’ve only widened the rhetorical no-fly zone. In a development that back then I would have bet my life on never happening, anti-interventionist voices or advocates for such people are increasingly confined to Fox if they appear on major corporate media at all.’ – Matt Taibbi, America’s Intellectual No-Fly Zone
That seems weird to me, as it implies that corporate media opposition to, and criticism of, the major war under discussion (Russia’s military intervention in Ukraine) exists practically nowhere but on Fox News! Taibbi’s comments make logical sense to me only if I assume at least one of the following:
- I’m hallucinating the wall-to-wall media opposition to a major aggressive war currently being waged.
- Matt Taibbi doesn’t see Russia invading Ukraine – he sees some other war that isn’t being opposed/criticised in current media coverage.
- The terms “war critic”, “anti-interventionist” and “anti-war” have a special, qualified meaning for Taibbi, which he doesn’t specify.
(Actually, I see the “real” problem here as something Nassim Taleb has alluded to – outdated media models tied into a sort of logical fallacy. Incidentally, Taleb, not so long ago, supported people like Glenn Greenwald, Tulsi Gabbard, et al, but has recently taken to denouncing them on Twitter – he’s repeatedly called Greenwald and Edward Snowden “frauds”, and his criticism along these lines extends to folks like Caitlin Johnstone and even Elon Musk. “Fraud” seems an over-the-top allegation to me – I prefer to think of these folks as using various outdated top-down constructs for media, “censorship”, “surveillance”, etc, while simultaneously using sometimes-valid notions of political corruption in “liberal” “establishments” – the latter to populist appeal; the former confusing the media issues. I’m trying to be charitable and diplomatic here!).
Orwell retrieved & obsolesced

Orwell quotes (often in the form of memes) currently seem a popular way to frame 21st century political and media scenarios. The above-mentioned Matt Taibbi piece uses Orwell quotes in this way, and cites Chomsky as “often” using them in a similar way. But the language here seems curiously anachronistic when you consider what Taibbi and Chomsky refer to.
Taibbi asks Chomsky about the negative responses on social media to Chomsky’s recent remarks on Russia/Ukraine. The MIT professor replies that it’s normal for “doctrinal managers” to condemn people who “don’t keep rigidly to the Party Line”. Taibbi cites Orwell’s view that “free societies suppress thought almost as effectively as the totalitarian Soviets”, and quotes Orwell saying certain inconvenient views are not “entitled to a hearing”.
I’ve looked at a lot of the negative responses to Chomsky’s Ukraine remarks – including the ones that Taibbi links to. I don’t see “doctrinal managers” or a “Party Line”. I see a lot of individuals on social media posting various (quite diverse) criticisms of Chomsky’s remarks. I see neither “suppression” of thought, nor any speech denied its “entitlement” to “a hearing”. (A typical example of the recent harsh critiques of Chomsky is this Twitter thread, which was retweeted by the journalist George Monbiot).
Orwell’s Animal Farm was published in 1945. His views on “suppression” of thought and speech reflect the media forms of the time. Similarly, much of the language Chomsky uses on political media dates back to his Manufacturing Consent (1988) – effectively pre-internet.
Nassim Taleb commented recently on the Orwell meme pictured (above right). He wrote: “exactly 100% backwards”, adding:
‘In 1984, there was no web; governments had total control of information. In 2022 things are more transparent, so we see imperfections. THE TRANSPARENCY EFFECT: the more things improve the worse they look.’
– Nassim Taleb, on Twitter
“Surveillance” frame & the new hypocognition
“Surveillance”, like “censorship”, tends to get framed in a way that implies vertical power hierarchies. And while still obviously valid for human political institutions, this framing seems inadequate for the new and increasingly dominant algorithmic, machine-learning, “decentralised” media technology. To the extent we continue to use established (and thus comfortable) but anachronistic (for media) frames, we miss the significance of newer, mutated “interventions” that operate on more “horizontally” structured media – continuous, dynamic (minimum 2-way) demographically-optimised micro-interaction data-mining/profiling and algorithmic behavioural nudging, using sophisticated machine-learning systems on mobile biometric supercomputers (aka smartphones).
I’m pretending to understand it by lining up a lot of words. The point for me is that hardly anyone seems to understand the new algorithm-media interventions and their social/political effects. It seems to be a problem of what the cognitive linguists call “hypocognition” – we just don’t have adequate semantic frames, or visual imagery, to comprehend and discuss it properly, “as a society”, yet.
To illustrate this point using the frame of “surveillance”, recall what happened when Edward Snowden’s NSA surveillance leaks hit the press in 2013. It made huge news and was widely discussed. Most people already had the cognitive frames available to understand that kind of surveillance – top-down government surveillance. Those frames have been around a long time and already seemed an established part of popular culture – Orwell’s Big Brother, brought up to date by TV shows such as ’24’, which showed governments spying on their own citizens in “real time” using incredible technology.
That much we understand. Now try to picture what Cambridge Analytica did. Try to describe it in a way you might discuss with your friends or family. A kind of surveillance, a kind of political/social “influence”, using social media – but not in the easily comprehended way of what Snowden revealed (which most of us probably already suspected and had mental imagery and verbal frames for).
Of course, the fact that we have difficulty understanding and discussing it doesn’t mean it’s going away. And I imagine its funders (and various influential others) have noted some of its “successes” in nudging politics, and various social phenomena statistically.
‘It’s the gradual, slight, imperceptible change in your own behavior and perception that is the product… That’s the only thing there is for them to make money from. Changing what you do, how you think, who you are. It’s a gradual change. It’s slight. If you can go to somebody and you say, “Give me $10 million, and I will change the world one percent in the direction you want it to change…” It’s the world! That can be incredible, and that’s worth a lot of money.’
Jaron Lanier, The Social Dilemma
(Incidentally, my bank recently notified me that they’re “improving security” by introducing “behavioural biometric” checks for online payments: “We’re not actually checking your email address; it’s how you enter it that matters, including your keystrokes. It’s known as ‘behavioural biometric’ data and it should be unique to you.”).
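For the curious, the core idea behind that keystroke “behavioural biometric” check can be sketched very roughly – the timing values and threshold below are invented for illustration, and real systems use far more sophisticated statistical models than this:

```python
from statistics import mean

# Very rough sketch of keystroke dynamics (hypothetical numbers):
# compare the timing gaps between a user's keystrokes against a
# stored profile of how that user typically types.

def keystroke_gaps(timestamps_ms):
    """Milliseconds between consecutive keystrokes."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def matches_profile(timestamps_ms, profile_mean_ms, tolerance_ms=40):
    """Crude check: is the average inter-key gap close to the profile?"""
    gaps = keystroke_gaps(timestamps_ms)
    return abs(mean(gaps) - profile_mean_ms) <= tolerance_ms

# Stored profile: this user averages ~120ms between keystrokes.
typical = [0, 118, 242, 360, 475]   # gaps ~115-124ms
pasted = [0, 30, 55, 85, 110]       # gaps ~25-30ms: far too fast

print(matches_profile(typical, 120))  # True
print(matches_profile(pasted, 120))   # False
```

The unsettling part, to me, isn’t the fraud-detection use case – it’s that the same principle (“how you type identifies you”) generalises into exactly the kind of continuous, low-level profiling discussed above.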
Written by NewsFrames
June 7, 2022 at 4:09 pm
Posted in Marshall McLuhan, Media criticism, Orwell, Social media algorithms