NEWS • FRAMES

About media framing • (written by Brian Dean)


Algorithm politics & framing

– 7 June 2022

The biggest and most politically influential algorithm-based tech platforms (Facebook, Google, Twitter, Youtube, etc) have implemented measures designed to reduce things such as hate speech, “fake news”, bot-driven influence campaigns, etc – often drawing much criticism, since these measures, in many cases, seem heavy-handed and counterproductive. Some prominent critics argue that this amounts to “censorship”, citing, for example, the banning of Donald Trump from Twitter.

The framing of this debate quite often seems conceptually backward to me – with the now-prominent issues of “free speech” and “censorship” framed in ways still suggestive of legacy media gatekeeping “centres” (eg of publishing and broadcasting) – even though the debates concern mostly “decentralised” online media with algorithm gatekeeping – often featuring “censored” celebrities with access to multiple alternative platforms. Perhaps we have less of a “free speech problem”, and more of a “swamped by noise and disinformation problem”?

The viral spread of socially destructive content (the aforementioned hate speech, engineered fakery, botswarming, etc) has been blamed, by some, on the favoured business models of the big platforms. See, for example, my previous post describing Jaron Lanier’s critique of how these business models produce certain directives for the algorithms, which in turn blindly amplify the very worst aspects of humanity. Lanier argues that humanity will not survive the destructive social and political transformations being wrought – making replacement of the now-dominant B.U.M.M.E.R. business model as urgent an issue as climate breakdown.

But the owners of the big platforms love this business model because of its colossal profits – among other things. And so the main problem (according to Lanier and many others – eg see The Social Dilemma), ie the business model itself, isn’t tackled. Instead we get these stop-gap measures – bannings, suspensions, etc – which, to many people, look like clumsy, iron-booted, politically-biased “censorship”. The irony, for me, is that, in most cases, the owners of these platforms seemed motivated by libertarian notions of business. As Lanier wrote, “there was a libertarian wind blowing… We figured it would be wiser to let entrepreneurs fill in the blanks than to leave that task to government”.

‘At YouTube, I was working on YouTube recommendations. It worries me that an algorithm that I worked on is actually increasing polarization in society… The flat-Earth conspiracy theory was recommended hundreds of millions of times by the algorithm. It’s easy to think that it’s just a few stupid people who get convinced, but the algorithm is getting smarter and smarter every day.’

Guillaume Chaslot, The Social Dilemma

A.I. algorithms & McLuhan’s tetrad

Amid the recent noise about Elon Musk was his announced intention to make Twitter algorithms “open source” (ie available to public scrutiny, critique and improvement). If true, that seems pretty “huge” (the big media companies’ algorithms are apparently among the most tightly guarded secrets on the planet).

But Musk’s description of how this algorithm transparency would work sounds very much like the process of editing Wikipedia pages. I hope Elon reads Stephen Wolfram’s testimony to Congress on the subject, as Wolfram explains that what Musk proposes can effectively be considered “impossible”, due to the nature of current machine-learning systems: “For a well-optimized computation, there’s not likely to be a human-understandable narrative about how it works inside”. Wolfram proposes a different kind of solution to problems inherent with “monolithic AI” platforms: “Third Party Ranking Providers” and “Third Party Constraint Providers”.
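To make Wolfram’s proposal slightly more concrete, here’s a minimal sketch of what a pluggable “Third Party Ranking Provider” layer might look like – all the names and the toy data below are my own invention, not Wolfram’s:

```python
from typing import Callable, Dict, List

# A "post" is just a dict of attributes here; a real platform would
# expose a much richer (and privacy-constrained) object.
Post = Dict[str, float]
RankingProvider = Callable[[List[Post]], List[Post]]

def chronological_provider(posts: List[Post]) -> List[Post]:
    """A third-party provider that ranks purely by recency."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def engagement_provider(posts: List[Post]) -> List[Post]:
    """A provider that ranks by predicted engagement (the status quo)."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

def build_feed(posts: List[Post], provider: RankingProvider) -> List[Post]:
    """The platform delegates ranking to whichever provider the user chose."""
    return provider(posts)

posts = [
    {"id": 1, "timestamp": 100, "predicted_engagement": 0.9},
    {"id": 2, "timestamp": 200, "predicted_engagement": 0.1},
]
print([p["id"] for p in build_feed(posts, chronological_provider)])  # [2, 1]
print([p["id"] for p in build_feed(posts, engagement_provider)])     # [1, 2]
```

The point of Wolfram’s scheme, as I read it, is that the user – not the platform – chooses which provider function shapes their feed.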

Amid my own mental noise on the “negative” effects of those A.I. platforms (political chaos, nudged states of mind, etc) appears the notion that I should rethink more “positively” and “globally”. Or at least try to see different aspects of media evolution from various different perspectives – not just the ones which appear to have socially destructive trajectories.

Recall David Lynch’s scathing assessment of the new mobile media: “It’s such a sadness that you think you’ve seen a film on your fucking telephone. Get real”. But also note the simultaneously emerging “New Golden Age of TV” – an aspect of the same media evolution (eg streaming, on-demand, binge-watching). Those “critically acclaimed box-sets” – the quality and the depth of engagement seem off-the-scale.

Lynch’s own co-creation, Twin Peaks, heralded this Golden Age (David Chase, the creator of The Sopranos, cited Lynch as a major influence/inspiration). I picture – probably inaccurately – reactionary TV execs (circa 1990), when faced with the success of Twin Peaks, thinking, “Do people really go for this ambiguous, depraved weirdo liberal bullshit? I thought they liked sensible stuff like Fox News!”

Meanwhile, what happened to Fox News? As a UK resident, I don’t see it on TV – I just see clips on social media – mostly of folks like Tulsi Gabbard, Glenn Greenwald and Michael Tracey guesting on Tucker Carlson’s show. It looks like the only “mainstream” TV in the western world in which Putin gets consistently better publicity than the US president. The latest clips I saw were of Carlson presenting a Fox show called ‘The End of Men’, in which he discusses “testicle tanning”. It almost makes Twin Peaks look mundane.

Some media mutations appear visible and obvious – eg from radio to television. Others not so much – especially more recent transformations. A.I. algorithm-driven mobile apps, and their dominant business models, can be considered something “other” than “the internet” – in many ways replacing original conceptions of “the web” (ie web pages on browsers running on desktops or laptops). The medium is the message, and if media mutation follows the rate of technological advance, how do we better understand the effects, social, political and otherwise, soon enough?

‘Photoshop didn’t have 1,000 engineers on the other side of the screen, using notifications, using your friends, using AI to predict what’s gonna perfectly addict you, or hook you, or manipulate you, or allow advertisers to test 60,000 variations of text or colors to figure out what’s the perfect manipulation of your mind. This is a totally new species of power and influence.’

Tristan Harris, The Social Dilemma

For one thing, the role of “user”/”audience”, ie YOU, has mutated – no longer the customer, more the raw material forming the product – but that seems one of the more obvious changes. Do we need improved ways of seeing to apprehend the changes in question? The old constructs for apprehending “let us down” – unless we first recognise them as such (Ye Olde Metaphorical Constructions), and then perhaps re-perceive as kitsch or art. (Or, in Steve Bannon’s case, as networked political warfare – see below).

Marshall McLuhan’s tetrad seems a good starting point, as it yields a more “meta-” view of media, among other things. For an insightful guide to the tetrad and the current relevance of McLuhan (which also has a lot of fun, up-to-date examples), I recommend Paul Levinson’s ‘McLuhan in an Age of Social Media’ – a self-contained update to Paul’s ‘Digital McLuhan’.

‘The tetrad, in a nutshell, is a way of mapping the impact and interconnections of technologies across time. It asks four questions of every medium or technology: What does it enhance or amplify? What does it obsolesce or push out of the sunlight and into the shade. What does it retrieve or bring back into central attention and focus – something which itself had previously been obsolesced. And what does the new medium, when pushed to its limits, reverse or flip into?’

Paul Levinson, ‘McLuhan in an Age of Social Media’
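The tetrad’s four questions can even be written down as a simple data structure. The sketch below is mine, and the example entries are my own illustrative guesses – not McLuhan’s or Levinson’s:

```python
from dataclasses import dataclass

@dataclass
class Tetrad:
    """McLuhan's four questions, applied to a single medium."""
    medium: str
    enhances: str    # what it amplifies
    obsolesces: str  # what it pushes out of the sunlight, into the shade
    retrieves: str   # what previously-obsolesced thing it brings back
    reverses: str    # what it flips into when pushed to its limits

# Illustrative entries only - my own guesses, for the sake of the sketch.
smartphone_media = Tetrad(
    medium="algorithm-driven social media",
    enhances="instant, global, many-to-many communication",
    obsolesces="legacy gatekeeping centres (editors, broadcast schedules)",
    retrieves="oral-culture traits: gossip, rumour, tribal identity",
    reverses="connection flipping into polarisation and noise",
)

for question in ("enhances", "obsolesces", "retrieves", "reverses"):
    print(f"{question}: {getattr(smartphone_media, question)}")
```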

Steve Bannon’s project & the tetrad

We perhaps forget that as well as being White House strategist and Trump’s advisor, Bannon helped run Breitbart News and Cambridge Analytica, and has spent his time networking his “far-right” political cause with a wide array of global influencers (eg Nigel Farage and George Galloway, to give two examples in the UK). Bannon made a fortune from investing fairly early in the successful US comedy show, Seinfeld. And as The Guardian put it, “Bannon’s wealth smoothed his path from finance to media and politics”.

To speculate on Bannon’s activities in terms of McLuhan’s tetrad, I refer to what I’ve previously documented – that Bannon adopted some old “leftwing” tropes, which he sheared of specifics (making them “cooler” in McLuhan’s terms), for appealing to a younger audience. He regarded Fox’s audience at the time as “geriatric”. Bannon had studied the output of people such as Michael Moore to see what “worked”, and he’d recognised the power that could be wielded by the huge online communities of alienated young people (audiences of sites such as Breitbart).

To quote Devil’s Bargain (by Joshua Green), Bannon “envisioned a great fusion between the masses of alienated gamers, so powerful in the online world, and the right-wing outsiders drawn to Breitbart by its radical politics and fuck-you attitude”.

Using McLuhan’s four-part tetrad “probe”, we can consider what Bannon’s project Enhances, Obsolesces, Retrieves and Reverses, in political-media terms. One obvious retrieval is what I describe above – Bannon’s project retrieved old ‘left’ tropes – binary political frames/categories, such as:

  • Anti-establishment vs Establishment
  • Ordinary folk vs Elite
  • Outsiders vs Corporate Media
  • Unjustly maligned “official enemies” vs Malign US Deep State

Tied to their original, left-ideological “hot” specificity, these tropes might seem inadequate for making sense of the fast-moving fractal-like chaos and complexity of 21st century political culture. But Bannon et al, I think, realised their “cool” effectiveness when used in non-specific populist expression – the kind tweeted by Trump, for example. (Prof Levinson writes that this non-specificity in Trump’s tweets – inviting people to interact and “fill in the gaps” – makes Trump’s communication “cool” in McLuhan’s jargon).

In terms of the tetrad, this enhances the revolutionary fervour of, say, anti-establishment protests (or, alternatively, you can see it as enhancing the angry rabble-rousing of demagogues). It obsolesces the “geriatric” aspects of the conservative right that Bannon saw as an impediment. It reverses certain traditional conservative moral associations with conventional “authority”, which perhaps flips into adoration of “strong” “maverick” types. (Frank Luntz has also worked on this reversal – with his advice to conservatives to always blame everything on “Washington” “D.C.” “establishment” authority).

More tetrad speculation: much (but not all) of the above seemed, for Bannon, about getting a younger online demographic into his “alt-right” vision. It also appears to enhance sweeping generalisations and either/or thinking – due to the binary nature of the original tropes, now shorn of specifics, and presented in “cool” (but ironically demagogic) soundbites. Social media algorithms, designed to maximise engagement, appear to promote content with the type of characteristics that happen to be enhanced by Bannon’s media strategy.
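A toy model (entirely my own – no platform’s actual code looks like this) shows the mechanism: a ranker that maximises predicted engagement will surface whatever correlates with engagement, and outrage-laden, binary-framed posts tend to correlate:

```python
# Toy engagement-maximising ranker. The "outrage" scores and the crude
# scoring rule are invented for illustration, not taken from any platform.
posts = [
    {"text": "nuanced 2,000-word policy analysis", "outrage": 0.1, "shares": 3},
    {"text": "THEM vs US: the elite hate you", "outrage": 0.9, "shares": 90},
    {"text": "cat photo", "outrage": 0.0, "shares": 40},
]

def predicted_engagement(post):
    # Crude stand-in for a learned model: past shares plus an outrage bonus.
    return post["shares"] * (1 + post["outrage"])

feed = sorted(posts, key=predicted_engagement, reverse=True)
for p in feed:
    print(p["text"])
```

The ranker never “decides” to promote demagoguery; the binary, outrage-framed post simply wins on the metric it was told to maximise.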

Sub-optimal political framing

Old-school (pre-social media) framing of “censorship”, “silencing”, “suppression” (of free speech), etc, seemed prominent in anti-war critiques of media coverage of the 2003 Iraq war – with good reason. But this framing doesn’t work so well when applied to newer “horizontal” networked media structures. War coverage has mutated significantly as a result of social media. In the current conflict in Ukraine, every Russian tank, truck and troop movement gets tracked and added to open source databases, through smartphones, etc – a country of 44 million people recording every move of the invaders, with real-time data (source: Guardian article).

When the framing used in anti-war critiques of earlier (eg 2003) media gets re-applied to the current Ukraine conflict – as if the lessons of earlier media failures can be re-applied without much modification – we end up with strange distortions and misperceptions. For a case in point, take this article by Matt Taibbi. It begins with the intelligent point that Russia’s invasion of Ukraine has presented us with a terrible dilemma, but then, halfway through, Taibbi performs a curious mis-framing with “war critic”, “anti-war” and “anti-interventionist” labels, regarding media coverage:-

‘Before “de-platforming” was even a term in the American consciousness, our corporate press perfected it with regard to war critics… [Matt then gives detail on exclusion of “Anti-war voices” from 2003 Iraq War coverage]

‘Since then, we’ve only widened the rhetorical no-fly zone. In a development that back then I would have bet my life on never happening, anti-interventionist voices or advocates for such people are increasingly confined to Fox if they appear on major corporate media at all.’

Matt Taibbi, America’s Intellectual No-Fly Zone

That seems weird to me, as it implies that corporate media opposition to, and criticism of, the major war under discussion (Russia’s military intervention in Ukraine) exists practically nowhere but on Fox News! Taibbi’s comments make logical sense to me only if I assume at least one of the following:

  1. I’m hallucinating the wall-to-wall media opposition to a major aggressive war currently being waged.
  2. Matt Taibbi doesn’t see Russia invading Ukraine – he sees some other war that isn’t being opposed/criticised in current media coverage.
  3. The terms “war critic”, “anti-interventionist” and “anti-war” have a special, qualified meaning for Taibbi, which he doesn’t specify.

(Actually, I see the “real” problem here as something Nassim Taleb has alluded to – outdated media models tied into a sort of logical fallacy. Incidentally, Taleb, not so long ago, supported people like Glenn Greenwald, Tulsi Gabbard, et al, but has recently taken to denouncing them on Twitter – he’s repeatedly called Greenwald and Edward Snowden “frauds”, and his criticism along these lines extends to folks like Caitlin Johnstone and even Elon Musk. “Fraud” seems an over-the-top allegation to me – I prefer to think of these folks as using various outdated top-down constructs for media, “censorship”, “surveillance”, etc, while simultaneously using sometimes-valid notions of political corruption in “liberal” “establishments” – the latter lending populist appeal; the former confusing the media issues. I’m trying to be charitable and diplomatic here!).

Orwell retrieved & obsolesced

Orwell quotes (often in the form of memes) currently seem a popular way to frame 21st century political and media scenarios. The above-mentioned Matt Taibbi piece uses Orwell quotes in this way, and cites Chomsky as “often” using them in a similar way. But the language here seems curiously anachronistic when you consider what Taibbi and Chomsky refer to.

Taibbi asks Chomsky about the negative responses on social media to Chomsky’s recent remarks on Russia/Ukraine. The MIT professor replies that it’s normal for “doctrinal managers” to condemn people who “don’t keep rigidly to the Party Line”. Taibbi cites Orwell’s view that “free societies suppress thought almost as effectively as the totalitarian Soviets”, and quotes Orwell saying certain inconvenient views are not “entitled to a hearing”.

I’ve looked at a lot of the negative responses to Chomsky’s Ukraine remarks – including the ones that Taibbi links to. I don’t see “doctrinal managers” or a “Party Line”. I see a lot of individuals on social media posting various (quite diverse) criticisms of Chomsky’s remarks. I see neither “suppression” of thought, nor any speech denied its “entitlement” to “a hearing”. (A typical example of the recent harsh critiques of Chomsky is this Twitter thread, which was retweeted by the journalist George Monbiot).

Orwell’s Animal Farm was published in 1945. His views on “suppression” of thought and speech reflect the media forms of the time. Similarly, much of the language Chomsky uses on political media dates back to his Manufacturing Consent (1988) – effectively pre-internet.

Nassim Taleb commented recently on the Orwell meme pictured (above right). He wrote: “exactly 100% backwards”, adding:

‘In 1984, there was no web; governments had total control of information. In 2022 things are more transparent, so we see imperfections. THE TRANSPARENCY EFFECT: the more things improve the worse they look.’

– Nassim Taleb, on Twitter

“Surveillance” frame & the new hypocognition

“Surveillance”, like “censorship”, tends to get framed in a way that implies vertical power hierarchies. And while still obviously valid for human political institutions, this framing seems inadequate for the new and increasingly dominant algorithmic, machine-learning, “decentralised” media technology. To the extent we continue to use established (and thus comfortable) but anachronistic (for media) frames, we miss the significance of newer, mutated “interventions” that operate on more “horizontally” structured media – continuous, dynamic (minimum 2-way) demographically-optimised micro-interaction data-mining/profiling and algorithmic behavioural nudging, using sophisticated machine-learning systems on mobile biometric supercomputers (aka smartphones).

I’m pretending to understand it by lining up a lot of words. The point for me is that hardly anyone seems to understand the new algorithm-media interventions and their social/political effects. It seems to be a problem of what the cognitive linguists call “hypocognition” – we just don’t have adequate semantic frames, or visual imagery, to comprehend and discuss it properly, “as a society”, yet.

To illustrate this point using the frame of “surveillance”, recall what happened when Edward Snowden’s NSA surveillance leaks hit the press in 2013. It made huge news and was widely discussed. Most people already had the cognitive frames available to understand that kind of surveillance – top-down government surveillance. Those frames have been around a long time and already seemed an established part of popular culture – Orwell’s Big Brother, brought up to date by TV shows such as ’24’, which showed a government spying on its own citizens in “real time” using incredible technology.

That much we understand. Now try to picture what Cambridge Analytica did. Try to describe it in a way you might discuss with your friends or family. A kind of surveillance, a kind of political/social “influence”, using social media – but not in the easily comprehended way of what Snowden revealed (which most of us probably already suspected and had mental imagery and verbal frames for).

Of course, the fact that we have difficulty understanding and discussing it doesn’t mean it’s going away. And I imagine its funders (and various influential others) have noted some of its “successes” in statistically nudging politics and various social phenomena.

‘It’s the gradual, slight, imperceptible change in your own behavior and perception that is the product… That’s the only thing there is for them to make money from. Changing what you do, how you think, who you are. It’s a gradual change. It’s slight. If you can go to somebody and you say, “Give me $10 million, and I will change the world one percent in the direction you want it to change…” It’s the world! That can be incredible, and that’s worth a lot of money.’

Jaron Lanier, The Social Dilemma


(Incidentally, my bank recently notified me that they’re “improving security” by introducing “behavioural biometric” checks for online payments: “We’re not actually checking your email address; it’s how you enter it that matters, including your keystrokes. It’s known as ‘behavioural biometric’ data and it should be unique to you.”).
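The idea behind such keystroke checks can be sketched in a few lines. This is purely illustrative (real systems extract many more timing features and use a trained classifier, not a fixed threshold; all numbers below are invented):

```python
def inter_key_intervals(timestamps_ms):
    """Milliseconds between successive keystrokes."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def distance(profile, sample):
    """Mean absolute difference between two interval patterns."""
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

# Enrolled rhythm for typing an email address (keystroke timestamps in ms).
enrolled = inter_key_intervals([0, 120, 215, 325, 525, 615, 720])

attempt_a = [118, 99, 108, 195, 92, 101]  # similar rhythm -> likely the owner
attempt_b = [300, 250, 280, 60, 310, 40]  # same text typed, but an alien rhythm

THRESHOLD_MS = 30  # arbitrary cut-off for this sketch
for name, sample in [("A", attempt_a), ("B", attempt_b)]:
    verdict = "accepted" if distance(enrolled, sample) < THRESHOLD_MS else "challenged"
    print(f"attempt {name}: {verdict}")
```

The email address is identical in both attempts; only the timing pattern differs – which is exactly the bank’s point about checking how you enter it, not what you enter.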

Written by NewsFrames

June 7, 2022 at 4:09 pm

Framing vs “Orwellian language”

– 24 April 2014

Orwell’s fiction ‘memes’ – Newspeak, doublethink, Big Brother, etc – still sound resonant to me, but his famous essay, Politics and the English Language, seems outdated (and wrong) in important respects. Of course, you can’t blame Orwell for not knowing what cognitive science and neuroscience would discover after his death – most living people still have no idea how those fields have changed our understanding of language and the mind over the last 35 years.

Orwell’s essay is premised on a view of reason that comes from the Enlightenment. It’s a widespread view that’s “reflexively” still promoted not just by the “liberal-left” media and commentariat, but also by the Chomskyan “radical left”. And, as George Lakoff and others have been at pains to point out, it’s a view of reason which now seems totally wrong – given what the cognitive/neuroscience findings tell us.

I’ll return to Orwell in a moment, but, first: Why does the Enlightenment view of reason seem wrong? Well, it’s an 18th-Century outlook which takes reason to be conscious, universal, logical, literal (ie fits the world directly), unemotional, disembodied and interest-based (Enlightenment rationalism assumes that everyone is rational and that rationality serves self-interest). It follows from this viewpoint that you only need to tell people the facts in clear language, and they’ll reason to the right, true conclusions. As Lakoff puts it, “The cognitive and brain sciences have shown this is false… it’s false in every single detail.”

From the discoveries promoted by the cog/neuro-scientists, we find that reason is mostly unconscious (around 98% unconscious, apparently). We don’t know our own system of concepts. Much of what we regard as conceptual inference (or “logic”) arises, unconsciously, from basic metaphors whose source is the sensory and motor activities of our nervous systems. Also, rationality requires emotion, which itself can be unconscious. We always think using frames, and every word is understood in relation to a cognitive frame. The neural basis of reasoning is not literal or logical computation; it entails frames, metaphors, narratives and images.

So, of course: we have different worldviews – not universal reason. It seems obvious, but needs repeating: We don’t all think the same – only a part of our conceptual systems can be considered universal. So-called “conservatives” and “progressives” don’t see the world in the same way; they have different forms of reason on moral issues. But they both see themselves as right, in a moral sense (with perhaps a few “amoral” exceptions).

Many on the left apparently find this difficult to comprehend. Given the Enlightenment premise of universal reason, they think everyone should be able to reason to the conclusion that conservative (or “Capitalist”) positions are immoral. All that’s needed, they believe, is to tell people the unadorned facts, the “truth”. And if people won’t reason to the correct moral conclusions after being presented with the facts, that must imply they are either immoral or “brainwashed”, hopelessly confused or “pathological”.

Few people have exclusively “conservative” or exclusively “progressive” views on everything. We all seem to have both modes of moral reasoning in our brains. (The words “conservative” and “progressive” may seem somewhat arbitrary, inadequate categories, but the distinct “moral” cognitive systems which they point to seem far from arbitrary – see Lakoff’s Moral Politics). You can think “progressively” in one subject area and “conservatively” in others, and vice-versa. And you might not be aware that you’re switching back and forth. It’s called “mutual inhibition” – where two structures in the brain neurally inhibit each other. If one is active, it will deactivate the other, and vice-versa. To give a crude example, constant activation of “conservative” framing on, say, the issue of welfare (eg the “benefit cheats” frame) will tend to inhibit the more “progressive” mode of thought in that whole subject area.

It’s a fairly common experience for me to chat with someone who seems rational, decent, friendly, etc; and then they suddenly come out with what I regard as a “shocking” rightwing view – something straight out of, say, UKIP – a view which they obviously believe in sincerely. This shouldn’t be surprising given the statistical popularity of the Daily Mail, Express, UKIP, etc, but it always conveys to me – in a ‘visceral’ way – the inadequacy of certain left/liberal assumptions about how reasonable, “ordinary” (as opposed to “elite”) people are “supposed” to think.

Orwell’s ‘Politics and the English Language’

To return to Orwell and his essay – he writes that certain misuses of language promote a nefarious status quo in politics. For example, he argues that “pretentious diction” is used to “dignify the sordid process of international politics”. He says that “meaningless words” such as “democracy” and “patriotic” are often used in a consciously dishonest way with “intent to deceive”. The business of political writing is one of “swindles and perversions”; it is the “debasement of language”. For Orwell, it is “broadly true that political writing is bad writing”, and political language “has to consist largely of euphemism, question-begging and sheer cloudy vagueness”.

Much of this still seems valid (nearly 70 years after Orwell wrote it) – and some of the examples of official gibberish that Orwell cites are as amusing as what you might see in today’s political/bureaucratic gobbledygook. But it’s the cure that Orwell proposes which embodies the Enlightenment fallacy (and which Lakoff, for example, has described as “naive and dangerous”):

What is above all needed is to let the meaning choose the word, and not the other way around. In prose, the worst thing one can do with words is surrender to them… Probably it is better to put off using words as long as possible and get one’s meaning as clear as one can through pictures and sensations. Afterward one can choose — not simply accept — the phrases that will best cover the meaning… (George Orwell, Politics and the English Language)

Orwell then provides a list of simple rules to help in removing the “humbug and vagueness” from political language (such as: “Never use a long word where a short one will do”). He states that “one ought to recognize that the present political chaos is connected with the decay of language”, and that, “If you simplify your English, you are freed from the worst follies of [political] orthodoxy”.

What are the fallacies here? Well, most obvious is the notion that political propaganda can be resisted with language which simply fits the right words to true meanings, without concealing or dressing anything up. Anyone who has studied effective political propaganda will tell you that it already does precisely that. The most convincing, persuasive propaganda, rhetoric or political speech seems to be that which strikes the reader or listener as plain-speaking “truth”. In many ways, the right seems to have mastered this art.

The fallacy comes from the Enlightenment notion that because people are rational, you only need to tell them the “plain facts” for them to reason to the truth. We know, however, that facts are interpreted according to frames. Every fact, and every word, is understood in relation to a frame. To borrow an example from my previous article, you can state that “corporations are job creators”, and you can state that “corporations are unaccountable private tyrannies”. Two different frames, neither of which consists of “debasement” of language or factual deception. Rather, it’s a question of activating different worldviews.

Orwell’s notion of letting “the meaning choose the word” seems to imply that our “meanings” exist independently of the semantic grids and cognitive-conceptual systems in our brains. Again, this comes from the Enlightenment fallacy – that there’s a disembodied reason or “meaning” which is literal (or “truth”), and which we can fit the right words to, in order to convey literal truth. It seems more accurate to say that we need conceptual frames to make sense of anything – or, as the cognitive scientists tell us, we require frames, prototypes, metaphors, narratives and emotions to provide “meaning”.

A lot of political/media rhetoric does seem to conform to Orwell’s diagnosis, and its language can probably be clarified by his rules and recommendations. But it’s not this “vague”, “pretentious”, “deceptive” type of rhetoric or propaganda that worries me most. What worries me is the rightwing message-machine’s success (if we believe the polls/surveys) in communicating “plain truths” to millions by framing issues in ways which resonate with people’s fears and insecurities – and which tend to activate the more “intolerant”, or “strict-authoritarian” aspects of cognition, en masse.

Written by NewsFrames

April 24, 2014 at 8:40 am