N E W S • F R A M E S • • • • •

About media framing • (written by Brian Dean)

Archive for the ‘Social media algorithms’ Category

New book on framing (2023)

My new 210-page book on media framing has been published by Futura Press. It’s a greatly extended and updated version of my earlier (2014) 68-page eBooklet. It’s available in paperback and eBook editions.

I’m thrilled with the quality of the print job – everything about the book has been upgraded, including the cover and internal illustrations.

Lazy Person’s Guide to Framing

It has new sections covering social media framing, algorithm-boosted political populism and more, and it has been generally overhauled and improved. The new edition has a far more coherent and satisfying feel for me than the original.

I’ve just written quite a long post describing and promoting the book to followers of my other blog (RAW semantics), so I won’t reinvent the wheel here. That post is titled RAW maps, models… frames?, and you may find it of interest.

Amazon’s “look inside” feature for the book provides a lot of preview details, from the cover and review/blurb page, to the table of contents, the new 2023 foreword and excerpts from the first few chapters. Please check it out if you’re curious…

US Amazon link for Lazy Person’s Guide to Framing
UK Amazon link for Lazy Person’s Guide to Framing
(Available in paperback and eBook)

Written by NewsFrames

February 16, 2023 at 11:31 am

Algorithm politics & framing

– 7 June 2022

The biggest and most politically influential algorithm-based tech platforms (Facebook, Google, Twitter, YouTube, etc) have implemented measures designed to reduce things such as hate speech, “fake news” and bot-driven influence campaigns – often drawing much criticism, since these measures, in many cases, seem heavy-handed and counterproductive. Some prominent critics argue that this amounts to “censorship”, citing, for example, the banning of Donald Trump from Twitter.

The framing of this debate quite often seems conceptually backward to me – with the now-prominent issues of “free speech” and “censorship” framed in ways still suggestive of legacy media gatekeeping “centres” (eg of publishing and broadcasting) – even though the debates concern mostly “decentralised” online media with algorithm gatekeeping – often featuring “censored” celebrities with access to multiple alternative platforms. Perhaps we have less of a “free speech problem”, and more of a “swamped by noise and disinformation problem”?

The viral spread of socially destructive content (the aforementioned hate speech, engineered fakery, botswarming, etc) has been blamed, by some, on the favoured business models of the big platforms. See, for example, my previous post describing Jaron Lanier’s critique of how these business models produce certain directives for the algorithms, which in turn blindly amplify the very worst aspects of humanity. Lanier argues that humanity will not survive the destructive social and political transformations being wrought – making replacement of the now-dominant BUMMER business model as urgent an issue as climate breakdown.

But the owners of the big platforms love this business model because of its colossal profits – among other things. And so the main problem (according to Lanier and many others – eg see The Social Dilemma), ie the business model itself, isn’t tackled. Instead we get these stop-gap measures – bannings, suspensions, etc – which, to many people, look like clumsy, iron-booted, politically-biased “censorship”. The irony, for me, is that, in most cases, the owners of these platforms seemed motivated by libertarian notions of business. As Lanier wrote, “there was a libertarian wind blowing… We figured it would be wiser to let entrepreneurs fill in the blanks than to leave that task to government”.

‘At YouTube, I was working on YouTube recommendations. It worries me that an algorithm that I worked on is actually increasing polarization in society… The flat-Earth conspiracy theory was recommended hundreds of millions of times by the algorithm. It’s easy to think that it’s just a few stupid people who get convinced, but the algorithm is getting smarter and smarter every day.’

Guillaume Chaslot, The Social Dilemma

A.I. algorithms & McLuhan’s tetrad

Amid the recent noise about Elon Musk was his announced intention to make Twitter algorithms “open source” (ie available to public scrutiny, critique and improvement). If true, that seems pretty “huge” (the big media companies’ algorithms are apparently among the most tightly guarded secrets on the planet).

But Musk’s description of how this algorithm transparency would work sounds very much like the process of editing Wikipedia pages. I hope Elon reads Stephen Wolfram’s testimony to Congress on the subject, as Wolfram explains that what Musk proposes can effectively be considered “impossible”, due to the nature of current machine-learning systems: “For a well-optimized computation, there’s not likely to be a human-understandable narrative about how it works inside”. Wolfram proposes a different kind of solution to problems inherent with “monolithic AI” platforms: “Third Party Ranking Providers” and “Third Party Constraint Providers”.
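
Wolfram’s “Third Party Ranking Provider” idea seems simple enough to sketch in code. Here’s a minimal Python illustration – my own sketch of the concept, not anything from Wolfram’s testimony, and all the names are hypothetical:

  from typing import Protocol

  class RankingProvider(Protocol):
      def rank(self, posts: list) -> list: ...

  class ChronologicalProvider:
      """Newest first; no engagement optimisation at all."""
      def rank(self, posts):
          return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

  class EngagementProvider:
      """Mimics the platform default: rank by predicted engagement."""
      def rank(self, posts):
          return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

  def build_feed(posts, provider: RankingProvider):
      # The platform delegates ranking to whichever provider the user
      # has chosen - the monolithic AI no longer decides alone.
      return provider.rank(posts)

The point, as I read Wolfram, is competition and user choice at the ranking layer, rather than attempts to produce a human-understandable narrative of one opaque model’s innards – which he suggests is effectively impossible.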

Amid my own mental noise on the “negative” effects of those A.I. platforms (political chaos, nudged states of mind, etc) appears the notion that I should rethink more “positively” and “globally”. Or at least try to see different aspects of media evolution from various different perspectives – not just the ones which appear to have socially destructive trajectories.

Recall David Lynch’s scathing assessment of the new mobile media: “It’s such a sadness that you think you’ve seen a film on your fucking telephone. Get real”. But also note the simultaneously emerging “New Golden Age of TV” – an aspect of the same media evolution (eg streaming, on-demand, binge-watching). Those “critically acclaimed box-sets” – the quality and the depth of engagement seem off the scale.

Lynch’s own co-creation, Twin Peaks, heralded this Golden Age (David Chase, the creator of The Sopranos, cited Lynch as a major influence/inspiration). I picture – probably inaccurately – reactionary TV execs (circa 1990), when faced with the success of Twin Peaks, thinking, “Do people really go for this ambiguous, depraved weirdo liberal bullshit? I thought they liked sensible stuff like Fox News!”

Meanwhile, what happened to Fox News? As a UK resident, I don’t see it on TV – I just see clips on social media – mostly of folks like Tulsi Gabbard, Glenn Greenwald and Michael Tracey guesting on Tucker Carlson’s show. It looks like the only “mainstream” TV in the western world in which Putin gets consistently better publicity than the US president. The latest clips I saw were of Carlson presenting a Fox show called ‘The End of Men’, in which he discusses “testicle tanning”. It almost makes Twin Peaks look mundane.

Some media mutations appear visible and obvious – eg from radio to television. Others not so much – especially more recent transformations. A.I. algorithm-driven mobile apps, and their dominant business models, can be considered something “other” than “the internet” – in many ways replacing original conceptions of “the web” (ie web pages on browsers running on desktops or laptops). The medium is the message, and if media mutation follows the rate of technological advance, how do we better understand the effects, social, political and otherwise, soon enough?

‘Photoshop didn’t have 1,000 engineers on the other side of the screen, using notifications, using your friends, using AI to predict what’s gonna perfectly addict you, or hook you, or manipulate you, or allow advertisers to test 60,000 variations of text or colors to figure out what’s the perfect manipulation of your mind. This is a totally new species of power and influence.’

Tristan Harris, The Social Dilemma
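
Harris’s “60,000 variations” point refers to automated split-testing. Here’s a toy Python sketch of the kind of routine involved – an “epsilon-greedy” testing loop, with all numbers invented by me, and real systems being vastly more elaborate:

  import random

  def pick_variant(stats, epsilon=0.1):
      """Mostly exploit the best-performing variant; occasionally explore."""
      if random.random() < epsilon:
          return random.choice(list(stats))
      return max(stats, key=lambda v: stats[v]["clicks"] / max(stats[v]["shown"], 1))

  # three hypothetical headline variants tested on one audience segment
  stats = {v: {"shown": 0, "clicks": 0} for v in ("variant_a", "variant_b", "variant_c")}
  true_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}

  for _ in range(10_000):  # each iteration stands in for one user impression
      v = pick_variant(stats)
      stats[v]["shown"] += 1
      if random.random() < true_rates[v]:  # stand-in for a real user's click
          stats[v]["clicks"] += 1

  # the loop converges on the "stickiest" variant without any human
  # needing to understand why it works

No engineer sits pondering which colour or wording manipulates you best; the loop finds it automatically.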

For one thing, the role of “user”/”audience”, ie YOU, has mutated – no longer the customer, more the raw material forming the product – but that seems one of the more obvious changes. Do we need improved ways of seeing to apprehend the changes in question? The old constructs for apprehending “let us down” – unless we first recognise them as such (Ye Olde Metaphorical Constructions), and then perhaps re-perceive as kitsch or art. (Or, in Steve Bannon’s case, as networked political warfare – see below).

Marshall McLuhan’s tetrad seems a good starting point, as it yields a more “meta-” view of media, among other things. For an insightful guide to the tetrad and the current relevance of McLuhan (which also has a lot of fun, up-to-date examples), I recommend Paul Levinson’s ‘McLuhan in an Age of Social Media’ – a self-contained update to Paul’s ‘Digital McLuhan’.

‘The tetrad, in a nutshell, is a way of mapping the impact and interconnections of technologies across time. It asks four questions of every medium or technology: What does it enhance or amplify? What does it obsolesce or push out of the sunlight and into the shade? What does it retrieve or bring back into central attention and focus – something which itself had previously been obsolesced? And what does the new medium, when pushed to its limits, reverse or flip into?’

Paul Levinson, ‘McLuhan in an Age of Social Media’

Steve Bannon’s project & the tetrad

We perhaps forget that as well as being White House strategist and Trump’s advisor, Bannon helped run Breitbart News and Cambridge Analytica, and has spent his time networking his “far-right” political cause with a wide array of global influencers (eg Nigel Farage and George Galloway, to give two examples in the UK). Bannon made a fortune from investing fairly early in the successful US comedy show, Seinfeld. And as The Guardian put it, “Bannon’s wealth smoothed his path from finance to media and politics”.

To speculate on Bannon’s activities in terms of McLuhan’s tetrad, I refer to what I’ve previously documented – that Bannon adopted some old “leftwing” tropes, which he sheared of specifics (making them “cooler” in McLuhan’s terms) in order to appeal to a younger audience. He regarded Fox’s audience at the time as “geriatric”. Bannon had studied the output of people such as Michael Moore to see what “worked”, and he’d recognised the power that could be wielded by the huge online communities of alienated young people (audiences of sites such as Breitbart).

To quote Devil’s Bargain (by Joshua Green), Bannon “envisioned a great fusion between the masses of alienated gamers, so powerful in the online world, and the right-wing outsiders drawn to Breitbart by its radical politics and fuck-you attitude”.

Using McLuhan’s four-part tetrad “probe”, we can consider what Bannon’s project Enhances, Obsolesces, Retrieves and Reverses, in political-media terms. One obvious retrieval is what I describe above – Bannon’s project retrieved old ‘left’ tropes – binary political frames/categories, such as:

  • Anti-establishment vs Establishment
  • Ordinary folk vs Elite
  • Outsiders vs Corporate Media
  • Unjustly maligned “official enemies” vs Malign US Deep State

Tied to their original, left-ideological “hot” specificity, these tropes might seem inadequate for making sense of the fast-moving fractal-like chaos and complexity of 21st century political culture. But Bannon et al, I think, realised their “cool” effectiveness when used in non-specific populist expression – the kind tweeted by Trump, for example. (Prof Levinson writes that this non-specificity in Trump’s tweets – inviting people to interact and “fill in the gaps” – makes Trump’s communication “cool” in McLuhan’s jargon).

In terms of the tetrad, this enhances the revolutionary fervour of, say, anti-establishment protests (or, alternatively, you can see it as enhancing the angry rabble-rousing of demagogues). It obsolesces the “geriatric” aspects of the conservative right that Bannon saw as an impediment. It reverses certain traditional conservative moral associations with conventional “authority”, which perhaps flips into adoration of “strong” “maverick” types. (Frank Luntz has also worked on this reversal – with his advice to conservatives to always blame everything on “Washington” “D.C.” “establishment” authority).
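
For what it’s worth, the tetrad reduces neatly to a little data structure. A Python sketch, with the field values paraphrased from my own speculation above rather than from anything McLuhan or Levinson wrote:

  from dataclasses import dataclass

  @dataclass
  class Tetrad:
      medium: str
      enhances: str    # what it amplifies
      obsolesces: str  # what it pushes out of the sunlight
      retrieves: str   # what it brings back into focus
      reverses: str    # what it flips into, pushed to its limits

  bannon_project = Tetrad(
      medium="Bannon's networked populist media strategy",
      enhances="revolutionary fervour / angry rabble-rousing",
      obsolesces="the 'geriatric' conservative right",
      retrieves="old binary 'left' tropes, shorn of specifics",
      reverses="conventional 'authority', flipping into adoration of 'maverick' types",
  )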

More tetrad speculation: much (but not all) of the above seemed, for Bannon, about getting a younger online demographic into his “alt-right” vision. It also appears to enhance sweeping generalisations and either/or thinking – due to the binary nature of the original tropes, now shorn of specifics, and presented in “cool” (but ironically demagogic) soundbites. Social media algorithms, designed to maximise engagement, appear to promote content with the type of characteristics that happen to be enhanced by Bannon’s media strategy.

Sub-optimal political framing

Old-school (pre-social media) framing of “censorship”, “silencing”, “suppression” (of free speech), etc, seemed prominent in anti-war critiques of media coverage of the 2003 Iraq war – with good reason. But this framing doesn’t work so well when applied to newer “horizontal” networked media structures. War coverage has mutated significantly as a result of social media. In the current conflict in Ukraine, every Russian tank, truck and troop movement gets tracked and added to open source databases, through smartphones, etc – a country of 44 million people recording every move of the invaders, with real-time data (source: Guardian article).

When the framing used in anti-war critiques of earlier (eg 2003) media gets re-applied to the current Ukraine conflict – as if the lessons of earlier media failures can be re-applied without much modification – we end up with strange distortions and misperceptions. For a case in point, take this article by Matt Taibbi. It begins with the intelligent point that Russia’s invasion of Ukraine has presented us with a terrible dilemma, but then, halfway through, Taibbi performs a curious mis-framing with “war critic”, “anti-war” and “anti-interventionist” labels, regarding media coverage:

‘Before “de-platforming” was even a term in the American consciousness, our corporate press perfected it with regard to war critics… [Matt then gives detail on exclusion of “Anti-war voices” from 2003 Iraq War coverage]
‘Since then, we’ve only widened the rhetorical no-fly zone. In a development that back then I would have bet my life on never happening, anti-interventionist voices or advocates for such people are increasingly confined to Fox if they appear on major corporate media at all.’

Matt Taibbi, America’s Intellectual No-Fly Zone

That seems weird to me, as it implies that corporate media opposition to, and criticism of, the major war under discussion (Russia’s military intervention in Ukraine) exists practically nowhere but on Fox News! Taibbi’s comments make logical sense to me only if I assume at least one of the following:

  1. I’m hallucinating the wall-to-wall media opposition to a major aggressive war currently being waged.
  2. Matt Taibbi doesn’t see Russia invading Ukraine – he sees some other war that isn’t being opposed/criticised in current media coverage.
  3. The terms “war critic”, “anti-interventionist” and “anti-war” have a special, qualified meaning for Taibbi, which he doesn’t specify.

(Actually, I see the “real” problem here as something Nassim Taleb has alluded to – outdated media models tied into a sort of logical fallacy. Incidentally, Taleb, not so long ago, supported people like Glenn Greenwald, Tulsi Gabbard, et al, but has recently taken to denouncing them on Twitter – he’s repeatedly called Greenwald and Edward Snowden “frauds”, and his criticism along these lines extends to folks like Caitlin Johnstone and even Elon Musk. “Fraud” seems an over-the-top allegation to me – I prefer to think of these folks as using various outdated top-down constructs for media, “censorship”, “surveillance”, etc, while simultaneously using sometimes-valid notions of political corruption in “liberal” “establishments” – the latter playing to populist appeal; the former confusing the media issues. I’m trying to be charitable and diplomatic here!).

Orwell retrieved & obsolesced

Orwell quotes (often in the form of memes) currently seem a popular way to frame 21st century political and media scenarios. The above-mentioned Matt Taibbi piece uses Orwell quotes in this way, and cites Chomsky as “often” using them in a similar way. But the language here seems curiously anachronistic when you consider what Taibbi and Chomsky refer to.

Taibbi asks Chomsky about the negative responses on social media to Chomsky’s recent remarks on Russia/Ukraine. The MIT professor replies that it’s normal for “doctrinal managers” to condemn people who “don’t keep rigidly to the Party Line”. Taibbi cites Orwell’s view that “free societies suppress thought almost as effectively as the totalitarian Soviets”, and quotes Orwell saying certain inconvenient views are not “entitled to a hearing”.

I’ve looked at a lot of the negative responses to Chomsky’s Ukraine remarks – including the ones that Taibbi links to. I don’t see “doctrinal managers” or a “Party Line”. I see a lot of individuals on social media posting various (quite diverse) criticisms of Chomsky’s remarks. I see neither “suppression” of thought, nor any speech denied its “entitlement” to “a hearing”. (A typical example of the recent harsh critiques of Chomsky is this Twitter thread, which was retweeted by the journalist George Monbiot).

Orwell’s Animal Farm was published in 1945. His views on “suppression” of thought and speech reflect the media forms of the time. Similarly, much of the language Chomsky uses on political media dates back to his Manufacturing Consent (1988) – effectively pre-internet.

Nassim Taleb recently commented on one such Orwell meme. He wrote: “exactly 100% backwards”, adding:

‘In 1984, there was no web; governments had total control of information. In 2022 things are more transparent, so we see imperfections. THE TRANSPARENCY EFFECT: the more things improve the worse they look.’

– Nassim Taleb, on Twitter

“Surveillance” frame & the new hypocognition

“Surveillance”, like “censorship”, tends to get framed in a way that implies vertical power hierarchies. And while still obviously valid for human political institutions, this framing seems inadequate for the new and increasingly dominant algorithmic, machine-learning, “decentralised” media technology. To the extent we continue to use established (and thus comfortable) but anachronistic (for media) frames, we miss the significance of newer, mutated “interventions” that operate on more “horizontally” structured media – continuous, dynamic (minimum 2-way) demographically-optimised micro-interaction data-mining/profiling and algorithmic behavioural nudging, using sophisticated machine-learning systems on mobile biometric supercomputers (aka smartphones).

I’m pretending to understand it by lining up a lot of words. The point for me is that hardly anyone seems to understand the new algorithm-media interventions and their social/political effects. It seems to be a problem of what the cognitive linguists call “hypocognition” – we just don’t have adequate semantic frames, or visual imagery, to comprehend and discuss it properly, “as a society”, yet.

To illustrate this point using the frame of “surveillance”, recall what happened when Edward Snowden’s NSA surveillance leaks hit the press in 2013. It made huge news and was widely discussed. Most people already had the cognitive frames available to understand that kind of surveillance – top-down government surveillance. Those frames have been around a long time and already seemed an established part of popular culture – Orwell’s Big Brother, brought up to date by TV shows such as ’24’, which showed governments spying on their own citizens in “real time” using incredible technology.

That much we understand. Now try to picture what Cambridge Analytica did. Try to describe it in a way you might discuss with your friends or family. A kind of surveillance, a kind of political/social “influence”, using social media – but not in the easily comprehended way of what Snowden revealed (which most of us probably already suspected and had mental imagery and verbal frames for).

Of course, the fact that we have difficulty understanding and discussing it doesn’t mean it’s going away. And I imagine its funders (and various influential others) have noted some of its “successes” in statistically nudging politics and various social phenomena.

‘It’s the gradual, slight, imperceptible change in your own behavior and perception that is the product… That’s the only thing there is for them to make money from. Changing what you do, how you think, who you are. It’s a gradual change. It’s slight. If you can go to somebody and you say, “Give me $10 million, and I will change the world one percent in the direction you want it to change…” It’s the world! That can be incredible, and that’s worth a lot of money.’

Jaron Lanier, The Social Dilemma


(Incidentally, my bank recently notified me that they’re “improving security” by introducing “behavioural biometric” checks for online payments: “We’re not actually checking your email address; it’s how you enter it that matters, including your keystrokes. It’s known as ‘behavioural biometric’ data and it should be unique to you.”).
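
For anyone wondering what the very simplest form of such a keystroke check might look like, here’s a toy Python sketch – my own illustration, with the features, threshold and timings all invented, and real “behavioural biometric” systems being far more sophisticated:

  def inter_key_gaps(timestamps_ms):
      """Convert key-press timestamps (ms) into the gaps between presses."""
      return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

  def rhythm_matches(enrolled, attempt, tolerance_ms=40):
      """Compare typing rhythm by mean absolute difference of gaps."""
      e, a = inter_key_gaps(enrolled), inter_key_gaps(attempt)
      if len(e) != len(a):
          return False
      return sum(abs(x - y) for x, y in zip(e, a)) / len(e) <= tolerance_ms

  # invented timings (ms) for typing the same email address twice
  enrolled_profile = [0, 110, 230, 310, 455, 540]
  this_attempt = [0, 120, 245, 300, 470, 520]
  print(rhythm_matches(enrolled_profile, this_attempt))  # True

It’s not the address itself that gets checked, but the rhythm of typing it – which, apparently, should be unique to you.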

Written by NewsFrames

June 7, 2022 at 4:09 pm

Behaviour modification empires for rent

Of all the “what the hell is going on?” type books that I’ve read in the last few years, the one I enjoyed most was Jaron Lanier’s Ten Arguments For Deleting Your Social Media Accounts.

The title undersells this book’s importance, to my mind. After all, it’s neither self-help nor “clickbait” – it’s not like “10 arguments for quitting sugar”. I regard it more as an absolutely essential collection of insights (from a Silicon Valley insider) about why basic democratic and progressive norms seem to be undermined as a consequence of how social media works.

“But for the moment we face a terrifying, sudden crisis…
Something is drawing young people away from democracy.”
(Jaron Lanier, Ten Arguments…)

Algorithm Politics & mass manipulation of humans

“The short-term, dopamine-driven feedback loops we’ve created are destroying how society works.”
Former Facebook vice president of user growth (quoted by Lanier)

Jaron’s book argues that while we should generally embrace the internet, we need to urgently reject what he calls “BUMMER” (his acronym for the destructive core of social media, short for “Behaviors of Users Modified and Made into an Empire for Rent”).

BUMMER is a sort of high-level business plan in which the end-users of social media are the product, not the customer (that’s why social media is free to use). The real customers are those who want to modify your behaviour in some way. The basic argument is that, statistically, social media algorithms boost certain negative aspects of human communication, since that’s what maximises engagement with the platform (thus maximising profit for the social media companies).

The algorithms don’t care how they maximise user engagement – it happens automatically (continually “optimised”), and it just so happens that tribalism and nasty adversarial conflicts tend to engage people more efficiently than, say, pleasantly reasonable discourse does. Nor do the algorithms care if the result is user addiction (with its related mental health problems).
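
To picture that blind optimisation, here’s a toy Python sketch (the scoring function and numbers are my own inventions, not anything from Lanier’s book):

  posts = [
      {"text": "Thoughtful, nuanced policy discussion", "outrage": 0.1, "novelty": 0.6},
      {"text": "THEY are destroying everything you love", "outrage": 0.9, "novelty": 0.5},
      {"text": "Nice photo of a sunset", "outrage": 0.0, "novelty": 0.3},
  ]

  def predicted_engagement(post):
      # A stand-in for a learned model. It has no concept of "tribalism";
      # it only knows that outrage correlates with time-on-site.
      return 0.3 * post["novelty"] + 0.7 * post["outrage"]

  feed = sorted(posts, key=predicted_engagement, reverse=True)
  for p in feed:
      print(round(predicted_engagement(p), 2), p["text"])
  # the angriest post wins the top slot, automatically

Nothing in the code “wants” conflict; the ranking simply rewards whatever keeps us scrolling.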

“Social media is biased, not to the Left or the Right, but downward. The relative ease of using negative emotions for the purposes of addiction and manipulation makes it relatively easier to achieve undignified results. An unfortunate combination of biology and math favors degradation of the human world. Information warfare units sway elections, hate groups recruit, and nihilists get amazing bang for the buck when they try to bring society down.

“The unplanned nature of the transformation from advertising to direct behavior modification caused an explosive amplification of negativity in human affairs.” (Lanier, Ten Arguments…)

As the book frames it: “Social media is turning you into an asshole”. I’m reminded of the quote provided by Robert Anton Wilson at the beginning of his chapter on “The SNAFU principle” in Prometheus Rising:

“…the peculiar nature of the game…makes it impossible for [participants] to stop the game once it is under way. Such situations we label games without end.” (Watzlawick, Beavin, Jackson, Pragmatics of Human Communication – full quote here)

As for those who want to modify your behaviour, they range from advertisers to malign (and often secretive) parties seeking to amplify hatreds or swing elections. (Lanier doesn’t shy away from tackling emotive/controversial topics, such as Russian state exploitation of social media for disruptive purposes).

“Remember how it became cool in some liberal circles to cruelly ridicule Hillary, as if doing so were a religion? In the age of BUMMER you can’t tell what was organic and what was engineered.

“It’s random that BUMMER favored the Republicans over the Democrats in U.S. politics, but it isn’t random that BUMMER favored the most irritable, authoritarian, paranoid, and tribal Republicans. All those qualities are equally available on the left.” (Lanier, Ten Arguments…)

(Remember when Facebook promoted the “trending news” that “most doctors polled” had “serious concerns” about Hillary Clinton’s health, including the suggestion, in a poll question, that Hillary was a “flaming psychopath”? This “news” originally came from a rightwing group, AAPS, that promoted conspiracy theories, including that “vaccines cause autism”. It was also promoted by Trump and Wikileaks).

I found Lanier’s book to be an entertaining read, rich in insights (and in things you need to know about) – I recommend you read the whole thing for yourself. The bottom line is that the algorithms constantly monitor the micro-level views/behaviours of hundreds of millions of people (you could call it the result of our “adaptive” unconsciouses) – via our online responses, preferences, framing, etc – on an individual, targeted level, through personalised social media feeds and searches, instantaneously, in real time. And they modify behaviour (so Lanier argues) in ways we’re not conscious of, at the whim of parties who don’t have our best interests in mind.

Those algorithms? Lanier remarks that they’re among the best kept secrets on the planet – more carefully guarded than NSA or CIA state secrets. It’s worth quoting at length one example of how the book describes them as working:

“Black activists and sympathizers were carefully cataloged and studied. What wording got them excited? What annoyed them? What little things, stories, videos, anything, kept them glued to BUMMER? What would snowflake-ify them enough to isolate them, bit by bit, from the rest of society? What made them shift to be more targetable by behavior modification messages over time? The purpose was not to repress the movement but to earn money. The process was automatic, routine, sterile, and ruthless.

“Meanwhile, automatically, black activism was tested for its ability to preoccupy, annoy, even transfix other populations, who themselves were then automatically cataloged, prodded, and studied. A slice of latent white supremacists and racists who had previously not been well identified, connected, or empowered was blindly, mechanically discovered and cultivated, initially only for automatic, unknowing commercial gain – but that would have been impossible without first cultivating a slice of BUMMER black activism and algorithmically figuring out how to frame it as a provocation.

“BUMMER was gradually separating people into bins and promoting assholes by its nature, before Russians or any other client showed up to take advantage. When the Russians did show up, they benefited from a user interface designed to help ‘advertisers’ target populations with tested messages to gain attention. All the Russian agents had to do was pay BUMMER for what came to BUMMER naturally.” (Jaron Lanier, Ten Arguments…)

Update: I recommend watching The Great Hack (a new Netflix documentary), as it makes some of the same points that Lanier does about the urgency of the situation. It covers the threat to democracy posed by the new kind of “weapons grade” psychological propaganda that’s researched (and used) by entities such as Cambridge Analytica and SCL, using social media data mining, etc.

By the way, I’m aware that descriptions of this material (including my own, probably) sometimes sound a bit like paranoid sci-fi melodrama. Even the more sober reports often add to that effect. Read about the interventions of SCL Group (Cambridge Analytica’s parent company) in the 2010 elections in Trinidad and Tobago, for instance.

Written by NewsFrames

November 15, 2019 at 2:05 pm