April 25, 2013
I wrote this in the 1990s (for a magazine). I’m resurrecting it here for two reasons – 1) a recent Guardian article (News is bad for you) makes similar points, and, 2) I’ve had my fill of “news” lately, and plan to practise what I preach here…
“Information anxiety” is caused by the “ever widening gap between what we understand and what we think we should understand”, according to Richard Saul Wurman, who coined the term. But what makes us think we should understand any of it?
There are two common notions about “being informed”: i) it’s irresponsible not to be, and ii) it’s unsafe not to be. In other words, social consensus (which defines “irresponsible”) and basic survival anxieties (which define “unsafe”) lead to information anxiety – so perhaps it shouldn’t be underestimated as a social influence.
Most people probably feel Oprahfied to some extent – ie pressured to have opinions on everything the media defines as important. And they fear falling behind. (According to a report in the Guardian,[1] nearly half the population have this fear).
This is partly due to “good marketing” – the advertisers’ and content-providers’ constant drip, drip of things you “should” know about is intended to induce anxiety, so you spend money to relieve it. (A major UK company’s marketing chief once admitted to me that his profession was concerned entirely with stimulating consumer fear and greed).[2]
As a selling strategy, “fear of being left out” has no limits when applied to media (entertainment/information-based) products. There’s a limit to how many cars you need, but there’s no limit to what you “should” know about.
The info-anxiety theory recommends that we find more effective ways to process information, so we can absorb more without being overwhelmed. A better approach, however, might be to simply filter out the 99.9% of information that serves no purpose for you.
How much “information” consists of people making noises to avoid listening to themselves think? Media presenters tend not to be quietly reflective. The over-representation of “loud” personalities on TV no doubt contributes to the increasingly accepted notion that “quiet introspection” is a mental illness – peaceful isolation from extroversion and media noise seems an increasingly scarce commodity.
Fortunately, you don’t need a cave to escape to – you can take a holiday from info-noise without going anywhere, simply by changing a few parameters of your mental processes. This technique has existed in various forms for centuries – used by “eccentrics” who wanted to revive their faculty of thinking, as opposed to having other people’s thoughts (ie reflection rather than verbal loops).
Side effects included improved imagination and weirder dreams. You might enjoy trying it:
→ For a set period (eg 1 or 2 weeks), completely avoid TV, newspapers, magazines, radio, browsing in newsagents, topical chatter, etc [2013 update: add online news & social media to the list]. This is done by refusing such stimuli any admittance to your mind.
Mass-media “information” largely consists of non-useful, vaguely entertaining distraction. Of the non-trivial, non-amusement content (eg some of “the news”), most concerns things you’re powerless to influence. (Conversely, the issues you might influence seem notably absent).
Why clutter your brain with things you can do nothing about? How can it be irresponsible or unsafe to ignore it, if (at best) it’s of no positive use to you, and (at worst) it damages your health?
2013 addition: The recent Guardian piece I mentioned makes pretty much the same points (plus several others). I recommend a good look at it. Here are a few quotes:
“Thinking requires concentration. Concentration requires uninterrupted time. News pieces are specifically engineered to interrupt you. They are like viruses that steal attention for their own purposes. News makes us shallow thinkers. But it’s worse than that. News severely affects memory.”
“Most news consumers – even if they used to be avid book readers – have lost the ability to absorb lengthy articles or books. After four, five pages they get tired, their concentration vanishes, they become restless. It’s not because they got older or their schedules became more onerous. It’s because the physical structure of their brains has changed.”
(‘News is bad for you’, Guardian, 12/4/13)
[1] The Guardian, 22/10/96
[2] M&SFS Head of Marketing, 1990
April 8, 2013 – JK Rowling should perhaps be given a Nobel Prize for getting a generation of kids to read books. As if that wasn’t enough, she’s generated endless amounts of tax revenue. How was this phenomenon nurtured? By a little time and space on the dole.
You’d be surprised how many successful people developed their craft on the dole. In a way, most successful corporations also require a long period on the dole. Do you think Boeing and Microsoft would have achieved commercial success without decades of state-funded research and development in aerospace and computing?
Any true wealth-generating activity requires periods of “social nurturing” which aren’t profitable. They’re not self-funding in the short term; they are dependent. (We realise this for children – we call it “education”. The money spent on it is regarded as social investment).
“Investment” (in human beings) was also one of the ideas – along with “safety net” – behind “social security”. The welfare state was created in the forties, in a post-war economy which was nowhere near as wealthy as now (imagine: computer technology didn’t exist).
But, for decades, the rightwing press, “free market” think-tanks, politicians and pundits (not just of the right) have wanted you to think differently about social security. They want you to think of “welfare” as an unnecessary nuisance which costs more than everything else combined.
To that end, a simple set of claims, accompanied by a certain type of framing, is relentlessly pushed into our brains by newspaper front pages and TV and internet screens. It has two main components:
- Vastly exaggerate the real cost of “welfare” and falsely portray it as “spiralling out of control” (how this is done is explained here and here). Misleadingly include things like pensions in the total cost when you’re talking about unemployment. (This partly explains why people believe unemployment accounts for 41% of the “welfare” bill, when it accounts for only 3% of the total).
- Appeal to the worst aspects of social psychology by repeatedly associating a stereotype (the “benefits scrounger/cheat”) with the concept of “welfare”. One doesn’t have to be a prison psychologist to understand how anger and frustration are channeled towards those perceived as lower in the pecking order: “the scum”. (According to a recent poll, people believe the welfare fraud rate is 27%, whereas the government estimates it as 0.7%).
It’s a potently malign cocktail. When imbibed repeatedly, there’s little defense against its effects. Even those who depend on benefits come to view benefits recipients in a harshly negative light (see Fern Brady’s article for examples). Those politicians who aren’t naturally aligned with rightwing ideology go on the defensive – they talk about “being tough” and “full employment”. It just reinforces the anti-welfare framing.
The strangely puritanical – and deeply irrational – obsession with “jobs”, “hard-working families”, etc, seems an integral part of the conservative framing, at a time in history when greater leisure for all is more than a utopian promise (thanks to the maturation of labour-saving technology). This is perhaps why many on the “left” find it difficult to provide counter-narratives.
But that would require another article. For now I’ll leave you with a short video explaining Basic Income – a fast-spreading idea which is highly relevant to the above. (Guardian columnist George Monbiot recently championed Basic Income as a “big idea” to unite the left).
Feb 14, 2013 – For Descartes, error meant believing something based on insufficient evidence. St Augustine arrived at a similar notion 1,200 years earlier, but presumably rejected it due to theological implications (eg lack of evidence supporting the doctrine of how the serpent approached Eve).
Believing stuff based on meagre evidence is what people do. And as Kathryn Schulz notes, in Being Wrong, it’s not something that we do only occasionally – we do it all the time. As she puts it, “believing things based on paltry evidence is the engine that drives the entire miraculous machinery of human cognition”.
It seems understandable that our nervous systems function in this way. How much evidence do you need to show you that bumping into things hurts? It’s not in your best interests to go around bumping into everything just to accumulate a lot of evidence that it’s painful. Once or twice is enough.
On this “physical” level, human cognition isn’t about amassing “sufficient” evidence or looking for counterevidence – it’s about efficient ways to adapt/survive. This doesn’t normally include logic, scepticism, doubt, systematic experimentation, etc. And yet it works well for dealing with a large part of our ‘reality’ (including learning language – which we’ll come to).
So, our “default” cognitive operating system doesn’t resemble our idealised view of ourselves as reasonable people who weigh the “factual” evidence. And, anyway, as I’ve mentioned elsewhere on this blog, we tend not to think in “facts” or logical propositions – mostly we think in metaphorical frames, especially in areas more abstract or complex than, say, object A bumping into object B.
This brings us to “inductive reasoning” – the act of guessing based on past experience. Unlike formal-logic “deductive” reasoning, inductive thinking yields beliefs which are only probabilistically true (not necessarily true). To cite David Hume’s famous example: How can you be certain that all swans are white if you’ve only seen a tiny fraction of all the swans ever to exist? No matter how many white swans you see, you’ll only be adding to an accumulation of evidence, rather than deducing the necessary color of swans. So, inductions can never be proven in an absolute or necessary sense, but they can be corroborated (with evidence) to the effect that they’re regarded as more likely to be true than the next best guess. They can, however, be falsified, ie proven wrong – in this case by the discovery of black swans. This business of inductive corroboration & falsification forms a large part of “scientific method” (in theory at least).
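To see why corroboration and falsification aren’t symmetrical, here’s the swan example written out in standard predicate-logic shorthand (a minimal sketch in my notation, not Hume’s):

```latex
% The universal claim "all swans are white":
\forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)

% No finite list of confirming instances entails it...
\mathrm{Swan}(a_1) \wedge \mathrm{White}(a_1),\ \dots,\ \mathrm{Swan}(a_n) \wedge \mathrm{White}(a_n)
  \;\not\vdash\; \forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)

% ...but a single counterexample deductively refutes it:
\mathrm{Swan}(b) \wedge \neg\mathrm{White}(b)
  \;\vdash\; \neg\forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)
```

Verification would require inspecting every swan that will ever exist; falsification needs just one black swan. Corroborating evidence can raise confidence, but it can never settle the matter.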
At this point, I think I’ll just quote some brief excerpts straight from Kathryn Schulz’s book (particularly from the chapter on “Evidence”), since she puts things so clearly that there’s no point in making extra work for myself. (Schulz is especially good on the chilling pitfalls of inductive reasoning – ‘confirmation bias’, stereotyping, etc):
“Psychologists and neuroscientists increasingly think that inductive reasoning undergirds virtually all of human cognition. You make best guesses based on your cumulative exposure to the evidence every day, both unconsciously and consciously.”
“This kind of guesswork is also how you learned almost everything you know about the world. Take language. Your parents didn’t teach you to talk by sitting you down and explaining that English is a subject-verb-object language, that most verbs are converted to the past tense by adding the suffix “-ed,”… and so forth. Mercifully for everyone involved, they didn’t have to. All they had to do was keep on chatting about how Mommy poured the milk and Laura painted a pretty picture, and you figured it out by yourself.”
“One reason the great linguist Noam Chomsky thought language learning must be innate is that the entire corpus of spoken language (never mind the subset spoken to children under four) doesn’t seem to contain enough evidence to learn all of grammar. He called this problem “the poverty of the stimulus.” In particular, he pointed out, children never hear examples of grammatical structures that aren’t permissible in their language, such as “Mommy milk poured” or “picture pretty painted Laura.” This raises the question of how kids know such structures aren’t permissible, since, in formal logic, never hearing such sentences wouldn’t mean that they don’t exist. [As logicians say, lack of evidence is not evidence of a lack.] But if we learn language inductively, the poverty of the stimulus might not be a problem after all. It’s a good bet that if you’ve been paying attention to language for four years and you’ve never heard a certain grammatical form before, you are never going to hear it. Inductively, lack of evidence actually is evidence of a lack.”
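[Aside, not Schulz: a quick back-of-envelope way to see that last point. The figures below are invented for illustration – if a grammatical form would turn up with some small frequency in ordinary speech, the odds of never hearing it shrink exponentially with exposure, so prolonged silence becomes strong inductive evidence that the form isn’t in the language:

```python
# Back-of-envelope sketch (my illustration; the numbers are invented).
# If a grammatical form appeared with frequency p per utterance, then the
# probability of hearing *zero* instances across n independent utterances
# is (1 - p) ** n.

def prob_never_heard(p: float, n: int) -> float:
    """Chance of no occurrences of the form in n utterances."""
    return (1 - p) ** n

# Suppose the form would occur once per 1,000 utterances if it were
# grammatical, and a four-year-old has heard ~100,000 utterances:
print(prob_never_heard(0.001, 100_000))  # ~3.5e-44 – vanishingly unlikely
```

Deductively, that silence proves nothing; inductively, it’s all the proof a four-year-old needs.]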
“However slapdash it might initially seem, this best-guess style of reasoning is critical to human intelligence. In fact, these days, inductive reasoning is the leading candidate for actually being human intelligence.”
“[L]eaping to conclusions is what we always do in inductive reasoning, but we generally only call it that when the process fails us – that is, when we leap to wrong conclusions. In those instances, our habit of relying on meager evidence, normally so clever, suddenly looks foolish. [...] Since the whole point of inductive reasoning is to draw sweeping assumptions based on limited evidence, it is an outstanding machine for generating stereotypes. Think about the magnitude of the extrapolation involved in going from “This swan is white” to “All swans are white.” In context, it seems unproblematic, but now try this: “This Muslim is a terrorist” – “All Muslims are terrorists.” Suddenly, induction doesn’t seem so benign.”
“If the stereotypes we generate based on small amounts of evidence could be overturned by equally small amounts of counterevidence, this particular feature of inductive reasoning wouldn’t be terribly worrisome. A counterexample or two would give the lie to false and pernicious generalizations, and we would amend or reject our beliefs accordingly. But this is the paradox of inductive reasoning: although small amounts of evidence are sufficient to make us draw conclusions, they are seldom sufficient to make us revise them.”
“We don’t gather the maximum possible evidence in order to reach a conclusion; we reach the maximum possible conclusion based on the barest minimum of evidence. [...] We don’t assess evidence neutrally; we assess it in light of whatever theories we’ve already formed on the basis of whatever other, earlier evidence we have encountered.”
“Sometimes, by contrast, we see the counterevidence just fine – but, thanks to confirmation bias, we decide that it has no bearing on the validity of our beliefs. In logic, this tendency is known, rather charmingly, as the No True Scotsman fallacy. Let’s say you believe that no Scotsman puts sugar in his porridge. I protest that my uncle, Angus McGregor of Glasgow, puts sugar in his porridge every day. “Aye,” you reply, “but no true Scotsman puts sugar in his porridge.” So much for my counterevidence – and so much the better for your belief. This is an evergreen rhetorical trick, especially in religion and politics. As everyone knows, no true Christian supports legalized abortion (or opposes it), no true follower of the Qur’an supports suicide bombings (or opposes them), no true Democrat supported the Iraq War (or opposed it)…et cetera.”
“The Iraq War also provides a nice example of another form of confirmation bias. At a point when conditions on the ground were plainly deteriorating, then-President George W. Bush argued otherwise by, in the words of the journalist George Packer, “interpreting increased violence in Iraq as a token of the enemy’s frustration with American success.” Sometimes, as Bush showed, we look straight at the counterevidence yet conclude that it supports our beliefs instead.”
“The final form of confirmation bias I want to introduce is by far the most pervasive – and, partly for that reason, by far the most troubling. On the face of it, though, it seems like the most benign, because it requires no active shenanigans on our part. [...] Instead, this form of confirmation bias is entirely passive: we simply fail to look for any information that could contradict our beliefs.”
“You don’t need to be one of history’s greatest scientists to combat your inductive biases. Remembering to attend to counterevidence isn’t difficult; it is simply a habit of mind. But, like all habits of mind, it requires conscious cultivation. Without that, the first evidence we encounter will remain the last word on the truth. That’s why so many of our strongest beliefs are determined by mere accidents of fate: where we were born, what our parents believed, what other information shaped us from our earliest days. Once that initial evidence takes hold, we are off and running. No matter how skewed or scanty it may be, it will form the basis for all our future beliefs. Inductive reasoning guarantees as much.”