Inductive Reasoning & Being Wrong
Feb 14, 2013 – For Descartes, error meant believing something based on insufficient evidence. St Augustine arrived at a similar notion 1,200 years earlier, but presumably rejected it due to theological implications (eg lack of evidence supporting the doctrine of how the serpent approached Eve).
Believing stuff based on meagre evidence is what people do. And as Kathryn Schulz notes, in Being Wrong, it’s not something that we do only occasionally – we do it all the time. As she puts it, “believing things based on paltry evidence is the engine that drives the entire miraculous machinery of human cognition”.
It seems understandable that our nervous systems function in this way. How much evidence do you need to show you that bumping into things hurts? It’s not in your best interests to go around bumping into everything just to accumulate a lot of evidence that it’s painful. Once or twice is enough.
On this “physical” level, human cognition isn’t about amassing “sufficient” evidence or looking for counterevidence – it’s about efficient ways to adapt/survive. This doesn’t normally include logic, scepticism, doubt, systematic experimentation, etc. And yet it works well for dealing with a large part of our ‘reality’ (including learning language – which we’ll come to).
So, our “default” cognitive operating system doesn’t resemble our idealised view of ourselves as reasonable people who weigh the “factual” evidence. And, anyway, as I’ve mentioned elsewhere on this blog, we tend not to think in “facts” or logical propositions – mostly we think in metaphorical frames, especially in areas more abstract or complex than, say, object A bumping into object B.
This brings us to “inductive reasoning” – the act of guessing based on past experience. Unlike formal-logic “deductive” reasoning, inductive thinking yields beliefs which are only, at best, probabilistically true (not necessarily true). To cite David Hume’s famous example: How can you be certain that all swans are white if you’ve only seen a tiny fraction of all the swans ever to exist? No matter how many white swans you see, you’ll only be adding to an accumulation of “evidence”, rather than deducing the necessary colour of swans. So, inductions can never be proven in an absolute or necessary sense, but they can be “corroborated” (with evidence) to the effect that they’re regarded as more likely to be true than is the next best guess. They can, however, be falsified – ie proven wrong – in this case by the discovery of black swans. This business of inductive corroboration & falsification forms a large part of idealised “scientific method” (in theory at least).
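The asymmetry between corroboration and falsification can be put in numbers. Here’s a minimal sketch (my own formalisation, not anything from Schulz or Hume) using Laplace’s “rule of succession” to show how each white swan nudges confidence upward without ever reaching certainty, while a single black swan kills the universal claim outright:

```python
def prob_next_swan_white(white_seen: int, black_seen: int) -> float:
    """Estimated probability that the next swan is white, given the
    observations so far (Laplace's rule of succession)."""
    return (white_seen + 1) / (white_seen + black_seen + 2)

def all_swans_white_still_possible(black_seen: int) -> bool:
    """The universal claim 'all swans are white' survives any amount of
    corroboration, but one black swan falsifies it outright."""
    return black_seen == 0

# Corroboration: confidence climbs but never hits 1.0
for n in (1, 10, 1000):
    print(n, round(prob_next_swan_white(n, 0), 4))

# Falsification: one counterexample and the universal claim is dead
print(all_swans_white_still_possible(black_seen=1))
```

The point of the sketch: no number of white swans pushes the estimate to 1.0, which is exactly why induction can only corroborate, never prove.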
At this point, I think I’ll just quote some brief excerpts straight from Kathryn Schulz’s book (particularly from the chapter on “Evidence”), since she puts things so clearly and there’s no sense in making extra work for myself. (Schulz is particularly good on the chilling pitfalls of inductive reasoning – ‘confirmation bias’, stereotyping, etc):
“Psychologists and neuroscientists increasingly think that inductive reasoning undergirds virtually all of human cognition. You make best guesses based on your cumulative exposure to the evidence every day, both unconsciously and consciously.”
“This kind of guesswork is also how you learned almost everything you know about the world. Take language. Your parents didn’t teach you to talk by sitting you down and explaining that English is a subject-verb-object language, that most verbs are converted to the past tense by adding the suffix “-ed,”… and so forth. Mercifully for everyone involved, they didn’t have to. All they had to do was keep on chatting about how Mommy poured the milk and Laura painted a pretty picture, and you figured it out by yourself.”
“One reason the great linguist Noam Chomsky thought language learning must be innate is that the entire corpus of spoken language (never mind the subset spoken to children under four) doesn’t seem to contain enough evidence to learn all of grammar. He called this problem “the poverty of the stimulus.” In particular, he pointed out, children never hear examples of grammatical structures that aren’t permissible in their language, such as “Mommy milk poured” or “picture pretty painted Laura.” This raises the question of how kids know such structures aren’t permissible, since, in formal logic, never hearing such sentences wouldn’t mean that they don’t exist. [As logicians say, lack of evidence is not evidence of a lack.] But if we learn language inductively, the poverty of the stimulus might not be a problem after all. It’s a good bet that if you’ve been paying attention to language for four years and you’ve never heard a certain grammatical form before, you are never going to hear it. Inductively, lack of evidence actually is evidence of a lack.”
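Schulz’s inductive answer to the poverty of the stimulus can be sketched numerically. Assume – and this is my assumption, not hers – that any genuinely grammatical pattern turns up in at least, say, 1 in 1,000 utterances. Then the probability of hearing *zero* examples over years of listening shrinks towards nothing, which is why “never heard it” becomes strong inductive evidence against the form:

```python
def prob_never_heard(n_utterances: int, min_rate: float = 0.001) -> float:
    """P(zero occurrences in n utterances), assuming the form is
    grammatical and appears at rate min_rate per utterance.
    The 1-in-1000 rate is an illustrative assumption."""
    return (1 - min_rate) ** n_utterances

# After enough exposure, silence is no longer an accident:
for n in (100, 10_000, 1_000_000):
    print(n, prob_never_heard(n))
```

After a hundred utterances, silence about a form proves little; after a few years’ worth, it’s damning – inductively, lack of evidence really has become evidence of a lack.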
“However slapdash it might initially seem, this best-guess style of reasoning is critical to human intelligence. In fact, these days, inductive reasoning is the leading candidate for actually being human intelligence.”
“[L]eaping to conclusions is what we always do in inductive reasoning, but we generally only call it that when the process fails us – that is, when we leap to wrong conclusions. In those instances, our habit of relying on meager evidence, normally so clever, suddenly looks foolish. […] Since the whole point of inductive reasoning is to draw sweeping assumptions based on limited evidence, it is an outstanding machine for generating stereotypes. Think about the magnitude of the extrapolation involved in going from “This swan is white” to “All swans are white.” In context, it seems unproblematic, but now try this: “This Muslim is a terrorist” – “All Muslims are terrorists.” Suddenly, induction doesn’t seem so benign.”
“If the stereotypes we generate based on small amounts of evidence could be overturned by equally small amounts of counterevidence, this particular feature of inductive reasoning wouldn’t be terribly worrisome. A counterexample or two would give the lie to false and pernicious generalizations, and we would amend or reject our beliefs accordingly. But this is the paradox of inductive reasoning: although small amounts of evidence are sufficient to make us draw conclusions, they are seldom sufficient to make us revise them.”
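That paradox – small evidence forms beliefs, equally small counterevidence rarely revises them – amounts to asymmetric updating. A toy model (my construction, not the book’s) makes the stickiness visible: update belief in log-odds, but discount counterevidence heavily:

```python
import math

def to_prob(logodds: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-logodds))

def run(observations: list, discount: float = 0.9) -> float:
    """Belief in a generalisation after a sequence of observations
    (True = confirming, False = disconfirming). Counterevidence is
    discounted by `discount` -- the assumed asymmetry that makes
    stereotypes sticky."""
    logodds = 0.0  # start at 50/50
    for confirms in observations:
        logodds += 1.0 if confirms else -(1 - discount)
    return to_prob(logodds)

# Equal evidence on both sides, yet belief ends far above 50%:
print(run([True, True, True, False, False, False]))
```

With three confirmations and three disconfirmations – evidence that should leave you where you started – the discounted updater ends up strongly convinced anyway.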
“We don’t gather the maximum possible evidence in order to reach a conclusion; we reach the maximum possible conclusion based on the barest minimum of evidence. […] We don’t assess evidence neutrally; we assess it in light of whatever theories we’ve already formed on the basis of whatever other, earlier evidence we have encountered.”
“Sometimes, by contrast, we see the counterevidence just fine – but, thanks to confirmation bias, we decide that it has no bearing on the validity of our beliefs. In logic, this tendency is known, rather charmingly, as the No True Scotsman fallacy. Let’s say you believe that no Scotsman puts sugar in his porridge. I protest that my uncle, Angus McGregor of Glasgow, puts sugar in his porridge every day. “Aye,” you reply, “but no true Scotsman puts sugar in his porridge.” So much for my counterevidence – and so much the better for your belief. This is an evergreen rhetorical trick, especially in religion and politics. As everyone knows, no true Christian supports legalized abortion (or opposes it), no true follower of the Qur’an supports suicide bombings (or opposes them), no true Democrat supported the Iraq War (or opposed it)…et cetera.”
“The Iraq War also provides a nice example of another form of confirmation bias. At a point when conditions on the ground were plainly deteriorating, then-President George W. Bush argued otherwise by, in the words of the journalist George Packer, “interpreting increased violence in Iraq as a token of the enemy’s frustration with American success.” Sometimes, as Bush showed, we look straight at the counterevidence yet conclude that it supports our beliefs instead.”
“The final form of confirmation bias I want to introduce is by far the most pervasive – and, partly for that reason, by far the most troubling. On the face of it, though, it seems like the most benign, because it requires no active shenanigans on our part. […] Instead, this form of confirmation bias is entirely passive: we simply fail to look for any information that could contradict our beliefs.”
“You don’t need to be one of history’s greatest scientists to combat your inductive biases. Remembering to attend to counterevidence isn’t difficult; it is simply a habit of mind. But, like all habits of mind, it requires conscious cultivation. Without that, the first evidence we encounter will remain the last word on the truth. That’s why so many of our strongest beliefs are determined by mere accidents of fate: where we were born, what our parents believed, what other information shaped us from our earliest days. Once that initial evidence takes hold, we are off and running. No matter how skewed or scanty it may be, it will form the basis for all our future beliefs. Inductive reasoning guarantees as much.”