
Wednesday 19 January 2022

Review: Neil Levy, Bad Beliefs: Why They Happen to Good People

This is a very interesting, very readable book, not least because the author, though writing as a professional philosopher, draws on a range of theories and data culled from contemporary cognitive and behavioural sciences. His leading idea is that we live in an epistemic [knowledge] environment which is now heavily polluted by poor research, fake news, conspiracy theories, and so on, but where understandable concerns about free speech and poorly thought-through notions of “balance” have so far prevented any serious clean-up. This is important because self-help strategies alone cannot rescue individuals from forming false beliefs and making unfounded claims to knowledge. Indeed, it is the very fact that we are rational animals deploying our efforts in scarce real time that often enough leads us to false beliefs. We are reliant on those around us and on society’s institutions for our sense of what is true and what is false, and they ain’t making it easy. Reform is needed.

These problems are not as contemporary as Levy presents them; there is a long back-story of concern about the power of charlatans, demagogues, and revolutionaries to lead even the thoughtful astray. That concern, in a recognisably modern form, dates at least from the eighteenth century. For example, when the charlatan Anton Mesmer started to make a big hit in pre-1789 France, the leaders of institutionalised French science and philosophy sought to articulate ways in which an ordinary non-expert person might see that he was a charlatan even without detailed understanding of his theories, such as they were[1]. True, in retrospect Mesmer clearly had some prescient grasp of aspects of the human mind which were then little understood. But this was incidental: Mesmer was running a (literal) theatre for the sake of fame and money. Something similar might be said of Jacques Lacan in the latter part of his life[2].

It would be easy to suppose that the institutional leaders were only concerned to keep outsiders outside but, as it happens, the most intellectually powerful figure among them - the Marquis de Condorcet - was not only a democrat who believed in a wide franchise (which would include women) but was responsible for a 1785 work in the theory of probabilities which showed that there is, as far as knowledge is concerned, strength in numbers: if certain conditions are met, it is rational to defer to majority opinion, because the majority is more likely to be right than wrong and the bigger the absolute gap between majority and minority, the greater the reliability. Condorcet and his followers took juries as a prime location for the application of his probability work, for it leads to the conclusion that - conditions met - larger juries are better than smaller ones, and a requirement of unanimity or near-unanimity is much, much better than a simple majority vote. What Condorcet could not anticipate was the TV programme Who Wants to Be a Millionaire?, which demonstrates the truth of his claims in near-perfect form.

Condorcet showed that majority voting is a good guide to truth when three conditions are met:


(1) each individual voter is more likely to be right than wrong on any one occasion (p > 0.5) - and the more enlightened (knowledgeable) each voter, the better;

(2) when voting, voters are trying to give the right answer;

(3) voters vote independently of each other - if one voter simply follows the lead of another, that reduces the effective number of voters.

If these conditions are met, then in a majority vote the probability of the majority being right increases (and quite dramatically, heading towards p = 1 [certainty]) the larger the absolute vote gap between majority and minority.
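
To make the arithmetic concrete, here is a minimal sketch in Python of the binomial calculation behind that claim - my own illustration rather than anything taken from Levy or Condorcet, with the function name and figures chosen purely for the example. It computes the chance that a strict majority of independent voters is right when each voter individually has probability p > 0.5 of being right.

from math import comb

def majority_correct_probability(n_voters: int, p: float) -> float:
    """Chance that a strict majority of n_voters independent voters,
    each correct with probability p, gives the right answer.
    An odd number of voters avoids ties."""
    majority = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p ** k * (1 - p) ** (n_voters - k)
               for k in range(majority, n_voters + 1))

# Even modestly competent voters (p = 0.6) make a large group very reliable:
# the probability heads towards 1 as the group grows.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_probability(n, 0.6), 4))

With p = 0.6 the figure climbs from 0.6 for a single voter towards virtual certainty for a thousand voters - which is just Condorcet’s point about strength in numbers.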

In the TV quiz show Who Wants to Be a Millionaire?, contestants have the right to ask the large audience for help with a question to which they don’t know the answer. Audience members are then offered four possible answers to the question and asked to select the right one from among them. It’s rational for the contestant to adopt the audience’s majority answer as their own - and for these reasons:

(1) the audience is likely to be quite knowledgeable: the live audience of a quiz show will contain a high proportion of people who are good at quizzes, so p is likely to be greater than 0.5;
(2) members of the audience have no motive to give answers they believe to be untrue (they enjoy giving right answers!);
(3) they vote independently of each other, using push-button consoles, with little or no time to consult the person sitting next to them and explicit instructions not to do so.

Hey presto: the audience's choice of right answer will, almost certainly, BE the right answer. If some researcher checked back over Ask the Audience choices, I think they would rarely find that the audience got it wrong. Ask the Audience is a no-brainer if you don't know the answer yourself[3].
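
For what it’s worth, the claim is easy to check in principle by simulation. The sketch below is my own back-of-envelope illustration, not anything from Levy, and it goes slightly beyond Condorcet’s binary case by using four options and a plurality vote: it assumes an audience of 200 in which only 40 per cent actually know the answer and everyone else guesses at random among the three wrong options, then estimates how often the most popular answer is the right one. All the names and figures are hypothetical.

import random
from collections import Counter

def ask_the_audience(n_audience: int = 200, p_know: float = 0.4,
                     trials: int = 10_000) -> float:
    """Estimated fraction of trials in which the most popular of four
    answers is the correct one, when each audience member independently
    knows the answer with probability p_know and otherwise guesses
    among the three wrong options."""
    wins = 0
    for _ in range(trials):
        votes = Counter()
        for _ in range(n_audience):
            if random.random() < p_know:
                votes["right"] += 1
            else:
                votes[random.choice(["wrong A", "wrong B", "wrong C"])] += 1
        if votes.most_common(1)[0][0] == "right":
            wins += 1
    return wins / trials

# With these figures the plurality answer is right in virtually every trial.
print(ask_the_audience())

Even with a fairly mediocre audience, the right answer almost always tops the poll, because the guessers spread themselves thinly across the wrong options while the knowers pile onto the right one.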

In nineteenth-century England, progressive thinkers who wanted to see a more democratic society with a wider franchise always met with objections which insisted that the masses were too ignorant and too liable to be seduced by charlatans for it to be safe to entrust them with the vote. To this, progressive thinkers - most obviously the Utilitarians - responded with proposals for extending education to all, but also with a defence of the desirability of deference to scientific authority. Deference is a key term in Levy’s book and his criteria for identifying those people and theories worthy of deference map very closely onto criteria proposed by Utilitarian thinkers. Thus Levy writes,

“A scientific consensus is reliable when it has been stress-tested, by all the disciplines relevant to the topic, for an extended period of time” (page 109)

Compare this passage from a book published in England in 1849:

“With respect to the subjects of speculation and science, the existence of an agreement of the persons having the above qualifications [natural ability, long study, personal honesty] is the most important matter. If all the able and honest men who have diligently studied the subject, or most of them, concur, and if this consent extends over several successive generations, at an enlightened period, and in all or most civilised countries, then the authority is at its greatest” (page 42) as it is in astronomy (page 43).

This passage is taken from An Essay on the Influence of Authority in Matters of Opinion by Sir George Cornewall Lewis, usually reckoned a minor Utilitarian thinker. One of his major themes is that when it comes to forming rational beliefs and opinions we have to rely on others; we cannot do it alone. This is the theme to which Levy devotes chapter 4 of his book, starting with a critique of the individualism of traditional epistemology and its impossible reliance on a chimera of “unaided individual cognition”. Sir George Cornewall Lewis provides the basic argument against this view:

“I firmly believe…that the earth moves round the sun; though I know not a tittle of the evidence from which the conclusion is inferred. And my belief is perfectly rational, though it rests upon mere authority….” (page 63).

That’s the way it is: we rely on authority much more than we imagine, we have to do so, and it is fully rational to do so IF the epistemic environment is not degraded. Unlike Levy, Lewis simply asserts that the environment is a healthy one, for the passage I have just quoted continues:

“….all who have scrutinised the evidence concur in confirming the fact; and have no conceivable motive to assert and diffuse the conclusion, but the liberal and beneficent desire of maintaining and propagating truth” (page 63)[4]

Would that it were ever thus! A substantial part of Neil Levy’s book tells the depressing story of the many ways in which it ain’t like that. I will not summarise the various stories, which extend well beyond the obvious ones: denying the Holocaust, denying climate change, denying the efficacy of vaccines. Levy’s central proposal is that we need to clean up the epistemic environment on which we have to rely, and he looks to nudge theory for inspiration, arguing in chapter 6 (“Nudging Well”) that nudges can support our personal rationality and autonomy, not manipulatively undermine them as is often enough alleged.

I can think of just one alternative that has ever been proposed: Auguste Comte reckoned that the intellectual environment of his time was pretty polluted; he thought the solution was to quarantine himself from it. He called this “mental hygiene”. I am sure he would have recommended disconnection from Facebook, Twitter, and much more besides.

[1] Robert Darnton, Mesmerism and the End of the Enlightenment in France (1968)

[2] Trevor Pateman, https://www.academia.edu/43059186/Jacques_Lacan_in_the_text_of_Elisabeth_Geblesco

[3] Trevor Pateman, https://www.academia.edu/43058086/Majoritarianism_An_argument_from_Rousseau_and_Condorcet

[4] https://www.academia.edu/43045987/Liberty_Authority_and_the_Negative_Dialectics_of_J_S_Mill

This paper, along with that on Majoritarianism previously cited, is based on my MPhil thesis How Is Political Knowledge Possible? (University of Sussex, submitted 1977, awarded 1978).


Tuesday 18 January 2022

Review: Michael Morris, Real Likenesses

Back in 1975 a philosopher, Colin Radford, presented a paper to the Aristotelian Society in London with the title “How Can We Be Moved by the Fate of Anna Karenina?” The paper has since been much discussed. There are at least two ways of hearing/reading the title: one invites us to look for an explanation of a remarkable phenomenon; the other invites us to pause and try to explain a hitherto taken-for-granted, common phenomenon.

My guess is that most adults are much more frequently moved by the fate of characters in novels, plays, and films than they are by the fate of people in “real” life, exception perhaps made for those who work in settings like hospital intensive care units. Equally, this being moved in “unreal” situations - to tears, laughter, or orgasm [tragedy, comedy, pornography] - is almost entirely occasioned by the narrative arts. To put it bluntly, you do not often see people leaving art galleries in tears.

The capacity to be moved by narratives appears early. Many years ago, I had begun reading very short book-at-bedtime stories to my younger daughter, using illustrated books and (as you do) pointing my finger as I read. I got to the point in one story where a young girl who has been entrusted with carrying a basket of eggs to market drops them and all the eggs break, a drawing - to which I pointed - illustrating that fact. Oh dear! At this point my daughter, aged two and a bit and until this moment listening quietly, burst into tears.

That sort of reaction is unlearnt. It is a primitive or, as I would say, a natural reaction. Nothing to see there, no special explanation required.

True, there is a larger adaptive-evolutionary story which can be connected to this unremarkable fact. I tell it first in relation to vision. Seeing is not something of which human beings are capable; it is something to which, unless blind, they are liable. That is thanks to a remarkably powerful vision module which goes into action at birth and rapidly achieves its more or less final state, though much later affected by the physical decline of our bodies. The vision module works very fast and very reliably, which is hugely adaptive; it saves us from many accidents and other catastrophes. True, it can be fooled, as it is by the very recently invented Müller-Lyer illusion, Escher’s drawings, and trompe l’oeil paintings. But this is a small price to pay for the many benefits which accrue from the fast operation of an encapsulated system which does not generally tolerate interference from our thinking. There are exceptions: someone may guide us towards seeing something which we don’t see spontaneously, often a resemblance which we didn’t catch - a resemblance which was not transparent but whose translucence could be seen through by means of a verbal hint.

Similarly, in my view, human beings are natural believers - they are credulous. That does mean that they are easily fooled, but being a believer is more adaptive, on balance, than being a natural sceptic. When I go hunting with my tribe, it is really not a good idea to play the sceptic when I hear one of my fellow-hunters shout, “Watch out! Snakes in the grass!” My natural credulity leads me to do the sensible thing; I watch out, promptly and without question. As it happens, this priority of credulity can also be given some kind of philosophical justification: you cannot even get going in life - even solitary life - unless you take things on trust. Things just have to be taken as they appear if you are to get started; doubt comes later and it’s a second-order ability (or, for some people, liability).

So it’s not surprising that we are fooled by fictions. There is no need for “willing suspension of disbelief” because there is simply no disbelief to be suspended. There is a part of the brain - a module if you like - which simply does not discriminate between fact and fiction. We know this in a common-sense way when we complain that horror films frighten us, romantic novels make us cry, and pornographic films arouse us sexually even when we don’t want them to. Well, as in the rest of life, want doesn’t mean get.

The story can be continued. When philosophers think about the use of language, they usually have in mind acts of speaking or reading, which are thought of as voluntary. They should more often think about hearing. If someone says something to you, audibly, in a language you understand, you have no choice about understanding the words even if you may not understand their import. Someone says, “I like your hat”, and that may be a compliment or sarcasm and you may not be sure which; but it is not a voluntary act or decision to hear the words “I like your hat”. You just hear them and understand them, at least at some basic/literal level. You may really, really dislike hearing and understanding certain words but you really, really can’t avoid it. That’s why you often press people not to use (or mention) them at all.

From that I derive the conclusion that the human capacity for language is not originally an ability; it begins as a liability - we are liable to language. That’s why children learn their mother tongue more or less effortlessly as they hear it being addressed to them and, more generally, spoken all around them. Language “acquisition” is something that happens to them, more like a viral infection than a property deal.

As a final example: the capacity to recognise objects depicted in a photograph or representational drawing is also a liability; it’s unlearnt and untaught. Deprivation experiments have been conducted to show that that is true: withhold all pictures from a child until the child can speak, and the child can name, unprompted, the objects in the photographs or drawings now revealed to them.

In his book Real Likenesses Michael Morris adopts the approach of the analytical philosopher, rather than that of the naturalist/psychologist, and seeks to characterise the formal status (ontological, epistemological, logical) of those artist-representations of things which affect us - in paintings, photographs and novels - rather as if they were real things. So a painted face is a real likeness of a face, a photographed face is a real likeness of a face, and likewise a fictional/novelistic character is a real likeness of a character. The argument is lucid but highly technical and I am not going to go into detail in this little review/essay. As an analytical philosopher, Morris aims for completeness, absence of contradiction, absence of falsifying examples, and no unresolved paradoxes. In philosophy, that’s always a tall order.

Towards the end of the book, Morris has a section (pp. 204-214) which discusses how proper names work in novels and he quotes the opening sentence of Muriel Spark's novel Memento Mori:

“Dame Lettie Colston re-filled her fountain pen and continued her letter”

The problem with this - as he partly acknowledges (p. 207) - is that “Dame Lettie Colston” is not a good example of a proper name. At its very first appearance, it is much more than a proper name; it’s loaded with descriptive content. I open the novel, perhaps knowing nothing about it, but three words in I already know a lot. I now know that the novel is likely to feature prominently - unless I am being tricked by an unreliable narrator - a woman (that takes out fifty percent of the world’s population) whose British/English title of “Dame” takes out about 99.9999 percent of the remaining fifty percent. The British/English probably resolves to English with the “Lettie Colston”, taking out the Scots, the Welsh and the Northern Irish. We are now down to a few thousand and possibly just a few hundred members of a class (“English Dames”) of whom Lettie Colston is going to be presented as a fictional member, either typical or untypical. We will have to continue reading in order to discover which. Proceed to the words “fountain pen” and, with a novel published in 1959 when the battle between fountain pen and biro was fully joined, we also have to read on to discover whether this shows Lettie taking sides or whether the time setting of the novel is a past in which biros did not exist. It is part of the novelist's skill to know how to keep us reading; in that first sentence, Spark tells (or shows) us quite a bit about one character - but not quite enough. So we have to read on.


Saturday 15 January 2022

Review: Abigail Dean, Girl A

I nearly didn’t buy this book from the Waterstones table. It was suffering from Sandwich Board failure. If one shopkeeper puts out a sandwich board on the pavement, they may gain some advantage from being better noticed. But if all shopkeepers put out sandwich boards, as they now do, pedestrians ignore them because they are too busy dodging around what has become a frustrating obstacle course.

The manic sandwich board department at HarperCollins has decorated this book with fifty-eight very short puffs on four sides of covers and two inside pages. Advertising 101 would have taught them that this is unlikely to convince, as recognised by one meta-puff which implores me to Believe the Hype.

It’s a pity because it’s a good novel which doesn’t need to be shoe-horned into the genre of “mystery thriller”. It’s well-crafted and structured through consistent use of alternation between Time Present and Time Past; it has a credible narrator and a credible story line developed towards a concluding revelation. The story line picks up on the known fact that the families of religious fundamentalists are often enough sites of awful child abuse of one kind or another. In the less grave cases, survivor memoirs can be found: in the recent past Tara Westover’s Educated and Rebecca Stott’s In the Days of Rain are examples which have literary as well as documentary value. Then there are the related novels, like those of Jeanette Winterson.

What I particularly admired in this novel was the way in which Abigail Dean successfully imagines very different and complex later outcomes for children who have been removed from a traumatically abusive home and placed with adoptive parents, their natural father now dead and their mother in prison - not least because two of the children had died in the House of Horrors, of violence and neglect.

I’d like to rescue the book from the hype. Let me put it this way: my impression is that Abigail Dean is a better writer than, say, Bernardine Evaristo, whose flat prose and happy-clappy themes I find tiring. But because she has written a “Genre Novel”, her book - rather like John le Carré’s - has been put into a parallel world where it won’t be given serious attention, reckoned suitable only for sandwich board treatment. But I think it stands up to careful reading and, hopefully, it will not be a one-off achievement.