SoftestPawn’s Weblog

Oh no, not another blog!

Archive for the ‘Evidence Based Beliefs’ Category

The Tired Duck Dilemma

Posted by softestpawn on October 24, 2014

“If it looks like a duck, sounds like a duck and walks like a duck, it’s probably a duck”

A tired duck searching for a safe place to land looks for peaceful ducks on the ground as a sign that the area contains no predators that would frighten them away.

This is also how ducks are shot: an artificial duck is placed in the open, a duck squawker is squawked, and the lure might be moved gently with a fishing line. Passing tired ducks see peaceful duck is peaceful, and fly into the guns of the hidden hunters.

Cautionary tales like this are used to remind us not to judge by appearance; to avoid letting our prejudices drive our decisions without the right evidence.

But that’s a logical failure too. Tired duck is tired; it has to make a decision now, on the evidence it has, about whether to land or struggle to fly to the next possibly safe place. Waiting for more evidence carries risks too.

So what can tired duck do? It can use its background experience – its models of the world, its prejudices, its heuristics tempered by a bit of careful thought – to tell it things about likelihood. Does peaceful duck look and sound and move very much like a duck? Is it the right time of year for that kind of duck look and duck sound and those duck moves that it’s throwing? That judgement will depend strongly on experience in order to ‘fill in’ the assessed situation from tiny bits of evidence. And if tired duck judges it safe, tries to land and is shot, as it plummets to the ground it can always console itself that if it had gone somewhere else it would only have been faced with the same, tired dilemma.
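(For those who like their dilemmas with numbers: tired duck’s judgement can be sketched as a bit of Bayesian updating followed by a comparison of expected costs. The sketch below is mine, not the duck’s, and every number in it – prior, cue likelihoods, costs – is invented purely for illustration.)

```python
# A sketch of the tired duck's decision: Bayesian updating followed by
# a comparison of expected costs. Every number here is invented.

def posterior(prior, p_cue_given_decoy, p_cue_given_real):
    """P(decoy | cue observed), via Bayes' rule."""
    p_cue = prior * p_cue_given_decoy + (1 - prior) * p_cue_given_real
    return prior * p_cue_given_decoy / p_cue

# Background experience ("prejudice"): most ponds are not hunting traps.
p_decoy = 0.05

# The cue: something about peaceful duck's moves looks slightly off,
# and an off cue is more likely from a lure than from a real duck.
p_decoy = posterior(p_decoy, p_cue_given_decoy=0.7, p_cue_given_real=0.2)

# Costs on an arbitrary scale: being shot is far worse than the
# exhaustion of struggling on to the next possibly-safe place.
cost_shot, cost_fly_on = 100.0, 5.0

print(f"P(decoy | cue) = {p_decoy:.2f}")
print("land" if p_decoy * cost_shot < cost_fly_on else "fly on")
```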

Posted in Evidence Based Beliefs

Debunking the Debunking Handbook

Posted by softestpawn on January 16, 2012

SkepticalScience has published the Debunking Handbook, which is intended to summarise how to show that an argument is wrong. Unfortunately… it is itself wrong in some fairly fundamental ways.

The summary at the beginning says:

“Debunking myths is problematic. Unless great care is taken, any effort to debunk misinformation can inadvertently reinforce the very myths one seeks to correct. To avoid these “backfire effects”, an effective debunking requires three major elements. First, the refutation must focus on core facts rather than the myth to avoid the misinformation becoming more familiar. Second, any mention of a myth should be preceded by explicit warnings to notify the reader that the upcoming information is false. Finally, the refutation should include an alternative explanation that accounts for important qualities in the original misinformation”

For a start, this is phrased to suggest that you already know what is fact and what is ‘myth’. That is, this is not a way of evaluating an argument for its worth or otherwise, but a way of selling a specific argument.

It’s not a debunk manual, it’s a ‘spin’ manual.

First: the refutation must focus on core facts rather than the myth

Any non-trivial problem has myriads of facts that can be interpreted in different ways to suggest different conclusions – this is what makes understanding people, the world and the universe so interesting. This guide says you should push the facts that support your views and avoid analysing those that contradict them. This is far from ‘debunking’ an argument; to focus on specific facts and avoid others is spin.

The example given is the claim by (some) climate skeptics that the sun has driven recent climate warming. The debunking is supposedly that the sun’s measured total radiation output does not match the warming of the last few decades, and therefore the skeptic claim is wrong. By itself this is Fine and Good, but it ignores the myriad effects that various solar outputs – different particles and radiation wavelengths – have on the atmosphere and thus on temperatures. The conclusion may well be right, but the text ignores or oversimplifies the facts that support an alternative view and picks those that support the agenda of the so-called ‘debunker’. This is not a debunk, it’s a sell.

Second: any mention of a myth should be preceded by explicit warnings

This is an obvious statement of intent: a claim that the argument is wrong without saying why.

It’s not even a refutation, let alone a debunk.

Finally: the refutation should include an alternative explanation

This is clearly wrong as it has nothing to do with showing how the initial argument is wrong, and can result in missing the point.

If you claim that aliens move clouds around, I can counter with a similarly clueless argument that the clouds are sentient and move themselves. The discussion can then move on to how silly it is to think clouds are sentient, and so lose focus on evaluating the original claim about aliens.

At worst, having shown that it is silly to think that clouds are sentient, a (poor) conclusion is that therefore aliens do indeed move clouds around, as the only alternative considered.

The Worldview Backfire Effect

It is ironic that such a publication should talk about how people are biased by their “worldviews and sense of cultural identity” without considering how they might affect the authors.

In particular I enjoyed the phrase “Self-affirmation and framing aren’t about manipulating people” because, clearly, they are (see also, for example, Tversky and Kahneman’s very interesting article “Judgment Under Uncertainty”). That’s what is interesting about them.

Removing framing to get at the underlying objective data and arguments is extremely difficult, and will continue to be so while publications like the “Debunking Handbook” encourage others to muddy the waters.

Posted in Evidence Based Beliefs, Metadebates, Science

BBC’s (Im)partial Science Reporting

Posted by softestpawn on September 23, 2010

The BBC is holding another review of its impartiality, this time on how it presents scientific subjects: Science impartiality review – terms of reference (PDF). It has existing guidelines, and it has held such reviews before on how it reports subjects such as religion and the Middle East. This is all Good Stuff, as the BBC’s reputation rests somewhat on the quality and reliability of its reporting, and reliability requires, among other things, impartial reporting.

One of the many frustrations for medically trained scientists, however, is the airtime and article space given to ‘alternative’ treatments such as homeopathy, reiki, acupuncture and so on. These are treatments that have not passed the objective tests used to identify those that actually work. These tests (double-blinded, randomised control groups, etc) are meant to bypass the personal and social prejudices and biases that affect our ability to properly evaluate effectiveness. They do not always succeed.

The concern is largely that by giving publicity to unproven, useless and sometimes dangerous treatments, the BBC lends them credibility and authority, and so more people may be taken in by them. When the BBC provides publicity to sites such as JABS, people may believe them to be officially sanctioned.

And so these concerned people do not want the BBC to give equal space to these cranks, charlatans and quacks. Such reporting is not truly balanced, they claim. If you’re going to report science, they say, you should report scientific science not pseudoscience.

Scientific science vs pseudoscience

Which all sounds well and good, but the BBC does not have the funds or indeed the expertise to properly evaluate every controversial issue.  For a start, only a few controversies can be tested in the clearly objective way that medical treatments can. 

The BBC may instead decide to defer all evaluation to certain establishment scientists and report only the expert opinions of people with certain qualifications from certain institutions; but this is not scientific. It is not uncommon for academic research scientists to fall prey to their own or others’ pseudoscience, even in related fields.

Nor does the BBC have the remit to make such evaluations or deferrals. A public controversy is one in which many people believe opposing things, frequently for unscientific reasons, and the BBC’s audience is the public. If the BBC fails to report the views of such people and how those views were derived, then it fails to engage with or inform the discussion.

The concerned may argue that such a discussion is not a scientific one: a programme on ghosts has no place under the Science label, for example. Yet the evaluation of sparse evidence is vital to science; a negative result is still a useful result. And we need not be sheltered from uncertain and ambiguous evidence; being left to make up our own minds is science too.

Impartial to the audience, not the evidence

Impartiality is not the same as correctness. The BBC can and should provide time to the different parties in a discussion that the general public is interested in.

This doesn’t mean having to give airtime to any old crackpot view, but if large proportions of the public are, say, worried about vaccinations then it is quite right of the BBC to air those concerns along with objective evaluations of them. The BBC rightly provides a platform for those advocates to present their case to the public, for the public to evaluate. 

The public – everyone – is indeed ignorant and stooopid about most subjects (who has time to evaluate everything?). But being protected from our own folly and inexpertise by filtering what is presented to us leaves us in the hands – and frequently inexpert opinions – of those doing the filtering.

So yes, let’s have links to sources so we can check back and do our own evaluations. Let’s have more entertaining, educational articles and programmes such as those from More or Less and Ben Goldacre. And let’s have more time to hear the cases rather than have them forced into small soundbites.

But let’s not start letting partisan groups decide on our behalf what we should hear about when it comes to science topics. Because that’s really not good scientific practice.

From Stuff and Nonsense & DC’s Improbable Science, although the review started back in March. A cut-down version of this has been sent to the BBC’s feedback email: trust.science@bbc.co.uk

Posted in Bad Journalism, Evidence Based Beliefs, Science

When there is no evidence

Posted by softestpawn on October 13, 2009

or “Now where did I leave my glasses?”

An engineer, a mathematician and a physicist look out of a window of a train passing through the highlands of Scotland, and see a black sheep.

“Ah!”, says the engineer, “Look, they have sheep in Scotland!”.

The physicist looks at it and reflects, “Well, we can only say that there is at least one black sheep in Scotland”.

The mathematician looks at them both in surprise: “That’s not right at all! All we can tell is that one side of one sheep in Scotland is black.”

Ho. Ho.

This is one in a series of posts about evidence and how it does or does not support a claim.

Such pedantry is over the top, but it serves to illustrate a simple point about what we can really infer from a piece of evidence, and its limitations.

It’s easy to decide what to believe when you have clear positive evidence in your hand: the photo of your girlfriend in bed with that sysadmin from finance is fairly firm (heh) evidence of her faithlessness.

It’s not so clear when there is no evidence for something, and interestingly there is more than one way of not having evidence for something.

‘Bounding’ what we don’t know

Those who are old enough will know what it’s like: we lose our reading glasses, or the screwdriver, or pen or mug of tea we had in hand only a few minutes ago.

At this stage, if somebody rather stupidly asks “Well where did you leave them?” we would rightly and angrily reply “I don’t know, if I knew where they were, I wouldn’t have lost them”.

So we don’t know where they are, but we do know some places where they are not. They are not on Mars. And although we may not remember all the rooms we’ve been to since we last remember having them, we tend to remember unusual places; mine are not, for example, in the attic as I know I haven’t been there.

This gives us some limits, some ‘boundaries’, to the area of ignorance.

Reducing the area of ignorance

As we start to look we start to limit these boundaries further.

A quick walk around the usual rooms glancing at the surfaces for example is a good first stage search; it covers a lot of ground for a fairly likely result.

By discounting the places we’ve looked – and by starting with the most likely and easily surveyed places – we reduce the places the glasses could be.
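This boundary-shrinking can be put into numbers. Here is a minimal sketch – with priors and detection rates invented purely for illustration – of how each unsuccessful search moves probability away from the place searched (though not all of it, since a search can miss; more on that below):

```python
# A sketch of shrinking the area of ignorance, with invented numbers.
# Searching a place and finding nothing moves probability mass away
# from it -- but not all of it, because a search can miss.

# Prior beliefs about where the glasses are (must sum to 1).
prior = {"desk": 0.4, "sofa": 0.3, "kitchen": 0.2, "attic": 0.1}
# How likely a search of each place is to spot glasses that ARE there.
detection = {"desk": 0.9, "sofa": 0.6, "kitchen": 0.8, "attic": 0.9}

def search_and_miss(beliefs, place):
    """Update beliefs after searching `place` and finding nothing."""
    # P(nothing found) = P(glasses elsewhere) + P(there, but missed)
    p_nothing = 1 - beliefs[place] * detection[place]
    return {
        loc: p * ((1 - detection[place]) if loc == place else 1) / p_nothing
        for loc, p in beliefs.items()
    }

# First stage: the most likely and most easily surveyed place.
beliefs = search_and_miss(prior, "desk")
print({loc: round(p, 3) for loc, p in beliefs.items()})
# The desk drops from 0.4 to ~0.06; everywhere else becomes
# relatively more likely, and the search moves on.
```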

“Knowing it is not” is not “Not Knowing”

Having thoroughly searched the mostly empty fridge, I know to a high degree of confidence that my glasses are not in there.

I have no proof of that (I may have forgotten to search the bowl of three-week-old leftover gravy), but my memories of looking are ‘evidence of no glasses in the fridge’ (‘evidence of lack’) rather than ‘no evidence of glasses in the fridge’ (‘lack of evidence’).

The latter, though, is still how you might reply to “Are they in the fridge?”, even though it doesn’t capture whether you’ve looked or not.

This causes problems when people want to know if there’s any danger in some treatment or chemical. To be told “There is no evidence of any harm” is useless; it doesn’t tell us if nobody’s checked, or if they’ve had a quick look and everything seems fine, or if they’ve had a really very thorough search that would have turned up any significant harm and found nothing.
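The difference can be made concrete with a little Bayes’ rule. In this sketch – again with invented numbers – the more likely a search would have been to find harm had there been any, the more ‘nothing found’ actually tells us:

```python
# A sketch, with invented numbers, of why "no evidence of harm" depends
# on how hard anyone looked. If a search would almost certainly have
# found harm had it existed, finding nothing is real evidence of lack;
# if nobody checked, it tells us nothing.

def p_harm_given_nothing_found(prior, p_detect_if_harm):
    """P(harm | search found nothing), via Bayes' rule."""
    p_nothing = prior * (1 - p_detect_if_harm) + (1 - prior)
    return prior * (1 - p_detect_if_harm) / p_nothing

prior = 0.10  # illustrative prior belief that the treatment is harmful

for label, power in [("nobody checked", 0.0),
                     ("quick look", 0.3),
                     ("thorough search", 0.95)]:
    print(f"{label:16} P(harm | nothing found) = "
          f"{p_harm_given_nothing_found(prior, power):.3f}")
# nobody checked leaves the prior untouched at 0.100; a thorough
# search that finds nothing drives it down to ~0.006.
```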

What we think we don’t know

So I continue with my exercise in limiting my ignorance, hoping one day to find my glasses so that I can carry on doing what I was doing… whatever that was… it will come to me in a moment… and sometimes we get a bit irrational. How many times, frustrated, have we looked in the same box, under the same small piece of paper that couldn’t possibly hide a pair of glasses?

Similarly our boundary reducing exercise is not ‘certain’; it may be I’ve looked somewhere but not seen them (after all, I’m not wearing my glasses). It may be I’ve looked in an area where they are hidden, and have declared and marked the whole area ‘glasses free’ when in fact it is not.

In more general terms, not finding doesn’t necessarily mean it’s not there: just because all the swans I’ve seen are white, does not mean there are no black swans.

This is where we reach the limits of our understanding of the limits of our ignorance. We rarely properly match the boundaries of what we think we don’t know with what we actually don’t know.

This gap is where my lost glasses still lurk when I give up and use an old pair: in the world of places I haven’t looked well enough, but can’t think of to look.

Theories of what might be

I was slightly too certain above about where my glasses are not, as aliens might have stolen them and taken them to Mars.

And if we return to the railway carriage with the sheep-observing pedants, we might claim that “There are luminous pink sheep in Scotland with legs on one side shorter than the other. Scotsmen hunt them down and turn them into haggis and bagpipes”.

We can make up any silly story we like (“Aliens live in clouds!” “Pixies ate my hamster!” “Magnets healed my cancer!” “Hair loss makes you sexy!”) and some may be accidentally true but it’s no more sensible to assume they are true without evidence than it is to believe in Garibaldi Mountain Shrews.

Not knowing something is no excuse to make up any old thing and then believe it to be true, any more than it is to believe that your glasses are in the kitchen, because you don’t know where they are, and they could be.

This is the ‘out’ for a lot of so-called ‘open minded’ views: “Just because you haven’t seen pixies, you must be close-minded to disbelieve them”.

Pixies might exist, this is true.

But when you consider all the things that might exist, such as invisible baby-eating multi-coloured pixie-swans that live with aliens in clouds, then you can see that believing in any random made up fantasy can be fun but it’s not very practical.

If someone tells you some far-fetched story and says “well, you’ve got no evidence against it, so it could be true couldn’t it?” then the answer is “yes, and aliens are painting your ears”.

“I don’t know”

A straightforward ‘we don’t know’ seems a bit of a cop-out, and the mind abhors a vacuum, but this is no excuse to fill it with speculation and then infer ‘truths’ from it. (It’s fine to speculate and test: perhaps the glasses are in the bathroom? I shall go and look.)

Even if you’ve only ever seen white swans, you can’t be sure that all swans are white. You just might not have seen one that isn’t.

And lack of evidence is not evidence of lack; just because we haven’t seen something doesn’t mean it’s not there.

It’s alright to say we don’t know. It’s alright to say we think things are likely, or unlikely, but we’re not sure. And working out what we don’t know – or what exactly we’re not sure about – tells us a very valuable thing: what we still need to find out in order to know.

Background evidence

The above of course is a bit “simple”. It ignores all the background evidence we hold; there are very few sheep with black on one side and white on the other, so we can happily infer that a sheep is black from seeing the one side of it that is. This is material for another post…

Posted in Evidence Based Beliefs, Science

Deferring to Authority

Posted by softestpawn on August 1, 2009

“Deferring to Authority” is to claim someone’s opinion is valuable because they are an expert in the subject. “Arguing from Authority” is to claim that your opinion is valuable because you are an expert.

This is one in a series of posts about evidence and how it does or does not support a claim. Although this particular one is about opinion, not evidence…

There’s nothing very bad about using an expert’s opinion, but it can be seen as irrelevant in a controversial subject where the evidence itself is being discussed. If you are arguing about whether homeopathy can cure cancer, then you want to look at the facts, not quote people who might have vested interests. Or others who might also have vested interests.

Workable Life

Few of us have the time, inclination, or expertise to carefully check every single thing that we decide. Using expert opinion is a perfectly sensible way to approach life.

There are just a few things to bear in mind:

  • Authority might not be Authority
  • Authority isn’t always expert
  • Authority can be wrong.

Authority might not really be authority

Gillian McKeith is famous for handing out advice on diet, and claims to be an expert on the subject. She even uses the title ‘Dr’ – a title that is restricted in the UK to try to prevent people from claiming a qualification they don’t have. The Advertising Standards Authority was notified, proceedings were pursued, and she has been forced to (mostly) stop using it. She has no training, qualifications or authority for her opinions on poo, carrots or lard-fried chips.

But you’ll notice here I’ve not put any links; this is my opinion. It’s based on expertise gathered from far too many hours on Ben Goldacre’s BadScience web site, where the actions to stop her misuse of the title were first formed by some of the forum regulars. So I am an expert, but you should check it yourself, rather than believe me… Google is only a few clicks away…

Authority might not be an expert

Expertise is often quite narrow. People who work in the ‘environment’ industry are not experts in all of it; neither David Bellamy’s nor David Attenborough’s opinions on global warming are expert opinions. It goes further than that: an expert in one particular field of climate cannot claim expertise in the whole subject.

This specialisation can sometimes be quite surprising. Your General Practice doctor has quite a wide medical expertise, but when it comes to a road accident, you will probably be better off with an amateur St John’s Ambulance volunteer.

Some authority qualifications are so broad as to be meaningless and offer no real authority at all. A “scientist” is generally just somebody who researches something, and is no more intelligent for it than members of many other professions; a “government scientist” in particular is not an expert on everything, any more than you are or I am.

Scientists running in packs (or ‘committees’ as they are sometimes known) are similarly suspect. Pronouncements from Committees of Scientists on things outside their fields of expertise should be treated cautiously.

Authority might just be wrong

This is perhaps too obvious to bother mentioning, but in any controversial field we expect differences of opinions amongst experts; some of them must be wrong.

And sometimes great swathes of experts in a field can be quite spectacularly wrong.

So?

Expert opinion is a perfectly sensible thing to use. My GP’s expertise trumps pretty much anyone else I know on medical matters.

Just remember that sufficient evidence trumps opinion every time.

Further Reading

Argument From Authority (Wikipedia)

Deferring to Authority: Popular Science Communication as a language of control (PDF)

Posted in Evidence Based Beliefs, Metadebates, Politics, Science