SoftestPawn’s Weblog

Oh no, not another blog!

Archive for the ‘Science’ Category

The Tired Duck Dilemma

Posted by softestpawn on October 24, 2014

“If it looks like a duck, sounds like a duck and walks like a duck, it’s probably a duck”

A tired duck looking for a safe place to land watches for peaceful ducks on the ground as a sign that an area does not contain predators that would frighten it.

This is also how ducks are shot: an artificial duck is placed in the open, a duck squawker is squawked, and the lure might be moved gently with a fishing line. Passing tired ducks see peaceful duck is peaceful, and fly into the guns of the hidden hunters.

Cautionary tales like this are used to remind us not to judge by appearance; to avoid letting our prejudices drive our decisions without the right evidence.

But that’s a logical failure too. Tired duck is tired; it has to make a decision now, on the evidence it has, about whether to land or struggle to fly to the next possibly safe place. Waiting for more evidence carries risks too.

So what can tired duck do? It can use its background experience – its models of the world, its prejudices, its heuristics tempered by a bit of careful thought – to tell it things about likelihood. Does peaceful duck look and sound and move very much like a duck? Is it the right time of year for that kind of duck look and duck sound and those duck moves that it’s throwing? That judgement will depend strongly on experience in order to ‘fill in’ the assessed situation from tiny bits of evidence. And if tired duck judges it safe, tries to land and is shot, as it plummets to the ground it can always console itself that if it had gone somewhere else it would only have been faced with the same, tired dilemma.
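Tired duck’s judgement can be sketched as a Bayesian update, where background experience supplies the prior and ‘peaceful duck looks peaceful’ supplies one weak piece of evidence – weak precisely because hunters stage decoys. All the numbers below are invented purely for illustration:

```python
# A toy Bayesian sketch of tired duck's decision (all numbers invented).
prior_safe = 0.7              # background experience: most spots are safe
p_peaceful_if_safe = 0.8      # real peaceful ducks are common at safe spots
p_peaceful_if_trap = 0.5      # hunters also stage peaceful-looking decoys

# Bayes' rule: P(safe | peaceful duck seen)
evidence = (p_peaceful_if_safe * prior_safe
            + p_peaceful_if_trap * (1 - prior_safe))
posterior_safe = p_peaceful_if_safe * prior_safe / evidence

print(f"P(safe | peaceful duck) = {posterior_safe:.2f}")
```

The evidence nudges the belief up only a little, because decoys make ‘peaceful duck’ compatible with a trap; tired duck still has to decide now, on a probability rather than a certainty.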


Posted in Evidence Based Beliefs | Leave a Comment »

Time to Pop Popper’s Popularity

Posted by softestpawn on October 26, 2012

The modern solid sciences are based around the concept of ‘falsifiability’: theories and hypotheses have to be ‘falsifiable’ in order to be scientifically valid.

Unfortunately there is little if any introspection in the solid sciences (that is left to beardy, sandally philosophy-of-science types, and what do they know of real science?), so solid scientists rarely reflect on whether this falsifiability is a good, logical, or even scientific approach to research. And frankly, it’s not. This is embarrassing for me as I too, in common with many others from that tribe, have smugly declared that only through hypothesis testing can we do rigorous science. I wonder now, looking back, at what an arse I must have appeared to those with a more, well, scientific approach to science.

This “falsifiability” approach is essentially derived from Karl Popper’s thoughts on provability, given for example in his book “The Logic of Scientific Discovery”. Researchers (or ‘scientists’ or ‘philosophers of science’ or perhaps just ‘curious people’) in the late 19th and early 20th century were struggling with how theories and statements about the world can be supported or otherwise by facts. Popper’s book should really be seen as part of the discussion around concepts of proof rather than as the conclusion about scientific investigation that it has become in some quarters.

Popper started by ‘demarcating’ research: by categorising disciplines into things he thought were science (astronomy, physics) and those he thought were not (astrology and… psycho-analysis…), and then looking for common themes in each to define what makes research scientific and what makes it not. As an initial poke at a subject, looking for common themes is interesting, but as a scientific method it is appalling. It depends heavily on rather personal decisions about which disciplines are scientific, and results in a circular rather than scientific argument: ‘I think these things are Science, therefore the way they do their work must be scientific, and because they work that way they are therefore objectively Science’. In fact we can’t easily tell whether these disciplines have provided useful theories about the world because of their practices at the time or in spite of them.

His key conclusion from this personal demarcation was that we can only safely deduce statements, not adduce or induce them. That is, we cannot prove that theories are generally true, or even true for any untested range; we can only disprove a theory when we discover facts that contradict it. Therefore good theories are ones that can be contradicted.

That’s logically sound, and all very well, but the point about research is we want to find theories that predict. We want to be able to understand what will (probably) happen if we do something we have not done before. We want some idea of confidence in the untested areas of a theory. Deduction is nearly useless.

The result is the pointless and distracting ‘null hypothesis’ introduced to modern experiments. Because you can only ‘disprove’ theories, when Popperians come up with a new theory they have to invent a ‘null hypothesis’ to have something to disprove in their experiments. Disproving this somehow ‘supports’ the experimenter’s hypothesis. In fact the null hypothesis has no logical value, and its disproof can give a false impression. For example, if you have a theory that a new teaching method can improve students’ reading speed, the ‘null hypothesis’ will be that there is no difference. Now almost any experiment is likely to make some difference to students’ reading speeds, so the null hypothesis is nearly always disproved. (There are better ways of framing this particular experiment, but they all revolve around trying to fit around a null hypothesis that has no value except to mark the work as ‘scientific’. Confidence intervals would be better and more, well, scientific.)
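The reading-speed example can be illustrated with invented numbers and a plain z-test (nothing more sophisticated is needed to make the point): give the teaching method a trivially small real effect and enough students, and the ‘no difference’ null hypothesis is comfortably rejected, while a confidence interval at least reports how small the effect actually is.

```python
import math
import random
import statistics

random.seed(42)

# Invented numbers: the new method adds a trivial 1 word-per-minute to a
# baseline reading speed of 100 wpm (standard deviation 10), and we test
# a large number of students.
control = [random.gauss(100, 10) for _ in range(10000)]
treated = [random.gauss(101, 10) for _ in range(10000)]

diff = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))

# z-test against the null hypothesis of 'no difference': with enough
# students even a trivial effect produces a large z (tiny p-value), so
# the null is 'disproved' almost regardless of whether the effect matters.
z = diff / se

# A 95% confidence interval instead reports the size of the effect,
# which is the thing we actually care about.
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"difference: {diff:.2f} wpm, z = {z:.1f}")
print(f"95% CI: ({ci[0]:.2f}, {ci[1]:.2f}) wpm")
```

The null hypothesis is duly rejected, yet the interval shows an improvement of around one word per minute – statistically ‘significant’, practically negligible.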

What we need (and what is currently done in a ‘common sense’ rather than rigorous way) is a systematic approach to understanding which parts of a theory we can be confident in, to what degrees of confidence, and over what ranges.

I’ll be back to you on that…


Posted in Science | 1 Comment »

Debunking the Debunking Handbook

Posted by softestpawn on January 16, 2012

SkepticalScience has published the Debunking Handbook that is intended to summarise how you show an argument is wrong. Unfortunately… it is itself wrong in some fairly fundamental ways.

The summary at the beginning says:

“Debunking myths is problematic. Unless great care is taken, any effort to debunk misinformation can inadvertently reinforce the very myths one seeks to correct. To avoid these “backfire effects”, an effective debunking requires three major elements. First, the refutation must focus on core facts rather than the myth to avoid the misinformation becoming more familiar. Second, any mention of a myth should be preceded by explicit warnings to notify the reader that the upcoming information is false. Finally, the refutation should include an alternative explanation that accounts for important qualities in the original misinformation”

For a start this is phrased to suggest that you already know what is fact and what is ‘myth’. That is, this is not a way of evaluating an argument on its merits, but a way of selling a specific argument.

It’s not a debunk manual, it’s a ‘spin’ manual.

First: the refutation must focus on core facts rather than the myth

Any non-trivial problem has myriads of facts that can be interpreted in different ways to suggest different conclusions – this is what makes understanding people, the world and the universe so interesting. This guide says you should push the facts that support your views and avoid analysing those that contradict them. This is far from ‘debunking’ an argument; to focus on specific facts and avoid others is spin.

The example given is the claim that (some) climate skeptics claim that the sun has driven recent climate warming. The debunking is supposedly that the sun’s measured total radiation output does not match warming in the last very few decades, and therefore the skeptic claim is wrong. By itself, this is Fine and Good, but ignores the myriad effects that various solar outputs – different particles and radiation wavelengths – have on the atmosphere and so temperatures. The conclusion may well be right, but the text ignores or oversimplifies the facts that support an alternate view and picks those that support the agenda of the so-called ‘debunker’. This is not a debunk, it’s a sell.

Second: any mention of a myth should be preceded by explicit warnings

This is an obvious statement of intent: a claim that the argument is wrong without saying why.

It’s not even a refutation, let alone a debunk.

Finally: the refutation should include an alternative explanation

This is clearly wrong as it has nothing to do with showing how the initial argument is wrong, and can result in missing the point.

If you claim that aliens move clouds around, I can counter with a similarly clueless argument that the clouds are sentient and move themselves. The discussion can then move to how silly it is that clouds are sentient, and so lose the focus from evaluating the original claim about aliens.

At worst, having shown that it is silly to think that clouds are sentient, a (poor) conclusion is that therefore aliens do indeed move clouds around, as the only alternative considered.

The Worldview Backfire Effect

It is ironic that such a publication should talk about how people are biased by their “worldviews and sense of cultural identity” without considering how they might affect the authors.

In particular I enjoyed the phrase “Self-affirmation and framing aren’t about manipulating people” because, clearly, they are (see also, for example, Kahneman and Tversky’s very interesting “Judgment Under Uncertainty”). That’s what is interesting about them.

Removing framing to get at the underlying objective data and arguments is extremely difficult, and will continue to be so while publications like the “Debunking Handbook” encourage others to muddy the waters.

Posted in Evidence Based Beliefs, Metadebates, Science | Leave a Comment »

BBC’s (Im)partial Science Reporting

Posted by softestpawn on September 23, 2010

The BBC is holding another review on its impartiality, this time on how it presents scientific subjects: Science impartiality review – terms of reference (PDF). It has existing guidelines, and it has held such reviews before on how it reports on subjects such as religion and the middle east. This is all Good Stuff, as the BBC’s reputation rests somewhat on the quality and reliability of its reporting, and reliability requires, among other things, impartial reporting. 

One of the many frustrations for medically trained scientists, however, is the airtime and article space given to ‘alternative’ treatments such as homeopathy, reiki, acupuncture and so on. These are treatments that have not passed the objective tests used to identify those that actually work. These tests (double blinded, randomised control groups, etc) are meant to bypass the personal and social prejudices and biases that affect our abilities to properly evaluate effectiveness. They do not always succeed.

The concern is largely that by giving publicity to unproved, useless and sometimes dangerous treatments, the BBC lends them credibility and authority, and so more people may be taken in by them.  By providing BBC publicity to such sites as JABS, people may believe them to be officially sanctioned.

And so these concerned people do not want the BBC to give equal space to these cranks, charlatans and quacks. Such reporting is not truly balanced, they claim. If you’re going to report science, they say, you should report scientific science not pseudoscience.

Scientific science vs pseudoscience

Which all sounds well and good, but the BBC does not have the funds or indeed the expertise to properly evaluate every controversial issue.  For a start, only a few controversies can be tested in the clearly objective way that medical treatments can. 

The BBC may instead decide to defer all evaluation to certain establishment scientists and report only the expert opinions of people with certain qualifications from certain institutions; but this is not scientific. It’s not uncommon for academic research scientists to fall prey to their own or others’ pseudoscience, even in related fields.

Nor does the BBC have the remit to make such evaluations or deferrals. A public controversy is one with many people who believe opposing things, for frequently unscientific reasons, and the BBC’s audience is the public. If the BBC were to fail to report the views of such people and how they were derived, it would be failing to engage with or inform the discussion.

The concerned may argue that such a discussion is not a scientific one: a programme on ghosts has no place under the Science label, for example. Yet the evaluation of sparse evidence is vital to science; a negative result is still a useful result. And we need not be sheltered from uncertain and ambiguous evidence; being left to make up our own minds is science too.

Impartial to the audience, not the evidence

Impartiality is not the same as correctness. The BBC can and should provide time to the different parties in a discussion that the general public is interested in.

This doesn’t mean having to give airtime to any old crackpot view, but if large proportions of the public are, say, worried about vaccinations then it is quite right of the BBC to air those concerns along with objective evaluations of them. The BBC rightly provides a platform for those advocates to present their case to the public, for the public to evaluate. 

The public – everyone – is indeed ignorant and stooopid about most subjects (who has time to evaluate everything?). But being protected from our own folly and inexpertise by filtering what is presented to us leaves us in the hands – and frequently inexpert opinions – of those doing the filtering.

So yes, let’s have links to sources so we can check back and do our own evaluations. Let’s have more entertaining, educational articles and programmes such as those from More-or-Less and Ben Goldacre. And let’s have more time to hear the cases rather than have them forced into small soundbites.

But let’s not start letting partisan groups decide on our behalf what we should hear about when it comes to science topics. Because that’s really not good scientific practice.

From Stuff and Nonsense & DC’s Improbable Science, although the review started back in March. A cutdown version of this has been sent to the BBC’s feedback email.

Posted in Bad Journalism, Evidence Based Beliefs, Science | Leave a Comment »

The Painful Subject of… Interrogative Torture

Posted by softestpawn on July 20, 2010

Some time ago, following memos and documents released from the interrogations at Guantanamo and elsewhere, some writers argued that torture doesn’t work:

and a raft of others.

In this article, long awaited by both readers of this blog, I’m going to ask in a rather Delve-special way:

  1. Does torture work?
  2. Why, then, do some people claim that it does not?
  3. What do these claims and the way they are supported tell us about the way ‘evidence-based policy’ should be examined and tested?

For the purposes of this article, torture is direct physical pain or maiming. Whereas this, including invasion of personal space, isolation and sleep deprivation and this use of psychologists are not. They are certainly unpleasant, but not in the same league as having – or threatening to have – your fingernails torn out. If they are ineffective it might simply be that they are not gruesome enough.

Also, I’m only looking at how it ‘works’ as a direct interrogation technique, rather than whether it ‘works’ as a suitably effective social tool.

While I go over the collected evidence, consider these two situations:

  1. A bloke knocks on your door and tries to persuade you to give him your car keys.
  2. A bloke breaks in through the door, pokes out one of your eyeballs and threatens to tear various new holes through your particularly painfuls until you tell him where your car keys are.

Where Torture Works

There is, I think, no need to give examples of victims under torture confessing to all kinds of things that they may or may not have done. Even the articles above recognise that ‘with torture you can make people say anything’. While this is rightly considered somewhat beside the point, if torture can force people to ‘confess’ to shameful things that would be against their principles, it should not be surprising that it might force people to confess what they know.

All the same, let’s have some evidence.

For fairly obvious ethical reasons there is little in the way of modern, reliable randomised controlled trials on the efficacy of torture. Similarly, many regimes with widespread routine torture are thankfully gone and their records not generally available.

Lack of reliable monitored evidence however is not evidence of lack of effectiveness.  We can start by looking at historical accounts.

There is indirect evidence in the precautions taken by WW2 spies who were routinely going into environments where interrogative torture was likely. Organisation cells were created to prevent single points of failure, and key locations hidden from the staff.  This tells us that people who worked in these environments expected torture to give away information. It can be argued from the safety of this armchair that this is only circumstantial evidence – perhaps that fear was unfounded – and so should be discarded.

When it comes to more direct examples, despite the courage required to resist such terrible action, a stigma is attached to ‘breaking’ under torture.  More frequent – more newsworthy, more heroic – are accounts of resisting.

So where accounts give examples of victims revealing information, the victims are usually not named. Some are named, and are controversial: Rene Hardy, for example, is said to have betrayed Jean Moulin among others.

Some less so include this obituary of Louis Handschuh, where an escapee is captured and tortured and gives up the names of thirteen others. And in this obituary of Andrée Peel, she was “betrayed by a fellow agent who had been arrested by the Gestapo and threatened with the torture of his family”

Accounts of Miguel Enriquez’s discovery and death in Chile commonly attribute it to MIR members talking under torture (and here)

Zoya Kosmodemjanskaja was betrayed by a captured colleague

Von Ruffin was named by another gay man who was tortured.

Succumbing to torture, someone named the entire Daman family

John Ballard betrayed under torture the other members of the conspiracy to place Mary on the English throne

Turning to more ancient examples, the conquistador Pizarro tortured Incas to locate their king Atahualpa

Sinan Pasha captured and tortured – by impaling – Prince Jem’s couriers to force them to reveal their messages, as described by Freeman in “Jem Sultan“.

Rather famously for UK readers, Guy Fawkes was tortured over several days to extract the names of his co-conspirators, leaving him barely able to sign his confessions.

And so on.

None of the above, however, are primary (first-hand) accounts, and apart from Pizarro’s they are poorly verified. More careful work has been done by Darius Rejali, who describes situations where it most definitely worked. Maybe not efficiently, but it’s not easy to directly compare that efficiency with that of any other method (more later).

In practice, however, we can see evidence all around us of how pain – or the threat of pain – causes us to do, say or give things we would rather not. We even have a name for it:

Robbery: Theft with Violence

Not all the violence in robbery is to cause compliance under duress. Sometimes it is to disable the victim. Sometimes to temporarily prevent the victim from interfering – a shove out of the way, a push to the floor. Sometimes it is for the fun of it.

But there are hundreds of thousands of robberies a year in the UK, and plenty of examples where pain or death – or the threat of it – resulted in compliance:

…also held up a newsagent at knifepoint and stole £150 earlier in the month…

…if you shout any more I will break your neck…

…assaulted her while demanding money. He then fled with a three-figure sum of money…

…The men armed with weapons dragged Mr Bowers-Lovett from his bed and demanded he opened a safe. They continued to assault him and his wife before they escaped with a substantial amount of cash…

…two males rushed towards him from behind before threatening him and demanding he hand over his possessions. He handed over his wallet and mobile phone…

…Two men forced their way into the premises and threatened a 64-year-old man with a knife before demanding that he handed over his money. The suspects then fled the scene with a sum of cash…

…he was confronted by a male carrying a knife, who demanded he hand over his money and mobile phone.  The 27-year-old complied…

…five masked men kicked down the front door of his house on Kingscroft Close, Streetly. They beat the victim, demanding the keys to his [car]… Also in the house were the man’s wife and two children…

…two men appeared behind her and pushed her inside, demanding that she open the safe. The men then escaped with a substantial amount of cash…

…Two men approached a 28-year-old man and threatened him with a knife, demanding the keys to the car. They punched him two or three times before driving off in it…

…assaulted him and his 21-year-old house mate. The housemate was then taken to a nearby cash point … by one of the robbers, while the other man was kept at home by the other robber. Both men left the address when they had returned from the cash point…

And that’s just a tiny sample of the terrifying ordeals where the victim complies under duress. Few approach the even more daunting prospect of facing deliberate torture for days, months or years on end.

The occasions when the robbery is resisted are more public, more newsworthy, more rare. While we can call these people heroes (or stubborn, perhaps) that is not a reason to call those who folded cowards. It is normal to succumb.


There are also of course lots of examples of victims resisting and saying nothing, or giving false information.  This appears to be the main argument in the articles linked to at the beginning: a rather extreme ‘test case’ (getting the location of a bomb from a terrorist) is proposed and then used as a representative example. And since the victim may lie, you can’t trust torture, and therefore torture does not work.

And there is some watery reasoning that torture, because it makes victims more suggestible and confused, also makes their testimony less reliable; and again, since it is not completely trustworthy, it is therefore useless.

Indeed, if you put someone on the rack in order to find the location of the bomb, and you hurt him until he tells you stuff, you can’t tell if what he’s told you is true. But then, anyone under any kind of interrogation may be lying to you, or confused, or suggestible. We might as well not bother asking anyone anything. All the same, torture may be about as reliable as any other interrogation but with more shouting, moral turpitude and cleaning up.

And so we hit the nub of the problem with any interrogation: how does a competent interrogator ensure that what he gets is good, useful information, and not just what he wants to hear or what is biased by his own preferences?

It’s a good question. A difficult question. And if you don’t work out an answer soon, we’ll send someone around to beat one out of you.

Improving Reliability

Generally speaking, you check what has been said against what has or can be verified, against previous testimony, or other testimony. Once you’ve got those loops in place, it can be much easier to ‘encourage’ the victim to be truthful. It’s where the ‘ticking bomb’ scenario above falls down – it assumes that if you can’t verify information given, then torture might not work (as indeed any other interrogation might fail) and therefore torture doesn’t work at all.

It seems generally accepted that, when broken, “people will say anything to make the pain stop”, which is of course exactly the idea. They may make stuff up, but a competent interrogator does not take what is said at face value.

If you’re in the business of gathering contacts, for example, false information leads to dead ends. Useful information can lead to more evidence. And if your victim is still in the loop – if you come back to them having checked their information – then this can be used to ‘encourage’ better performance.

And if they know you will hurt them if they get anything wrong once it’s been verified, there’s a very good incentive to tell the Truth in the first place.  An incentive that isn’t there if you’re just chatting over tea and biscuits.

Evidence Based Policy

Evidence based policy is the new religion amongst some people, including myself and at least two of the bloggers linked to at the start. But to use it we need to understand what it actually means. Ben Goldacre, for example, is a great public advocate of it, but even he confuses written reports with ‘sound scientific evidence’.

We need to consider what evidence is valid and how much is sufficient; we need systems for including new evidence so as to adjust and perhaps reverse policy, methods for coping with uncertainty, incompleteness and poor quality, and mitigations for biases in its collection and summary. And we need to remember that sometimes it is irrelevant.

It is difficult to ‘prove a negative’ (such as proving that aliens do not live in clouds, or that torture does not work), and when assembling circumstantial evidence to try to do so, it is vital to consider contradictory evidence and remove duplications under different disguises before drawing conclusions from it. Particularly, it’s important to remember that just because something sometimes doesn’t work, this is no reason to conclude that it doesn’t work sometimes, or even usually. “Smoking kills”, for example.

The supporting evidence given by the writers above is an example of collection bias, of morally-driven analysis: the evidence collection (in this case, largely the blindness to widespread contradictory evidence) is shaped by the wishes of the collector.

Perhaps this bias is because it is a morally fraught subject. Dearly held fundamental principles tend to contradict each other when trying to weigh up the pros and cons of whether, or under what conditions, we as a society should tolerate torture. Some of them do not even compare well: the principles of acting vs not acting, of saving lives vs not being ‘dirtied’. Pretty much any conclusion is imperfect, and would result in pain and loss for some people somewhere.

Wouldn’t it be easy to just bypass these quite horrible comparisons? And gosh, here’s an opportunity. After all, if torture doesn’t work, then there’s no need to have to do all that difficult introspection.

By ‘removing’ the utility argument, we remove the need for any difficult moral argument.

The difficulty of the moral question encourages the selection, ignoring, twisting and shaping of evidence to fit the desired policy rather than the other way around. The intent, possibly, is to show that torture does not work, whether or not you think it’s moral, so therefore there’s no practical reason to use it. Rather than that torture is morally indefensible, whether or not it works.

So Torture Works?

Of course it does. Would you not have given up your car keys? Of course you would. Life and health are more important than a car. But would you have given up your children? That’s a very different motivation.

So torture works sometimes, in the same way that “Smoking kills (sometimes)”. The reverse is not true: “Smoking does not kill” is a difficult claim to support.

And there are (sometimes) more effective methods available. Ordinary persuasion, perhaps via the introduction of alternative views from respectable leader figures, appears to (sometimes) offer good results. Someone who is converted will (sometimes) offer far better cooperation, but, of course, it’s only sometimes possible to convert dedicated foes.

Far more important is that we recognise torture’s every-day efficacy when we take a moral stand against it. Taking defenceless human beings and deliberately inflicting pain on them is a horrific thing to do. Robbery works, and it can tell us that torture works, but it’s not a reason to condone either.

The Moral Argument

Even putting aside the evidence list above, it is simply not good enough to claim that we shouldn’t torture because we have no evidence that it works. It remains a moral argument as follows:

(1) If someone produces evidence that certain methods of torture do work, this is not a reason to take it up as a tool of the community. The reasons not to torture are independent of whether we have thorough proof of its efficacy right now.

(2) There’s a huge number of people who have encountered torture, and as the world population connects up they become closer to us. To claim, loudly and widely in papers and blogs, that ‘we’ believe torture doesn’t work divides the comfortable armchair writers from the people who have ‘been there’. In particular, it shames those who have broken under it, who in my armchair view have no reason to be ashamed. Imagine, if you will, someone who has been through days, weeks, months of such hell and given away the names of friends and family and colleagues; then imagine telling them: “Torture doesn’t work”

(3) Most importantly, this blind-eye attitude cheats our ability to think through our ethics, and our ability to take a proper moral stance:

I oppose torture not because some dodgy armchair reasoning tells us it might not work sometimes.

I oppose it because my armchair principles call it vile.

Posted in Politics, Science | 7 Comments »