Time to Pop Popper’s Popularity
Posted by softestpawn on October 26, 2012
The modern solid sciences are based around the concept of ‘falsifiability’: that theories and hypotheses have to be ‘falsifiable’ in order to be scientifically valid.
Unfortunately there is little if any introspection in the solid sciences (that is left to beardy, sandally philosophy-of-science types, and what do they know of real science?) so solid scientists rarely reflect upon whether this falsifiability is a good, logical, or even scientific approach to research. And frankly, it’s not. This is embarrassing for me as I too, in common with many others from that tribe, have smugly declared that only through hypothesis testing can we do rigorous science. I wonder now, looking back, at what an arse I must have appeared to those with a more, well, scientific approach to science.
This “falsifiability” approach is essentially derived from Karl Popper’s thoughts on provability, given for example in his book “The Logic of Scientific Discovery”. Researchers (or ‘scientists’ or ‘philosophers of science’ or perhaps just ‘curious people’) in the late 19th and early 20th century were struggling with how theories and statements about the world can, or cannot, be supported by facts. Popper’s book should really be seen as part of the discussion around concepts of proof, rather than as the conclusion about scientific investigation that it has become in some quarters.
Popper started by ‘demarcating’ research: by categorising disciplines into things he thought were science (astronomy, physics) and those he thought were not (astrology and… psycho-analysis…) and then looking for common themes in each to define what makes research scientific and what makes it not. As an initial poke at a subject, looking for common themes is interesting, but as a scientific approach it is appalling. It depends heavily on rather personal decisions about which disciplines are scientific, and results in a circular rather than scientific argument: ‘I think these things are Science, therefore the way they work must be scientific, and because they work that way therefore they are objectively Science’. In fact we can’t easily tell whether these disciplines have provided (or not) useful theories about the world because of their practices at the time or in spite of them.
His key conclusion from this personal demarcation was the statement that we can only logically safely deduce statements, not adduce or induce. That is, we cannot prove that theories are generally true, or even true for any untested range; we can only disprove a theory when we discover facts that contradict it. Therefore good theories are ones that can be contradicted.
That’s logically sound, and all very well, but the point of research is that we want theories that predict. We want to be able to understand what will (probably) happen if we do something we have not done before. We want some idea of confidence in the untested areas of a theory. Deduction is nearly useless for that.
The result is the pointless and distracting ‘null hypothesis’ introduced to modern experiments. Because you can only ‘disprove’ theories, when Popperians come up with a new theory they have to invent a ‘null hypothesis’ to have something to disprove in their experiments. Disproving it somehow ‘supports’ the experimenter’s hypothesis. In fact the null hypothesis has no logical value, and its disproof can give a false impression. For example, if you have a theory that a new teaching method improves students’ reading speed, the ‘null hypothesis’ is that it makes no difference. Almost any intervention is likely to make some difference to students’ reading speeds, so the null hypothesis is nearly always disproved. (There are better ways of framing this particular experiment, but they all revolve around fitting the work to a null hypothesis that has no value except to mark it as ‘scientific’. Confidence intervals would be better and more, well, scientific.)
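To make that concrete, here is a sketch in Python, using only the standard library and entirely invented numbers (the data, the `mean_diff_ci` helper, and the effect size are all hypothetical, not from any real study): with a large enough sample, even a trivial improvement ‘rejects’ the null hypothesis, whereas a confidence interval for the improvement itself tells you how small the effect actually is.

```python
# A sketch of the complaint above, using only the standard library.
# All data are invented: a "new teaching method" that raises reading
# speed by roughly 1 word per minute out of ~200 -- real but trivial.
import math
import random
from statistics import NormalDist

def mean_diff_ci(xs, ys, level=0.95):
    """Normal-approximation confidence interval for mean(xs) - mean(ys)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    se = math.sqrt(vx / nx + vy / ny)          # standard error of the difference
    z = NormalDist().inv_cdf(0.5 + level / 2)  # ~1.96 for a 95% interval
    diff = mx - my
    return diff - z * se, diff + z * se

random.seed(0)
control = [random.gauss(200.0, 10.0) for _ in range(5000)]
treated = [random.gauss(201.0, 10.0) for _ in range(5000)]

lo, hi = mean_diff_ci(treated, control)
# With 5000 students per group, zero falls outside the interval, so the
# null hypothesis ("no difference") is duly rejected -- yet the interval
# itself shows the improvement is on the order of one word per minute.
print(f"95% CI for improvement: ({lo:.2f}, {hi:.2f}) words per minute")
```

The point of the sketch: the ‘reject the null’ verdict is a near-foregone conclusion once the sample is big enough, while the interval carries the information we actually care about, namely how big the effect is and how precisely we know it.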
What we need, and what is currently done in a ‘common sense’ rather than rigorous way, is a systematic approach to understanding which parts of a theory we can be confident in (and with what degrees of confidence) over what ranges.
I’ll be back to you on that…