SoftestPawn’s Weblog

Oh no, not another blog!

Posts Tagged ‘Science’

Time to Pop Popper’s Popularity

Posted by softestpawn on October 26, 2012

The modern solid sciences are based around the concept of ‘falsifiability’: that theories and hypotheses have to be ‘falsifiable’ in order to be scientifically valid.

Unfortunately there is little if any introspection in the solid sciences (that is left to beardy, sandally philosophy-of-science types, and what do they know of real science?), so solid scientists rarely reflect on whether this falsifiability is a good, logical, or even scientific approach to research. And frankly, it’s not. This is embarrassing for me, as I too, in common with many others from that tribe, have smugly declared that only through hypothesis testing can we do rigorous science. I wonder now, looking back, what an arse I must have appeared to those with a more, well, scientific approach to science.

This “falsifiability” approach is essentially derived from Karl Popper’s thoughts on provability, given for example in his book “The Logic of Scientific Discovery”. Researchers (or ‘scientists’ or ‘philosophers of science’ or perhaps just ‘curious people’) in the late 19th and early 20th century were struggling with how theories and statements about the world can be supported or otherwise by facts. Popper’s book should really be seen as part of the discussion around concepts of proof rather than as the conclusion about scientific investigation that it has become in some quarters.

Popper started by ‘demarcating’ research: by categorising disciplines into things he thought were science (astronomy, physics) and those he thought were not (astrology and… psycho-analysis…), and then looking for common themes in each to define what makes research scientific and what does not. As an initial poke at a subject, looking for common themes is interesting, but as a scientific approach it is appalling. It depends heavily on rather personal decisions about which disciplines are scientific, and results in a circular rather than scientific argument: ‘I think these things are Science, therefore the way they do their work must be scientific, and because they work that way they are therefore objectively Science’. In fact we can’t easily tell whether these disciplines have provided (or not) useful theories about the world because of their practices at the time or in spite of them.

His key conclusion from this personal demarcation was the statement that we can only logically safely deduce statements, not adduce or induce. That is, we cannot prove that theories are generally true, or even true for any untested range; we can only disprove a theory when we discover facts that contradict it. Therefore good theories are ones that can be contradicted.

That’s logically sound, and all very well, but the point of research is that we want to find theories that predict. We want to be able to understand what will (probably) happen if we do something we have not done before. We want some idea of confidence in the untested areas of a theory. Deduction is nearly useless for that.

The result is the pointless and distracting ‘null hypothesis’ introduced into modern experiments. Because you can only ‘disprove’ theories, when Popperians come up with a new theory they have to invent a ‘null hypothesis’ to have something to disprove in their experiments. Disproving this somehow ‘supports’ the experimenter’s hypothesis. In fact the null hypothesis has no logical value, and its disproof can give a false impression. For example, if you have a theory that a new teaching method can improve students’ reading speed, the ‘null hypothesis’ will be that there is no difference. Now almost any experiment is likely to make some difference to students’ reading speeds, so the null hypothesis is nearly always disproved. (There are better ways of framing this particular experiment, but they all revolve around trying to fit around a null hypothesis that has no value except to mark the work as ‘scientific’. Confidence intervals would be better and more, well, scientific.)
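To make the contrast concrete, here is a minimal sketch of the confidence-interval framing for the reading-speed example. The data, group sizes and effect size are all invented for illustration; the point is that the output is a range of plausible effect sizes, rather than a yes/no verdict against a strawman null.

```python
import math
import random
import statistics

def mean_ci(control, treated, z=1.96):
    """Approximate 95% confidence interval (normal approximation) for
    the difference in mean reading speed, treated minus control."""
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / len(treated)
                   + statistics.variance(control) / len(control))
    return diff - z * se, diff + z * se

# Invented data: words per minute for two groups of students.
random.seed(0)
control = [random.gauss(200, 20) for _ in range(50)]
treated = [random.gauss(205, 20) for _ in range(50)]  # small real effect

lo, hi = mean_ci(control, treated)
print(f"difference in means is plausibly between {lo:.1f} and {hi:.1f} wpm")
```

A near-zero effect shows up here as an interval hugging zero, which tells you something useful about magnitude; a bare “null hypothesis rejected, p < 0.05” does not.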

What we need, and what is currently done in a ‘common sense’ rather than rigorous way, is a systematic approach to understanding which parts of a theory we can be confident in, to what degree, and over what ranges.

I’ll be back to you on that…

…possibly…


Posted in Science | 1 Comment »

Most Scientists do Science

Posted by softestpawn on May 10, 2010

Following the letter in Science magazine from scientists defending scientists when they should really be defending science, there was this editorial from Brooks Hanson.

Brooks makes a few points far more politely than I do about some of the shortcomings in some disciplines, and this one I think is particularly pertinent:

“The scientific community must recognize that the recent attacks stem in part from its culture and scientists’ behavior”.

But it follows the same odd cultural attitude that permeates much of research: conflating ‘scientists’ with science, lumping researchers in with the body of knowledge that society’s technology and wealth and health rely on.

What is this scientific community? Is he talking about academia here? In which case part of the recent attacks do indeed stem from some attitudes and behaviours of some academics. And when other academics leap to defend such people, calling on vague mythical ‘scientific processes’ (or ‘peer review’, or ‘intellectual scrums’) that are supposed to make academics trustworthy, then yes the tarring brush gets bigger.

But Brooks takes this much further and tries to include everyone. In particular I take exception to this:

Scientists must meet other responsibilities. The ability to collect, model, and analyze huge data sets is one of the great recent advances in science and has made possible our understanding of global impacts. But developing the infrastructure and practices required for handling data, and a commitment to collect it systematically, have lagged. Scientists have struggled to address standardizing, storing, and sharing data, and privacy concerns.

Actually, some scientists (astronomers, medics, gene researchers, taxonomists, seismologists, simulation writers, geologists, say) have long since developed the infrastructures, practices and standards required. In some cases by taking those methods from the body of knowledge (the ‘science’) that exists in the commercial world.

Perhaps Brooks is looking for a way of spreading the blame around, of not picking on particular teams. But it harms the very public perception that he says he’d like to improve, as it lumps all scientists together under one theoretical culture that has been tainted by the actions of a few but doesn’t exist everywhere.

In fact, it may well only exist in a very few remaining places, and it’s not right to pass off poor working practices of some groups onto others without cause, especially if they only happen to share a similar job title.

Posted in Science | Leave a Comment »

Some Scientists Fail At Science. Again

Posted by softestpawn on May 10, 2010

Quite an extraordinary letter (Climate Change and the Integrity of Science) was published in the May 7th issue of the American magazine Science. The extraordinariness of it lies in its incompetent defence of ‘science’ by people supposedly qualified to do it.

I will assume, for the moment, that there is indeed “compelling, comprehensive, and consistent objective evidence” for human-caused global warming, somewhere in the scientific community. That doesn’t affect this letter’s incompetence.

(There are a couple of ordinary not very important mistakes to snigger at: such as that the authors felt that a letter to a subscription-only magazine was somehow ‘open’ (although it is now public). Or that the associated picture of a lonely polar bear was photo-shopped; the marooned polar bear as a symbol of catastrophic global warming may well backfire, but they look cute. From a distance).

Authority

The letter is given with 255 signatories, and this is supposed to mean something. It is not, in other words, supposed to stand on its own. It is an ‘argument from authority’: there are lots of scientists, they know stuff about science, we should believe them.

But of course the recent publicity tells us that we can’t always trust scientists, so this letter seems a little redundant. They have missed the point.

They also have no authority on the subject; few if any are climatologists, and this makes them about as competent to pronounce on it as you or I or anyone else. Just because someone is employed by a university does not make them scientific. Or knowledgeable. Or, it seems, familiar with some basic good working practices. Or a reasoned argument…

Certainly it’s not certain

The authors say the public should not wait for certainty, because certainty is impossible, which is all fine by itself. But it’s also known as a ‘strawman’: you set up an argument that has not been made against you, demolish it, and claim that therefore you are right. Few if any skeptics expect or ask for certainty.

What is missing is any acknowledgement of what uncertainty means, thus demonstrating that it is the authors who haven’t understood the problem. Somehow, by saying that we should not wait for certainty, they conclude that we should take expensive action because there is an unspecified ‘dangerous risk’. This argument, that lack of proof somehow means proof that we should act, is good Precautionary Principle eco-mentalism, but it’s not scientific.

The Scientific Process

We see, again, the mythical ‘scientific process’ that is somehow self-correcting; in fact, the authors claim it is designed to find and correct mistakes: the intellectual scrum of adversarial, open, honest and technical argument provides magical automatic quality assurance. Since the ‘political assaults’ the authors complain about are largely to do with enforcing openness and honesty, this comes across as a bit… ignorant.

Without additional assurance techniques (such as blind tests), for the ‘scrum’ to work the scientists must be honest and intellectually rigorous, and people are pretty rubbish at this, climatologists being no exception. The scrum, by itself, has about as much merit for developing and aggregating ideas as my local pub. Which is to say that it’s not completely useless, but it’s certainly not very good.

And the main public concern is not that climatologists need better PR, or better protection from evil vested interests, but that they need to demonstrate that the processes they use really do find and correct mistakes.

Facts

Another extraordinary failure in appreciating how research fits into the real world is shown in claims that once things have been “deeply tested”, “questioned” and “examined” then they become ‘facts’.

There’s not even a recognition that this is not a discrete either/or event: quite how much “questioning”, for example, gives you confidence in the ‘factiness’ of something, and how far along this road the various aspects of climate science have reached.

Instead they point at some commonly held theories (big bang, evolution) that have gone through some quite remarkable changes, and are by no means as fixed as climatology would need to be ‘compelling’ evidence of impending fossil-fuelled catastrophe.

The Vested Adversarial

The circled-wagons mentality of a community suddenly under direct and personal public view seems to be driving a conspiracy mentality: that criticism must be driven by ‘vested interests’ and ‘dogma’, as if this somehow made the criticism invalid.

The adversarial approach that is supposed to make the scientific process work shouldn’t be applied, it seems, if someone has a reason to want to be an adversary. Whereas, as any fule kno, thrashing out an idea properly needs to be done with people who disagree strongly and have a motivation to do so, not with your friends or with people who would benefit from it.

It is indeed worrying to the advocates if the ‘science’ is so sensitive to a few mostly retired enthusiasts. This is the science that is supposed to be robust in the intellectual scrum of attacks and criticisms by fully qualified, informed and trained researchers, and yet needs protecting from ‘special interests’.

That Alternative Thing Again

Another naughty unscientific assertion is that adversaries should provide alternative explanations if they are to challenge the consensus dogma. This is only even sort of true when the dogma has been demonstrated in the first place.

Making stuff up and then challenging people to disprove it has no place in applied science, as you would know if you considered the little pixies painting the backs of your ears. No, really, they are. Prove they haven’t or it must be true.

Or clouds, you know, they move about because aliens use them as mobile homes. No, really, they do. Do you need theories about wind and water vapour condensations to challenge that? No, you don’t.

The authors claim that major ideas can and will be challenged, because fame and, well, just fame await the successful challenger. This ignores all the ordinary hurdles that limit challengers to members of very small communities: funding, data access, resources, social networks, staff. It also implies that because scientists apparently eagerly court controversy, and since anyone who could show failures in theories would have done so, they must therefore be right in near enough their entirety to be ‘comprehensive’. Which isn’t true, rational or even logical, captain.

Errors

The authors claim that when errors are pointed out in, say, the IPCC reports, corrections are made. Which they are, sometimes. Corrections to the famous Himalayan glacier melt cock-up were resisted, despite being straightforward to check. The well-funded IPCC chair Pachauri didn’t just challenge the correction, he deliberately insulted the researcher.

If it came from the ‘scientific process’, then the ‘vested interest’ skeptics that pushed and publicised that error are part of that process – which of course, they are. We shouldn’t get too excited about a few mistakes, but we also shouldn’t get too excited by the motivations of the people pointing out errors, if we’re at all serious about the science.

And…. action!

The conclusions – the call to action – claim that society has two options: to “hide our heads” or “act… to reduce the threat”.

Which has the intellectual merit of a mouldy damp teabag. There are of course a whole range of options, and most of them are continuous distributions of ‘how much’ we will do of various interrelated activities, not discrete ‘either/ors’.

The hypocrisy

By pretending that motivations are special interests or dogmatic, you can ignore the evidence presented of mistakes or wrong doing. This is known as ‘denial’.

It’s also the very “innuendo”, “guilt by association” and “outright lies”, that the authors complain about. Just because, for example, someone gave a talk at an organisation that once received some funds from an oil company, does not mean they are wrong. This is known as ‘smearing’.

The authors talk of “McCarthy-like threats of prosecution” when the recent spate of public prosecutions and legal wrangling has been based on the poor scientific work practices of those researchers. No examples, and no evidence, are given of unfair prosecutions. The authors want immunity, it seems, from the law, without providing any demonstrated duty of care in return. This is known as ‘distraction’.

And it isn’t a patch on attempts to prosecute people for not believing them, or to legally eradicate climate skepticism. As with using the term ‘denier’, or pointing at funding routes, or complaining of vested interests, or casting innuendo, these attempts to criminalise challengers are backfiring as the skeptics turn to the law to enforce good practice. This is known as ‘just deserts’.

Posted in Science | 1 Comment »

Barriers to Open Research

Posted by softestpawn on December 7, 2009

One of the many so-called ‘requirements’ of good scientific research is openness, and the mythical welcoming of criticism that will result.

Humans don’t like doing this and tend to have to be forced to, through rigorous process, reams of paperwork and checks, and evil picky enforcers who come around and badger you to follow the process that not only means you have cleaned out the rotating cell-nutrienting widget breeder, but that shows that you have done so.

Full disclosure is a basic requirement of openness. Such disclosure is not necessarily public; it just means you have to show your working – all your working – so that other people can come along and check it.

Importantly, it also means showing it to people who will criticise it. That’s the point. If you just show it to your mates, there’s all sorts of social conventions that get in the way of proper criticism.

You need to show it to your rivals. To those who really will take it apart piece by piece. Otherwise you’re not really subjecting it to proper scrutiny. You ready for that? You want to do it? eh? eh? Then you’re a Proper Scientist, doing Proper Science. You Urban Spaceman you.

Thoroughness takes time

Because by making it ready for your competition to look at, you will run it through every possible check you can think of, that your mates can think of, your colleagues, boss, drinking companions, any handy six year old child. That’s a lot of work. It’s effort that takes you away from ‘real’ research. And it’s not fun – who likes writing up?

The Reputation

And you might even find stuff that’s wrong, with all the implications that has for your reputation and career and funding, let alone the extra work to fix it. It’s so much easier to just publish the results and ‘lose’ the rest of it… maybe no-one will notice… …perhaps for years …maybe not until you move on.

The Confusion

And besides which, if you publish it people might misread it. Misrepresent it. Those who aren’t experts like you might get ‘confused’ about your conclusions. They might ‘confuse’ others.

As an aside, I do find the bandying about of ‘misinformation’ and ‘confusion’ by the more extreme climatologists quite ironic. All information can of course also be misinformation, and any complex subject is likely to be confusing. That they feel able to declare what is information and what is clear, while denying open scrutiny, is hubris not expertise.

It’s spectacularly ironic given the repeated use of the term ‘consensus’: a piece of “misinformation”, as it’s badly defined and not properly measured, and “confusing”, as it’s not even relevant.

The Education

A mostly sensible comment via stoat’s blog brings out another problem with publishing more than you have to: the more you publish, and the more publicly, the wider the potential audience of less expert people who will want to poke at your data and ask about it.

Though to use that as an excuse to deny FOI requests for data and code, rather than lessons, is ‘misleading’ to put it mildly.

And to complain that you’re receiving lots of requests for the evidence for something you claim is globally vital for the survival of the human race, seems a little, well, disconnected.

The Conclusion

So there are no real incentives to be open and thorough, and plenty against it. In some industries the long term benefits (ie, showing you’ve got it right) have been recognised and are forced onto unwilling staff, to a greater or lesser extent.

We have seen the effects of not having this imposed on the CRU team; the comments in Harry’s Readme (if it’s real) demonstrate quite how poorly the work is understood even internally. If these had to be prepared for external release, just think of all the extra work required to check everything rigorously. Of course, if it had already been openly published (even if not publicly), we wouldn’t see comments like Harry’s. Poor chap.

So what does this tell us about other climate research? Nothing really. That’s the point.

Posted in Environmentalism, Global Warming, Science | 3 Comments »

Humans Don’t Do Science

Posted by softestpawn on September 27, 2009

“Science”, we are frequently and rightly told, relies on being open about what it is doing and how, on welcoming informed criticism, on being willing to drop discredited ideas, on experimenting to test theories to try and break them, and generally progresses by proving existing ideas are not (quite) correct.

(This assumes a modern somewhat subverted meaning of ‘science’ which will purple the pedants, but it will do for now. As will this:)

‘Scientists’ are those who do the research that brings us more and better science.

The implication, then, is that as science needs the above to work, and scientists do science, scientists must therefore be open, willing to drop their concepts when discredited, welcoming of informed criticism, and so on.

Wot tosh.

Humans eh?

Scientists are human, and so are as selfish, greedy, proud, sociable, sensitive, prejudiced, noble, dislikable, charming, arrogant when given half a chance, and generally as emotionally involved as other humans. And some scientists seem to be unaware that this breaks the requirements to ‘do science well’.

It should surprise nobody that people get emotionally involved in their work, especially if it requires a lot of effort, some specialised skills, and the results look good and are valued. This applies to most of us, and it applies just as well to a scientist who has developed a respected theory. Nobody welcomes criticism of work they are proud of.

Reputations are based on theories and ideas too, not directly on rigour in the workplace. Newton is remembered for his observations of motion (and a mythical accident for discovering them), not because of his work practices.

Similarly, sometimes a huuuge amount of time, effort, money and reputation is invested in developing certain concepts, and few people can be objective when assessing their own life’s work. Skills and knowledge are accumulated and not willingly abandoned. Dark matter, neutrinos, the search for the Higgs Boson, for example, are all current research programmes that might turn out to be a complete waste of time, but there is a tremendous momentum in pursuing those particular concepts.

And so sometimes there are quite large communities of people that are emotionally invested in certain concepts. Since funds are often limited, these communities can be quite large proportions of the overall field (The CERN experiments suck up quite a lot of the physics community’s funds). If we ask certain slices of the research community what the ‘consensus’ is on a topic, the results are biased by the various investments of these communities.

Iterative Steps

Of course the key here is that we are looking at research, where we are investigating things we don’t know very well. As soon as we run a proper experiment to test the theory, the people-yness of those involved becomes nearly irrelevant.

In the meantime we can perhaps rely on the ‘iterative’ nature of science; that we can count on the overall continual reviews to eventually correct mistakes and improve on theories. This, though, is not a set of incremental improvements, where we gradually work our way closer to the ‘truth’ in the manner of many mathematical iterations. Some models have to be completely abandoned, not just improved on.

Such a messy approach is perhaps fine for general research, but is insufficient if we need to act upon it. In some cases (such as education, climate change, materials to build bridges, buildings and airplanes) we need to assess what we actually know, and know now, from amongst all the people-y assumptions and reputations and opinions.

Being Scientific

One of the key aspects of really scientific disciplines, including ones outwith research, is that we remove the people-yness of those involved as much as possible.

For example, we record all the data and methods (“audit”) because we expect to make mistakes, and so we need to be able to go back and check every step.

We let others have access to this (“full disclosure“) because, again, we expect to make mistakes, and so we need to let other people check every step. It also helps to compensate for some of the ordinary people problems; if you know the details of your work are going to be scrutinised by all and sundry, you tend to be much more careful with that work, and much more careful with drawing conclusions from it.

We run formal assessment reviews to check methods, data sources, and citations.

Where possible, experiments are designed to remove ordinary personal biases, such as the ‘double blind trials’ used to test whether medical treatments work.
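One mechanical piece of a double-blind design, concealed allocation, can be sketched as follows. The function name and labels are my own invention; the idea is simply that whoever analyses the results sees only opaque group labels, and the key linking label to treatment stays sealed until the analysis is finished.

```python
import random

def blind_assign(participant_ids, seed=42):
    """Randomly assign participants to opaque groups 'A'/'B' and
    return (assignments, key), where key maps each label to
    'treatment' or 'placebo'. The key is kept sealed from the
    analysts until the analysis is complete."""
    rng = random.Random(seed)
    groups = ["treatment", "placebo"]
    first = groups[rng.randrange(2)]
    key = {"A": first, "B": [g for g in groups if g != first][0]}
    assignments = {pid: rng.choice(["A", "B"]) for pid in participant_ids}
    return assignments, key

assignments, key = blind_assign(range(10))
# Analysts work only with the 'A'/'B' labels in `assignments`;
# `key` is revealed last, so personal bias can't steer the results.
```

The same separation of roles is what the trial paperwork and audit trail then have to demonstrate was actually maintained.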

These extra tasks are tracked and checked and recorded, to make sure they are done.

Except that we don’t do all of these very well. It’s expensive, and it diverts effort from the task (even if it improves the quality of knowledge overall), and so we tend to bypass them when we can. They only tend to be properly implemented where we need very, very high levels of confidence, such as in medicine and in bridges, buildings and airplanes, and where we are willing to pay for it.

It’s an odd leftover from the past that we don’t require the same rigour for informing public policy, such as in education, re-employment, and major environmental impacts.

And when scientists from some of the more ‘careless’ disciplines hold forth, we ought to consider carefully whether their views have been as openly, rigorously and systematically checked as they imply – or even believe themselves.

(“The Golem: What you should know about science” is a much more thorough take on the above)

Posted in Politics, Science | Leave a Comment »