Humans Don’t Do Science
Posted by softestpawn on September 27, 2009
“Science”, we are frequently and rightly told, relies on being open about what it is doing and how, on welcoming informed criticism, on being willing to drop discredited ideas, and on experimenting to test theories by trying to break them; it generally progresses by proving that existing ideas are not (quite) correct.
(This assumes a modern somewhat subverted meaning of ‘science’ which will purple the pedants, but it will do for now. As will this:)
‘Scientists’ are those who do the research that brings us more and better science.
The implication, then, is that since science needs the above to work, and scientists do science, scientists must therefore be open, willing to drop their concepts when discredited, welcoming of informed criticism, and so on.
Scientists are human, and so are as selfish, greedy, proud, sociable, sensitive, prejudiced, noble, dislikable, charming, arrogant when given half a chance, and generally as emotionally involved as other humans. And some scientists seem to be unaware that this breaks the requirements to ‘do science well’.
It should surprise nobody that people get emotionally involved in their work, especially if it requires a lot of effort, some specialised skills, and the results look good and are valued. This applies to most of us, and it applies just as well to a scientist who has developed a respected theory. Nobody welcomes criticism of work they are proud of.
Reputations are based on theories and ideas too, not directly on rigour in the workplace. Newton is remembered for his laws of motion and gravitation (and a mythical falling apple for discovering the latter), not for his work practices.
Similarly, a huge amount of time, effort, money and reputation is sometimes invested in developing certain concepts, and few people can be objective when assessing their own life’s work. Skills and knowledge are accumulated and not willingly abandoned. Dark matter, neutrinos and the search for the Higgs boson, for example, are all current research programmes that might turn out to be a complete waste of time, but there is tremendous momentum behind pursuing those particular concepts.
And so there are sometimes quite large communities of people emotionally invested in certain concepts. Since funds are often limited, these communities can make up quite a large proportion of the overall field (the CERN experiments absorb a good share of the physics community’s funds). If we ask certain slices of the research community what the ‘consensus’ is on a topic, the results are biased by the various investments of these communities.
Of course, the key here is that we are looking at research, where we are investigating things we don’t know very well. As soon as we run a proper experiment to test a theory, the people-yness of those involved becomes nearly irrelevant.
In the meantime we can perhaps rely on the ‘iterative’ nature of science: we can count on continual review to eventually correct mistakes and improve on theories. This, though, is not a series of incremental improvements in which we gradually work our way closer to the ‘truth’, in the manner of a converging mathematical iteration. Some models have to be completely abandoned, not merely improved on.
Such a messy approach is perhaps fine for general research, but is insufficient if we need to act upon it. In some cases (such as education, climate change, materials to build bridges, buildings and airplanes) we need to assess what we actually know, and know now, from amongst all the people-y assumptions and reputations and opinions.
One of the key aspects of really scientific disciplines, including ones outwith research, is that we remove the people-yness of those involved as much as possible.
For example, we record all the data and methods (“audit”) because we expect to make mistakes, and so we need to be able to go back and check every step.
We let others have access to this (“full disclosure”) because, again, we expect to make mistakes, and so we need to let other people check every step. It also helps compensate for some of the ordinary people problems: if you know the details of your work are going to be scrutinised by all and sundry, you tend to be much more careful with that work, and much more careful in drawing conclusions from it.
We run formal assessment reviews to check methods, data sources, and citations.
Where possible, experiments are designed to remove ordinary personal biases, such as the ‘double blind trials’ used to test whether medical treatments work.
These extra tasks are tracked and checked and recorded, to make sure they are done.
Except that we don’t do even these very well. They are expensive, and they divert effort from the task at hand (even if they improve the quality of knowledge overall), and so we tend to bypass them when we can. They tend to be properly implemented only where we need very high levels of confidence and are willing to pay for it, such as in medicine and in bridges, buildings and airplanes.
It is an odd leftover from the past that we do not require the same rigour for informing public policy, such as in education, re-employment, and major environmental impacts.
And when scientists from some of the more ‘careless’ disciplines hold forth, we ought to consider carefully whether their views have been as openly, rigorously and systematically checked as they imply – or even believe themselves.
(“The Golem: What You Should Know About Science” is a much more thorough take on the above.)