It’s not the PR, It’s the Practice, People
Posted by softestpawn on March 17, 2010
Every now and then, news reaches the mainstream media that scientists somewhere have got something wrong, or have behaved in a manner not quite as objective and open as they profess. Sometimes for good reasons, sometimes not.
There follows a public huff about scientists not knowing everything, about how they can’t be trusted. Scientists then discuss how best to ‘fix’ the problems of communicating the uncertainties of science to the public against so-called hostile anti-sciencers while retaining objectivity and authority.
These starting assumptions result in discussions that are founded on very thin air. For a start:
There is no such thing as ‘scientists’
There is a tendency to label a wide variety of communities of researchers as ‘scientists’, and the body of knowledge that they produce (and only they produce) as ‘science’.
We all have some rough idea of what a scientist is, but when we look for real common factors we find the definition either gets so vague that it includes people we wouldn’t normally call scientists, or so specific that it excludes people we would.
To be pedantic, we could take the definition of science – either the thing (a systematic body of knowledge) or the practice (a way of reliably acquiring that knowledge) – and we can include huge swathes of human activity in that. Understanding what is happening in Eastenders counts, as does how to effectively light theatre shows, or what happens around black holes.
Even when we look at what is commonly held as the realm of scientists – academic research in the ‘hard’ sciences of physics and chemistry and biology and so on – we find huge variety in how science is done. Compare the somewhat haphazard work of astronomers discovering the secrets of the universe with the strictly supervised and rigorous controls imposed on those discovering the effectiveness of medical treatments. Compare the apparent working practices of climate scientists with those of flight scientists, who have to ensure that the new alloys and structures in an airframe will really hold together in all weathers.
The overlapping of expertise in any modern field is far too broad for there to be any single skillset for the experts involved: astronomy, for example, needs not just ‘ordinary’ astrophysics, but statistical skills to analyse the data, software skills to build the tools to do so, archiving skills to manage petabyte data sets, and mechanical engineering skills to commission the instruments. Even within astronomy there is radio astronomy, X-ray, optical, and so on, all with very different knowledge sets. Then there are amateur astronomers, who offer a very different set of skills outwith the usual academic environment.
It’s a group effort, and the skillsets sometimes fall into what might be seen as professional roles, rather than ‘scientific’.
When evaluating expert opinion we should be clear about quite where the expertise lies. ‘Scientist’ is a title, not a role.
So we have experts from various research fields trying to communicate their findings to the public. But who is this ‘public’ anyway?
There is no such thing as ‘the public’
The assumption that there is a great unwashed of mostly ignorant people who have to be convinced by experts doesn’t properly… …wash.
Amongst the people the researchers are trying to reach are other researchers, for example. Anyone trying to communicate the fascinations of neutron stars, or the structure of fungal spores, will be dealing with nuclear physicists, with biologists, with operations researchers, with historians, with theoretical statisticians, all of whom are also familiar with the academic work environment.
The audience includes people who have to be extremely rigorous about their work. Bridge designers, airframe engineers and supermarket supply chain organisers are all familiar with bringing new theories to the end users in a thoroughly tested and ‘proved’ environment.
It includes people who are quite capable of thinking critically. Commercial statisticians, professional software engineers, private practice psychologists and police detectives all have to apply such practices to their work, whether or not the rest of us do in general.
Even the semi-mythical unwashed ignorant masses have come across uncertainty and caveats. Anyone who has bet on football pools is familiar with them, whether or not they understand them properly.
Distrust in ‘scientists’ comes partly from the insistence of some researchers on talking about ‘science’ as a single large body of knowledge, discovered by and propagated through some large community of ‘scientists’. Thus any clear scientific failure by people who call themselves ‘scientists’ taints all those others who similarly call themselves ‘scientists’.
But more significantly, the audience is quite aware of the way in which many fields do their research, and that way is simply not up to scratch.
The review process is explained, earnestly or condescendingly, as the shining light of scientific progress, when any reasonable cynic sees it as essentially a large rumour mill. ‘Expert committees’ made up of the very people whose work they are supposed to be reviewing are pointedly laughable. There are claims that ‘science’ is done openly, in an unbiased and criticism-welcoming scrum of argument, when it is plainly not. And there is some faulty reporting, via the history of science, that ‘science’ is somehow incremental, that we only improve on previous models and do not go down long, dark, blind alleys.
A “Public Relations” exercise doesn’t fix the basic issues. It could possibly be used to try and cover them up, but few people are really interested in that – there is a genuine general intent to improve the dissemination of knowledge.
We have mechanisms for trust. We trust bridges, buildings, airplanes. We have even understood for a long, long time how to analyse evidence in sparse data sets, and with really quite morally fraught decisions to be made, in the legal courts.
These mechanisms are lacking in some fields for very good reasons: they take effort, and sometimes we don’t need that kind of reliability. In astrophysics, for example, the results don’t really affect us directly.
Communication is not about ‘scientists’ talking to ‘the public’, but rather experts talking to an audience that includes other experts. Some of those may in fact be more expert in some subfield than the expert holding forth. If the conclusions are based on shoddy software for example, then an ordinary lowly commercial software engineer is well placed to criticise those conclusions.
More importantly, we need to realise that it’s not just about the presentation, it’s about how the science is done. Trust is earned. The way the work has been done must not only be right, but be shown to be right; for that, researchers who feel that trust is important could do worse than look at human activities where trust is vital and where PR spin is almost non-existent. Scientific expertise is already brought to a public in ways that it trusts, quietly and without fuss.
In contrast, circling the wagons and ‘protecting’ the work from public scrutiny, in order to defend it against hostile inexpert criticism, has widespread effects when it is occasionally exposed as dodgy, no matter whether the conclusions remain valid.
To reiterate, experts who want to be trusted must not just do the work that brings them expertise, but do extra work to show that they are objective in their assessments, and this includes showing how they do the work, in detail. This exposes that work to all sorts of criticism, much of it ignorant and much of it unwarranted. That means the work needs to be even more thorough and complete. And this too is a Good Thing for that work.
This is increasingly recognised amongst those who call themselves scientists, and implementing this demonstrated thoroughness will earn trust that few PR campaigns or ‘communication engagements’ can equal.
Because it will be trust in the work under discussion, not transient manufactured trust in the semi-mythical field of ‘science’.