
BBC’s (Im)partial Science Reporting

Posted by softestpawn on September 23, 2010

The BBC is holding another review on its impartiality, this time on how it presents scientific subjects: Science impartiality review – terms of reference (PDF). It has existing guidelines, and it has held such reviews before on how it reports on subjects such as religion and the middle east. This is all Good Stuff, as the BBC’s reputation rests somewhat on the quality and reliability of its reporting, and reliability requires, among other things, impartial reporting. 

One of the many frustrations for medically trained scientists, however, is the airtime and article space given to ‘alternative’ treatments such as homeopathy, reiki, acupuncture and so on. These are treatments that have not passed the objective tests used to identify those that actually work. These tests (double-blinded, randomised control groups, etc) are meant to bypass the personal and social prejudices and biases that affect our ability to properly evaluate effectiveness. They do not always succeed.

The concern is largely that by giving publicity to unproved, useless and sometimes dangerous treatments, the BBC lends them credibility and authority, and so more people may be taken in by them.  By providing BBC publicity to such sites as JABS, people may believe them to be officially sanctioned.

And so these concerned people do not want the BBC to give equal space to these cranks, charlatans and quacks. Such reporting is not truly balanced, they claim. If you’re going to report science, they say, you should report scientific science not pseudoscience.

Scientific science vs pseudoscience

Which all sounds well and good, but the BBC does not have the funds or indeed the expertise to properly evaluate every controversial issue.  For a start, only a few controversies can be tested in the clearly objective way that medical treatments can. 

The BBC may instead decide to defer all evaluation to certain establishment scientists and report only the expert opinions of people with certain qualifications from certain institutions; but this is not scientific. It’s not uncommon for academic research scientists to fall prey to their own or others’ pseudoscience, even in related fields.

Nor does the BBC have the remit to make such evaluations or deferrals. A public controversy is one in which many people believe opposing things, for frequently unscientific reasons, and the BBC’s audience is the public. If the BBC were to fail to report the views of such people and how they were derived, it would be failing to engage with or inform the discussion.

The concerned may argue that such a discussion is not a scientific one: a programme on ghosts has no place under the Science label, for example. Yet the evaluation of sparse evidence is vital to science; a negative result is still a useful result. And we need not be sheltered from uncertain and ambiguous evidence; being left to make up our own minds – this, too, is science.

Impartial to the audience, not the evidence

Impartiality is not the same as correctness. The BBC can and should provide time to the different parties in a discussion that the general public is interested in.

This doesn’t mean having to give airtime to any old crackpot view, but if large proportions of the public are, say, worried about vaccinations then it is quite right of the BBC to air those concerns along with objective evaluations of them. The BBC rightly provides a platform for those advocates to present their case to the public, for the public to evaluate. 

The public – everyone – is indeed ignorant and stooopid about most subjects (who has time to evaluate everything?). But being protected from our own folly and inexpertise by filtering what is presented to us leaves us in the hands – and frequently inexpert opinions – of those doing the filtering.

So yes, let’s have links to sources so we can check back and do our own evaluations. Let’s have more entertaining, educational articles and programmes such as those from More or Less and Ben Goldacre. And let’s have more time to hear the cases rather than have them forced into small soundbites.

But let’s not start letting partisan groups decide on our behalf what we should hear about when it comes to science topics. Because that’s really not good scientific practice.

Via Stuff and Nonsense and DC’s Improbable Science, although the review started back in March. A cut-down version of this has been sent to the BBC’s feedback email.

Posted in Bad Journalism, Evidence Based Beliefs, Science

Fool on 4

Posted by softestpawn on October 15, 2009

File on 4 from the BBC has an episode on military procurement: “Gerry Northam asks why it seems so hard to buy the right equipment for our forces.”

Now there are many, many problems with the UK military’s procurement process, and I hope to do a series on it, but this BBC ‘investigation’ completely failed to get past the symptoms, and instead blindly carried on with the short-sightedness that drives some of those problems.

False economies of scale

Early on is the assertion that if all the army had the same vehicle, with the same engines and the same spares, think how much easier that would be to manage.

Which is true only in the most simple of ideal worlds: think how much easier it would be to maintain computers, or showers, or cars, if they were all the same.

It is true, but it misses some major issues: (1) we want the right kit for the job, not some generic not-quite-good-enough-for-anything; (2) circumstances change, and we can’t expect to upgrade everything everywhere all the time; and (3) quite apart from anything else, in practice we have to buy what we can get, and that might mean buying some Jackals, some Panthers and some Mastiffs (or Cougars – what is it with the predatory animals?).

That means we will have different kit in different places, some of it rightly modified by local engineers to fit the local problems. Trying to enforce some centrally dictated and lengthy equipment procurement diverts effort from developing systems to support the real world, and even undermines them.

Soldiers are put at risk because of…

The piece leaned heavily on soldiers who had been killed “due to” some problem with the equipment, leaving interviewees to start from scratch in explaining what is wrong with such a single-minded approach.

Soldiers – like many other professions – work in an inherently risky environment, and any who die can be used as an argument that better armour, better firepower, better gadgetry, better speed, better communication, better transport and so on might have saved their lives, while completely ignoring any of the costs that arrive with those improvements, such as the effect of more armour on a vehicle’s mobility or reliability.

Most decisions made by high-level management are going to result in the deaths of soldiers in a warfighting environment; the costs must be balanced with the effects required. For example, efforts have recently gone into clearing houses by hand, rather than by dropping large explosives on them, thus preserving the property and shelter of local civilians. This results in more immediate risk and so more soldier deaths, but with the intended benefit of longer-term peace. Any reporter criticising it with only “Soldiers die because of this!!!” should be ignored for extreme naivety.

The Vector’s failing wheel hubs

Gerry is quite right to point out the failure points of the Vector (generally the wheel hubs, which failed under the weight of the armour and the ground conditions) but then falls apart when he doesn’t think through his follow-on question: “Did nobody think of it?”, as if hindsight should somehow have been obvious in foresight.

Testing and evaluating the complete drivetrain of a vehicle across a variety of terrains and loads is not a ten-minute job; it takes many months at best. If demands change then the vehicle would need to be re-tested, and might never leave the testing ground.

Which is the main point missed: Gerry compares the failing Vector with some mythical perfect vehicle, not the real-world alternatives of the Snatch Land Rovers (or indeed the fairly bog-standard unarmoured Land Rovers and Bedfords we had). The military bought Vectors because they needed something quickly, and they got it. It wasn’t right, but it was better than what they had, and when the Vectors failed the military resorted to the Land Rovers – until the better Mastiff turned up. A more interesting (but still hopelessly naive) question might be: “Why didn’t they already have the perfect vehicle before we went to war?”

The top-heavy Puma

Again, the complaints are that there is something wrong with the existing kit, and how it should not be in service because it’s not right. Which would mean that more time should be spent developing equipment to a high level of safety, and while that is happening the troops have to wait.

The Mastiff Supply Chain

We get a little closer to interesting supply-chain problems with the Mastiff, which had axle failures; replacements were not available because the manufacturer’s priority was the American army it was also supplying, not the British.

Get me it now. And get me the fixes soon after

Programmes like these increase the political pressure for zero-risk purchases.

We should (in my obviously very humble yet quite right view, ahem) be looking at ways of informing those who need the equipment (the soldiers) about the problems in establishing supply chains, rather than trying to train remote desk-bound procurement bureaucrats in what is required on the front line. With that comes the key requirement that changes can be made: adaptations to the deployed equipment, and new equipment quickly brought in to fill gaps or failures.

Posted in Bad Journalism, Military Procurement