The main thrust of the Precautionary Principle is that we should not do things with uncertain outcomes, just in case they go wrong. This ‘analysis’ is based on the worst possible imagined outcome, and the burden of proof for complete safety lies with the do-er. It includes attempts to make “look before you leap” a legal requirement, where the look has to be thorough and complete, with no counter-evaluation of the consequences of not leaping.
Essentially, we set up a question where we’re not sure about the outcome (and we’re rarely sure about the outcome of most things) such as “Should we release dihydrogen monoxide?” and build a little table like this:
| | It does nothing | It’ll destroy the planet |
|---|---|---|
| Stop releasing it | Nothing happens | We’re saved |
| Release it | Fine | We all die |
Then we add up the consequences of releasing it or not. And every time obviously the ‘doing it’ is dangerous (we might all die) and the ‘stopping it’ is safer. Right now. Just in case. And so we get The Most Terrifying Video Ever and the like.
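That table is really just a worst-case (“minimax”) decision rule in disguise: score each action by its worst imagined outcome, then pick the action whose worst case is least bad. A minimal sketch, with payoff numbers that are purely illustrative inventions of mine rather than anything from the article:

```python
# Worst-case decision rule applied to the table above.
# The numeric payoffs are illustrative: 0 = nothing happens,
# -100 = catastrophe, +100 = disaster averted.
payoffs = {
    "stop releasing": {"it does nothing": 0, "it destroys the planet": +100},
    "release it":     {"it does nothing": 0, "it destroys the planet": -100},
}

def worst_case(action):
    # The Precautionary Principle only looks at the worst column.
    return min(payoffs[action].values())

best = max(payoffs, key=worst_case)
print(best)  # → "stop releasing"
```

Notice that the probabilities of the outcomes never appear anywhere: as long as one cell in the ‘release it’ row reads “we all die”, the rule says stop, regardless of how likely that cell is.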
Which might sound fairly reasonable, as long as you don’t try thinking any further. It’s based around avoiding bad consequences and so, surely, makes the world a better place.
It’s a very ‘powerful’ idea amongst environmentalists (such as SEHN, JNCC and of course Greenpeace) because it requires taking action now, immediately (“we cannot wait until the evidence is in…”), to stop any unwanted activity, which gives crusaders a cause. It’s about imagining the worst, which is thrilling. It’s about applying fantasy rather than evidence, about emotion rather than evaluation.
Mostly, it’s about stopping people from doing things which might be bad; it makes you righteous and valuable. It’s an excuse to interfere from a hastily built moral high ground without any of the support that science would normally bring. And if no one looks too closely at the thin air it’s built on, the moral high ground is a good place to put your artillery:
Risk Analysis? No thank you, it’s too hard.
But it’s just an oversimplified risk register, with the question carefully phrased and the outcomes carefully selected to produce the required answer, and it’s very, very broken.
It’s far too simplistic. Not doing things is rarely free: not using pesticides makes food more expensive. Banning genetically modified crops denies farmers – including poor subsistence farmers – more productive yields. Not burning coal makes energy more expensive for all, with quite unpleasant knock-on effects through any economy. Locking up people because they might harm others is, I hope, fairly obviously bad. And demanding total safety – where there are no possible unpleasant effects – is impractical.
There are dangerous consequences to many of the things we do, and we have to deal with that. We have tools to analyse them – environmental impact assessments being a broad term for some of them. But these are complicated and tiresome, and the Precautionary Principle is very attractive to those who need a cause to fight without the hassle of working through detail or complexity.
It’s Broken. Obviously so.
Consider phrasing the question the opposite way around. Remember we’re not sure what the outcome is; so what if not releasing dihydrogen monoxide into the atmosphere has a bad effect? After all, it may be preventing something else:
| | It does nothing | It’s preventing disaster |
|---|---|---|
| Stopping | Nothing happens | We all die |
| Release it | Fine | We’re saved |
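The same worst-case rule from before, fed this reversed framing, now gives the opposite answer. A minimal sketch, with payoff numbers I’ve invented purely for illustration:

```python
# The identical worst-case decision rule, but with the question
# phrased the other way round: perhaps the release is preventing
# something worse. Payoffs are illustrative: 0 = nothing happens,
# -100 = catastrophe, +100 = disaster averted.
payoffs = {
    "stop releasing": {"it does nothing": 0, "it was preventing disaster": -100},
    "release it":     {"it does nothing": 0, "it was preventing disaster": +100},
}

def worst_case(action):
    return min(payoffs[action].values())

best = max(payoffs, key=worst_case)
print(best)  # → "release it"
```

Same rule, same uncertainty, opposite conclusion – the answer is decided entirely by which imagined outcomes you choose to put in the columns.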
The burden of proof also lies with those who would stop an activity, if doing so has bad consequences.
“We cannot wait for all the evidence”; indeed, all the evidence is rarely available. But we may need to gather sufficient evidence before we make things worse.
Try applying it to whether you should get out of bed tomorrow morning: you shouldn’t, because all manner of bad things might happen if you do. It’s not clear what the risks are. But then apply it to whether you should stay in bed all day tomorrow: you shouldn’t, because all manner of bad things might happen if you do. It’s not clear what those risks are.
In fact, it’s so broken that if you apply it to itself it can, if phrased correctly, tell you not to:
| | PP is useless | PP will give you the wrong answer |
|---|---|---|
| Use the PP | Nothing | Wrong answer! |
| Don’t use it | Fine, time saved | Nothing |
The burden of proof that using the Precautionary Principle is safe, or even useful, has yet to be established, so you shouldn’t. Which is impressive irony.
It’s broken for every do/don’t decision: wifi, MMR jabs, mobile phone masts. It’s broken for everyday activities like getting out of bed, going to work, sitting in the pub, eating food, or staying in bed. It’s broken for thought experiments like whether there are aliens living in clouds. It’s broken for real but low-probability risks like asteroid strikes. It’s broken for itself.
It only ‘works’ if you apply it to something you already believe is true. And it’s only ‘useful’ when you don’t have any real evidence, or you wouldn’t be able to phrase it that simply.
Risks are part of our everyday lives; we need to understand them, how to mitigate them, how to plan for them. Applying the Precautionary Principle to eliminate them entirely means you have to spend the rest of your life in bed. Or not.