It seems we have lost the ability to learn from our past mistakes when it comes to protecting the environment and health. This is the depressing conclusion from the report Late Lessons from Early Warnings: Precaution, Science, Innovation published by the European Environment Agency (EEA) last year.

Learning from past mistakes in order to avoid repeating them is something that we normally associate with human intelligence. However, when it comes to regulating the environmental and health risks associated with emerging technologies such as nanotechnology, biotechnology and geoengineering, we make the same mistakes again and again.

In 2001, the EEA published the report Late Lessons from Early Warnings: The Precautionary Principle 1896-2000. This examined 14 historical cases where decision-makers did not apply the precautionary principle and ignored early warnings of hazards. Cases included asbestos, PCBs and ozone depletion.

In brief, the precautionary principle obliges decision-makers and regulatory authorities not to use scientific uncertainty as a reason for postponing regulatory and mitigation measures. The conclusion of the EEA report was that decision-makers had ignored not just early warnings but also “serious and late” warnings, and that this failure to act had been very costly financially and had resulted in many unforeseen environmental and health consequences.

The authors summarised their recommendations in 12 ‘Late Lessons’ that future decision-makers should bear in mind:

  1. Respond to uncertainty
  2. Provide long-term monitoring
  3. Address gaps in knowledge
  4. Reduce interdisciplinary obstacles to learning
  5. Ensure that real world conditions are adequately accounted for
  6. Review the alleged benefits/risks critically
  7. Evaluate alternatives
  8. Ensure use of ‘lay’ and local knowledge
  9. Take account of the assumptions and values of different social groups
  10. Maintain regulatory independence
  11. Reduce institutional barriers to learning and action
  12. Avoid “paralysis-by-analysis”

In February 2013, the EEA published the long-awaited follow-up to its first report. As in 2001, the report examines a series of case studies, such as lead in petrol, mercury pollution in Minamata Bay, bisphenol A, floods and climate change, with the sole purpose of learning from them.

The second volume also includes a review of four new potential risk areas, including GMOs and nanotechnology, and there is a treatment of various cross-cutting themes such as a) the economic costs of doing nothing, b) the precautionary principle and over-regulation, c) risk governance, d) progressive business and e) the possibility of compensation for victims and protection of ‘early warning’ scientists.

The Agency’s second Late Lessons report clearly shows that there are few historical cases of overregulation when it comes to the protection of human health and the environment. It identifies the pressing need to rethink risk assessment to better protect health and the environment and for more funding for environment and health research.

The report also notes that, contrary to conventional perception, preventive measures do not strangle innovation but can, to an extent, stimulate innovation by both industry and regulatory agencies. In addition, it is clear from the EEA report that market mechanisms need to factor in the environmental and health costs caused by activities and products. Lastly, it is important to promote cooperation between business, government and citizens in order to protect the environment and health while supporting innovation.

The precautionary principle and the risk of overregulation

Most of the case studies discussed in the EEA reports are cases where the regulatory authorities failed to apply the precautionary principle and ignored early warnings of risks.

In discussions of the precautionary principle, one often hears the argument that public fears are unwarranted and that widespread application of the principle will lead to overregulation of small or non-existent risks. To investigate whether overregulation is something we should be concerned about, we reviewed the scientific and semi-scientific literature for cases where government regulation was implemented with reference to precaution and where that regulation later proved to be unnecessary.

In total, we identified 88 cases that have been cited as examples of overregulation. Further analysis showed that most of these were either real risks, such as climate change, or cases where it is still being debated whether the risk is real. After scrutinising the scientific literature on each of the 88 claimed cases of overregulation, we identified only four in which regulatory action was taken to address a risk that later turned out not to be real.

The cost of overregulation in these four cases appears to have been primarily economic. Our analysis demonstrates that the fear of overregulation is exaggerated and should not be used as a reason not to implement risk-reducing measures. Overregulation does not seem to happen very often, especially when compared with the number and frequency of cases in which we have failed to apply the precautionary principle.

There is a need for new approaches to characterise and prevent complex risks and to move the debate from being problem-oriented to solution-oriented.

The vital role of research in environmental and health protection

There seems little doubt that the needs of academic researchers differ significantly from the needs of regulatory bodies. As Philippe Grandjean and his co-authors point out in their chapter of Late Lessons, a large part of academic research is focused on a small number of well-studied environmental chemicals, such as metals.

Research into potential hazards and emerging risks, on the other hand, appears to be very limited. The choice of research topics should better follow society’s need for knowledge about poorly understood and potentially dangerous risks, and research should complement and expand current knowledge rather than repeat and validate what is already known.

Research is always affected by scientific uncertainty, and these uncertainties can mask a real link between an environmental hazard and its negative effects. The result is an underestimated risk and a failure to implement an appropriate intervention.

The precautionary principle and emerging technologies

We hear every day about new and innovative technologies such as nanotechnology. Many current and future applications of nanotechnology are expected to generate significant social and environmental benefits. But a key question is whether we have learned from past mistakes when it comes to nanotechnology, or whether we are about to repeat them.

The chapter on nanotechnology in the second Late Lessons from Early Warnings discusses the extent to which the 12 ‘Late Lessons’ summarised above have been implemented or properly addressed when it comes to nanotechnology. It turns out that policymakers have not yet addressed many of the shortcomings in the current legislation and risk assessment methodologies, which in turn threatens to undermine society’s ability to ensure the responsible development of nanotechnology.

The economic costs of doing nothing

Mikael Skou Andersen and David Owain Clubb begin their chapter on ‘Understanding and accounting for the costs of inaction’ by noting that, in the current political decision-making process, politicians respond to early warning signals of environmental hazards only after the costs of inaction have been estimated. Through a series of case studies, the two authors show how early warning signals can provide a basis for estimating the costs of inaction, even when the science is not yet consolidated.

For example, in the case of the phase-out of ozone-depleting substances, it turns out that global warming makes the cost of doing nothing significantly higher than originally thought. This is a reminder that figures for the costs of inaction have often been grossly underestimated in the past. Cost estimates should therefore not be left to economists alone, but should rather be seen as a starting point for a broader discussion between people with relevant expertise in health, ecology, demography, modelling and science.

Will we learn the lessons?

Although Kundzewicz discusses the problem of floods in his chapter in the second Late Lessons, his account of seemingly illogical cycles of repeated human error is generally applicable. Typically, a destructive event such as a major flood generates widespread enthusiasm for strengthening emergency response systems, initiating research and implementing long-term monitoring.

For example, after a flood, the relevant authorities often prepare ambitious plans. After some time without problems, however, the willingness to pursue mitigation research and long-term monitoring fades, and projects are scaled down or suspended. When the next flood occurs, a new cycle begins. This seems to be a general tendency in many of the cases that the European Environment Agency’s authors write about.

With the release of the second Late Lessons report, one might hope that we will begin to learn from our past mistakes and that we can now combine the precautionary principle with our knowledge of complex environmental and health risks, so that there will never be a need for a third volume in the series Late Lessons from Early Warnings.

Steffen Foss Hansen (sfh@env.dtu.dk) is Associate Professor, Department of Environmental Engineering, Technical University of Denmark and one of the authors of Late Lessons from Early Warnings: Precaution, Science, Innovation.