The modern relationship between science and society has developed as one characterised by mutual benefit, often described as an unwritten contract. Society has granted science great freedom (as well as financial and institutional support), and in return, science has produced knowledge that has benefited society. (See also Research values and Research and society.) Research-based knowledge has spurred technological development and functioned as a factual basis for political decisions. In both cases, crucially, science has been trusted to light the way to truth and facts. In practice, scientific risk assessments are often used as a basis for decisions. Making a decision almost invariably involves taking a risk, so to legitimise political decisions it has become essential to be able to say that they are based on scientific knowledge. Traditionally it has been deemed relatively unproblematic to maintain a clear division of labour between the risk assessments made by researchers (based on scientifically established facts) and political risk management and decision-making (balancing priorities between different values and interests).
It has become increasingly clear, however, that in many real-life situations, scientific knowledge falls short as a basis for decision-making, and that the distinction between science and politics is often far from obvious. (See also The politics of research ethics.) One reason why this has recently become evident is that humans exert an increasingly profound influence on their surroundings and seek to control increasingly complex systems. For example, new technology has entailed consequences that were previously regarded not only as highly unlikely, but as beyond imagination. Global climate change provides an obvious example. Scientific risk assessments are nevertheless used as a basis for decision-making. In situations characterised by lack of knowledge about probabilities or consequences, this is fraught with problems. These are situations characterised by uncertainty, rather than by calculable risk.
Uncertainty may be a temporary state, pending the development of new scientific knowledge. In other cases, the long-term consequences of human activity interacting with complex natural and social systems remain unpredictable in practice. For this reason, some argue that we need to take into account the permanent uncertainty in which we sometimes find ourselves. In addition, it is characteristic of complex systems (such as ecosystems or society) that several equally credible outcomes or future states may be put forward. Science thus makes choices when explaining reality, and different models, different academic disciplines and different scientists describe reality in different ways. Risk assessments therefore depend on the types of scientific knowledge fed into them. This may have an impact on the authority of science as a basis for decision-making.
How can we make sensible and appropriate decisions in the absence of certain knowledge? In some cases, a decision may be postponed while awaiting more knowledge. However, not all decisions can be postponed (quick action may sometimes be called for, as is the case for a number of environmental problems), and furthermore, as noted above, not all uncertainty is reducible or temporary. In some cases, we may have to learn how to live with scientific uncertainty. One approach to problems characterised by non-reducible uncertainty is to include different types of knowledge (from sources other than science).
Gradually, decision support tools (such as multi-criteria analysis, which is used to structure and analyse specific problems and decision-making processes) have also been developed, as well as principles intended to help in making decisions in the absence of scientific knowledge. One of the best known principles is 'the precautionary principle', which has been incorporated into a number of regulatory documents, and environmental legislation in particular. It is especially applicable in situations in which it is suspected that the use or emission of a certain chemical substance, for example, may cause harm to the environment, even though no scientific documentation of this risk is available. The precautionary principle may be formulated as in the Rio Declaration (a declaration of 27 principles that emerged from the United Nations Conference on Environment and Development in 1992): 'Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation' (UN 1992, Annex 1, Principle 15). This is a so-called weak version of the principle: the negative formulation ('shall not be used as a reason', emphasis added) does not impose specific action, but is restricted to pointing out that lack of scientific knowledge shall not be used as justification for a failure to enact measures. Combined with the requirement that measures be cost-effective, this leaves the principle with very few practical applications. UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) has proposed a stronger formulation: 'When human activities may lead to morally unacceptable harm that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish that harm' (UNESCO-COMEST 2005).
However, management that relies on such general principles, including this more radical version of the precautionary principle, often runs into challenges when the principles are applied in practice. For example, who decides what constitutes morally unacceptable harm, and how?
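The multi-criteria analysis mentioned above as a decision support tool can be illustrated with a minimal sketch. The weighted-sum form shown here is one common variant; the criteria, weights and scores below are purely hypothetical examples, not drawn from any actual assessment.

```python
# Minimal sketch of a weighted-sum multi-criteria analysis (MCA).
# All criteria, weights and option scores are hypothetical
# illustrations, invented for this example only.

def weighted_score(scores, weights):
    """Combine per-criterion scores (scaled 0-1) into one weighted total."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

# Relative importance assigned to each criterion (a value judgement,
# typically made by decision-makers, not by the analysis itself).
weights = {"environmental_risk": 0.5, "cost": 0.3, "public_acceptance": 0.2}

# Two hypothetical policy options, each scored on every criterion
# (higher = better performance on that criterion).
options = {
    "restrict_substance": {
        "environmental_risk": 0.9, "cost": 0.4, "public_acceptance": 0.7,
    },
    "allow_with_monitoring": {
        "environmental_risk": 0.5, "cost": 0.8, "public_acceptance": 0.6,
    },
}

# Rank options from highest to lowest weighted score.
ranked = sorted(options, key=lambda o: weighted_score(options[o], weights),
                reverse=True)
for name in ranked:
    print(name, round(weighted_score(options[name], weights), 2))
```

The point of such a tool is not that it removes uncertainty, but that it makes the value trade-offs explicit: changing the weights (a political choice) can change the ranking even when the scores (the factual input) stay the same.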
This article has been translated from Norwegian by Erik Hansen, Akasie språktjenester AS.