A rough classification
A distinction may already be apparent between science and technology on the one hand and other scholarly activities on the other: the study of meaningful (and human-produced) material, such as texts, communication and social institutions, and the study of normative contexts, i.e. contexts where the goal may be validity rather than truth, such as ethics within philosophy, law and some areas of theology. As regards mankind itself, some aspects of our existence fall under scientific research (evolution, biology, physiology etc.), while other aspects form part of humanistic and social science research (our actions, texts, artefacts, social organisation etc.).
Research ethics is normative and relates to protecting or strengthening key values that are relevant to research. It may seem natural to conclude from this that scientific research ethics primarily relates to protecting the requirement for truth (or pretensions of truth, truth as a goal etc.). However, this view immediately raises a number of strong objections.
Truth is a problematic value
First, the philosophy of science has long argued that truth is a problematic dimension in a research context. Scientific knowledge is subject to constant and rapid change, and recent insights and theories are often in direct contrast to earlier scientific beliefs, which are thereby refuted or considered to be untrue. With this so-called ‘negative induction’, we infer that all scientific beliefs are ultimately untrue, including, in the end, our current beliefs. Science is essentially fallible. Some philosophers have therefore preferred to work with concepts such as verisimilitude (e.g. Popper 1959) or epistemic probability (e.g. Suppes 1984).
Second, there is the realisation that by no means all ‘truths’ are of interest to scientific research; what is actually meant is truths that are relevant to providing us with a good understanding of the context of natural processes. However, the term ‘relevant truth’ is a relational expression. Relevant to whom and what? This entails a focus on human perspectives which, at least indirectly, introduces aspects of opinion and values in scientific research.
Third, it is also not unproblematic to assume that technological research is just an application of scientific understanding for human purposes. Both the dichotomy between basic and applied research and the linearity from scientific research to technological application have been criticised in recent times (ref.). Without going into too much detail, it is indeed correct to conclude that these correlations are much more complex in practice, particularly in our time, than these assumptions imply.
Thus we can conclude that scientific and technological research ethics does not relate solely to the protection of researchers’ pretensions of truth, at least not in any intuitively unproblematic sense.
A positive characteristic
So far, this is only a negative delineation, which is not particularly helpful. We would like to know what research ethics means for science and technology research in general, and have a positive stipulation of what this entails.
There is actually a very general and overarching answer to this question. Here we will draw comparisons to a stipulation of ethics stemming from ancient philosophers (such as Aristotle). These philosophers believed that ethics was about visions of ‘the good life’. They thought holistically and envisioned a life in which individuals could develop their skills and aptitudes in harmony with others in society who could do the same. Thus, ethics has always been aligned with the social aspect of life.
When using this as a starting point, it is tempting to put forward the following hypothesis on research ethics (as part of general ethics) for science and technology (but probably also for other research):
Research ethics relates to visions of ‘good knowledge’ and how this can be achieved.
What we mean by ‘good knowledge’ obviously requires explanation. It has something to do with quality, but this is also something that needs to be explained further. As a sub-hypothesis to what we have said, we would like to introduce two aspects of scientific and technological research ethics that are best dealt with separately, although these too will ultimately be shown to be linked. Scientific and technological research ethics covers two important aspects:
On the one hand, different aspects of good research practice, i.e. the process that leads to scientific understanding;
On the other hand, different aspects of research’s (shared) responsibility for the use of scientific insights and technological applications in society (social responsibility).
Let us examine these aspects in turn.
Good research practice: scientific ethos and FFP
Research practice is by and large based on what is referred to as a scientific ethos, which is a set of norms and values that govern and regulate research. Despite the objections relating to truth we presented above, the scientific ethos is arranged such that science eliminates as many demonstrably untrue notions as possible and only accepts knowledge claims if the potential truth can be substantiated with solid arguments (methodology, data etc.). It could be said that:
Science is a systematic truth-seeking and socially organised activity that produces knowledge.
The renowned sociologist Robert K. Merton was the first to formulate a scientific ethos (Merton 1942), consisting of the norms ‘communism’ (no ownership of knowledge), universalism (science takes no account of who has produced the knowledge or where it was produced), disinterestedness (researchers endeavour to be impartial in their assessment of knowledge claims), and ‘organised scepticism’ (all assertions of knowledge are subjected to peer review and critical appraisal by colleagues). In a publication from 2003 (Kaiser et al. 2003), the Research Ethics Committees argued that transparency, quality and accountability form a fundamental normative basis for scientific work (see also: research values).
The science community follows a set of internal rules (norms) that take into account that scientific knowledge comes about in a dynamic network of researchers, and that the individual contributes their insights, built on the earlier insights of others, and imparts them to the community for critical review and validation. Respect for the contribution of others and affording adequate credit are therefore important in the science community. This is expressed, for example, in the rules for crediting the work of others, normally through references, and in rules for authorship and publishing. Plagiarism of others' work undermines the trust on which the science community is based. The worst form of aberration is cheating (research fraud), such as fabricated experiments or data. Research fraud is therefore often referred to as FFP (fabrication, falsification, plagiarism). Following the implementation of the Act on Ethics and Integrity in Research, allegations of such misconduct are investigated by a special committee. Less dramatic breaches of internal research norms are sometimes referred to as QRP (questionable research practices).
Good research practice: objectivity
All research requires fundamental concepts or dimensions that are adequately defined. In biology, a good taxonomy is essential, and likewise in, for example, geology. Physics theories have reduced the fundamental concepts that physicists operate with to a few forces and some elementary particles. Many of the most exciting scientific discussions revolve around which fundamental concepts we must assume and how they can best be defined. Such discussions can be influenced by what different researchers consider important to incorporate into their theory, a question that reflects an implied ethical value. Accounts of the evolutionary relationships between different species, for instance, have long been influenced by a view of man as the cream of the biological crop.
Much of the research in science and technology is quantitative, i.e. based on measurements of phenomena. Measurement can raise a number of epistemological but also ethical problems, for example, that of finding an adequate unit of measure for the characteristic you wish to quantify. These problems are well known in social science, such as the problem of finding a suitable way of measuring the welfare of society. However, natural science is also faced with these types of problems, for example, how to measure sustainability. The same applies to indicators, where underlying implied ethical values also influence research. One example is the discussion on whether to base environmental protection on the protection of isolated species or the protection of whole ecosystems.
A great deal of science and technology research is based on experiments, i.e. tests that try to control for all parameters except those being measured or tested. An experiment is generally a simulation or idealisation of natural conditions, without actually being naturally occurring. Problems arise when returning from the idealisations of experiment to reality and naturally occurring phenomena. Idealisation is based on preconceived notions of relevant characteristics of reality, which can easily turn out to be wrong.
Other research is sometimes based on observations, either quantitative (e.g. counting the occurrence of an animal species in an area) or qualitative (e.g. observing the animals’ behaviour). Qualitative observations in particular can create a number of research ethics problems. Feminist epistemology, for instance, has revealed how male-oriented descriptions have partly dominated descriptions of primates’ behaviour (Bleier 1984). Once again, it is shown that implied values can also manifest themselves in natural science, despite it being perceived as particularly ‘objective’.
Good research practice: humans, animals and the environment
In science and technology research practice, humans are seldom used as research subjects (unlike in medicine, for example), and it is therefore rare for humans to be exposed to specific hazards or risks associated with the actual research process. There are, however, some exceptions: in petroleum exploration and technology, for example, divers are used. Issues relating to informed consent can therefore also arise here. Laboratory research can also sometimes entail certain elements of risk, both for humans and the environment, for example in relation to biological organisms. Standards for laboratory safety are therefore mandatory.
Other research may use laboratory animals, and such research is often associated with discomfort, pain or death for the animals. Animal ethics and animal welfare are therefore relevant considerations in such research, and guidelines and regulations have been devised for the use of animals in research. The principles behind such research are based on the classic three Rs: reduce, refine and replace (Russell & Burch 1959).
The same applies to environmental hazards. Some research may pose unintended risks to the environment, such as research involving the genetic modification of organisms. Norwegian and international legislation (such as the Norwegian Gene Technology Act) therefore regulates this type of research in order to minimise such risks.
Good research practice: conclusion
If we take research practice as a whole, understood as a set of internal research norms and critical research ethics practices, there are various norms or sets of norms to bear in mind. An earlier publication by the Research Ethics Committees (Elgesem, Jåsund, Kaiser 1997) on research fraud outlined such internal research considerations in a table (see Table 1).
Internal research ethics

Violation of the duty to be truthful:
- Selective use of data or methodology to verify the hypothesis
- Deliberate misrepresentation of someone else’s results
- Deliberately misleading information in project applications

Authorship and giving adequate credit:
- Plagiarising text, data or ideas
- Failure to credit co-authors
- Failure to cite all sources

Violation of verifiability norms:
- Deliberate destruction of data in order to prevent others verifying results
- Inadequate storage or distribution of data
- Publishing in the media without documented results

Violation of general ethical norms:
- Non-compliance with licensing rules, failure to obtain permission
- Exposing animals or humans to risks
- Failure to report fraud when discovered
- Failure to recall publications that are based on dishonesty or serious errors
Table 1: Overview of various ethical norms that are relevant to research practice (Elgesem, Jåsund, Kaiser 1997).
Shared responsibility of research for the use of knowledge: introduction
What happens after new scientific insight is revealed or the way is paved for a technological innovation? Does the researcher’s ethical responsibility end once their work has been quality assured by the science and technology research community? There are undoubtedly differing ways of looking at this and it would be wrong to make categorical assertions. Most researchers agree that research ethics has a large bearing on research practice, as we have outlined above, but many would argue that the scope of research ethics ends there.
The basis for such a view is that the researcher no longer has control over what happens to research results once the findings are published. Others can choose to use the results or to refrain from using them, which also means that they have an ethical responsibility of their own. A company that focuses on a specific innovation will, for example, consider its actions according to corporate social responsibility. In some cases, the research results will be used in contexts that the researchers did not intend, and some researchers may occasionally believe that their research results are being misused. This can lead to heated debates in the public arena.
Given that the individual researcher in practice has few opportunities to prevent such misuse or to control how the results are used, it is reasonable to conclude that the researcher cannot be ethically responsible for such use. The scope of research ethics is thus limited to the purely scientific research process and quality assurance.
Others disagree with this, believing that research ethics includes at least a degree of shared responsibility for the intended use of research results. It is worth noting that NENT has held this view from the very beginning, i.e. it has advocated the view that science and technology research ethics entails a social responsibility. We will now take a brief look at the reasoning behind such a view.
Shared responsibility of research for the use of knowledge: complexity
One important reason relates to the complexity of modern research. We have already observed how the simple linearity model from basic research to applied research to innovation presents a variety of problems. In modern science, it is highly unlikely that a single piece of insight will ever form a basis for application and technological innovation. Major investment is often needed and different insights and research fields must be woven together to make the research specific enough for potential applications to be identified. It has also been shown that research is organised in large units or networks in order to arrive at the intended results. One recent example is the Human Genome Project.
Such organisation and use of resources creates its own internal dynamic. If major investments have been made in order to produce specific results, it is very unlikely that the results will not be used as originally intended. In other words, where extensive personnel and financial resources have been ploughed into a project, such as ocean fertilisation, and the findings of the project are as expected, it is highly unlikely that the technology will not be used. It is therefore unrealistic to think that other parts of society, be it industry or the public sector, are in a position to freely consider the use or non-use of scientific insights where these results have been arrived at through research. Principled pro-use decisions are already dominant in strategic choices regarding research policy and in the selection of individual research projects.
None of these decisions rest on the shoulders of one solitary person; there are always more people involved. However, with his/her special knowledge and insight in the field, the researcher has an important say in this process. No one is better placed than the researcher to assess the potential consequences of the research. It would therefore be reasonable to assign the researcher at least a share of the responsibility for the use of knowledge.
In modern research it has become commonplace to talk about targeted basic research. In other words, the degree of freedom that was once normally associated with ‘pure’ basic research has been reduced considerably, with researchers considering the potential utility value early in the research process. If we look at the criteria used to assess applications for research funding (and to a certain extent publications as well), it becomes clear that the criteria relating to relevance play a very central role. Research that cannot demonstrate potential benefit, despite being methodologically sound, does not receive support. Many researchers are critical of such a research policy and believe it has gone too far in the direction of pure pragmatism. Criticism may well be justified, but it does not change the fact that many of today’s technological advances are the result of systematic and targeted research with multiple contributors.
It is also a common phenomenon nowadays for specific new insights in science and technology to be more or less routinely assessed in relation to their potential for patenting. This has evolved to such an extent that the difference between discovery and invention (which is fundamental for patenting) is sometimes difficult to spot, particularly in relation to genetics and gene technology. Patenting practice has also changed in that respect over the years.
Such conditions lead us to think that even at the very start of the specific research process, or at least at a very early stage, we are influenced by feedback on notions of future use and application. Thus, the researcher is not only looking back at what we already know, but is also looking ahead to what we want to know in order to achieve certain goals. Consequently, specific research activity is often accompanied by negotiations where both perspectives, backwards and forwards, form part of the premises or arguments. The researcher is an active participant in these ‘negotiations’, and in fact ‘qua researcher’, i.e. in their research role. Thus it is natural to conclude that the intended use of scientific knowledge must also be covered by the researcher's ethics and entail a shared responsibility.
Shared responsibility of research for the use of knowledge: the role of science
However, there are even more conditions that indicate a need for social responsibility in research ethics. One important reason is that the relationship between science and society has undergone important changes. Never before have science and technology meant so much to our lives, for society, business and the economy, as they do today. If we take the example of Norway’s largest export revenues, from oil, fisheries, aquaculture and gas, the importance of the role of science and technology research here is immediately apparent. However, research also plays a major role in most people’s daily lives, in areas such as information and communications technology, medical technology and industrialised food production. Research puts a definite stamp on our lives. The need for a knowledge society is therefore a popular topic of political discussion. This also means that research and technology must be part of a democratic debate about how society should evolve. This is a debate about values, where the research community is a contributor and the research itself is the subject of debate.
A good example of this is the debate on gene technology, in particular genetically modified foods. In the public sphere – in Norway as in other European countries – there has been lively debate about the potential, risks and limitations of this technology. Some researchers have been proponents of an almost unlimited optimism with regard to the technology, while other researchers have pointed out the risks and the major scientific uncertainties in the field. Sections of the public have argued that widespread use of the precautionary principle is the ethically correct approach (ref. ‘Fast salmon and technoburgers’, Research Ethics Committees 1996). Although such debates seemingly revolve around advanced scientific and technological issues, they are not exactly purely academic, scientific debates. One of the most common mistakes many researchers make in such debates is to attribute the public’s attitude to a lack of knowledge. In other words, if people had more knowledge they would agree with, for example, the technology optimists. This attitude has been described as a ‘deficit view’. Several studies have, however, shown just how flawed such a view is. For example, studies were conducted to measure both knowledge and attitudes about a subject area (Hviid Nielsen, Seippel and Haug 2003). These studies illustrated that increasing knowledge did indeed reduce the proportion of those who had no particular opinion, but also that the proportion of those who were in favour of the technology and the proportion of those who were against both saw a corresponding increase.
There is therefore reason to believe that, in reality, such debates are based on different ethical values in society. In a pluralistic society, it is not surprising that people have different ethical values and that these are brought to bear in debates about science and technology. As long as it was thought that the debate was about knowledge-related factors, a distinction could be maintained between experts and others, while in relation to values, no party has any special preferential right over another or more insight than another. The debate is part of a living democracy. In this debate, researchers often act as interested parties (stakeholders), and are less commonly perceived as independent agents.
It could therefore be argued that many important debates on ethics in science and technology involve an encounter between differing values as asserted by different parties. A distinction could perhaps be made between such ethical debates and research ethics by, for example, characterising these as ethics of science (ethics of science vs research ethics), but in Norway it has become increasingly common to use the term ‘research ethics’ for both.
One consequence of this is the expectation that research ethics must also be pro-active, i.e. hold debates prior to developing scientific technology rather than after the technology has been created. In the literature, this is often described as taking an ethical debate ‘upstream’; a metaphor to express the notion that problems should be dealt with at source as opposed to afterwards, when problems have already arisen. Having a broad societal debate with engaged parties is nowadays considered to be an essential part of good governance for science and technology. Participatory methods to activate different social groups are an important instrument in such efforts. Research ethics shares this task with technology assessment activities.
In light of such considerations, some researchers believe it is time for research to draw up a ‘new contract with society’, i.e. acknowledge a role where researchers actively take part in such debates, exercise social responsibility and reflect universal values in the research process.
Research ethics problem areas: interest, commissioned research and finances
There are large areas of science and technology research that do not normally give rise to any major ethical problems or dilemmas. Research on areas such as geological formations in a region will seldom entail ethical issues. It may of course be that this research has a bearing on conservation orders or development plans for that region, which could lead to potential conflicts that require ethical vigilance. Such vigilance will be particularly important when the researcher has been commissioned by one of the parties involved. Commissioned research is often a challenge in terms of research ethics for all parties concerned, and the researcher is sometimes put under pressure by the commissioning party.
Another possible example is mathematical research. In principle, it is difficult to argue that research in algebraic theory or chaos theory, for example, entails research ethics issues. However, as soon as such research is used as an instrument in a different context, this can change. Mathematical instruments can reflect a precision which, in some contexts, is misleading (Porter 1996), such as has been claimed in the use of statistics. One problematic area relates to the media, who are often looking to quantify a phenomenon. Researchers may be tempted to pass on insights from mathematical models without mentioning the limitations that can accompany such models. Once numbers are thrown into a societal debate, they are very hard to put in their relevant context.
One major research ethics challenge is that much science and technology research is increasingly being influenced by, and is partly dependent on, commercial interests. Research and knowledge have become commodities in a global market, where a competitive edge is a crucial factor. States try to keep up with the global production of knowledge because it has direct economic consequences. Efforts are made to attract the best researchers internationally and home-grown researchers are sent abroad to the most renowned research centres in the hope they will return with knowledge that can place them at the leading edge of research. This commercialisation of research raises a number of ethical issues, not least in relation to the public’s expectations of independent research and research related to ‘softer’ values.
Research ethics problem areas: risk, security, sustainability, manipulation
Some areas of science and technology research directly involve value-related aspects. One example is risk research. The term ‘risk’ may on the one hand be given a simple definition such as:
risk = p(A) × N(A), where p(A) is the probability of an adverse event A and N(A) is the impact (utility/value) of the event
risk = expected loss of utility
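The simple ‘technical’ definition above can be sketched in a few lines of code. This is a minimal illustration of risk as expected loss of utility; the event probabilities and impact values below are purely illustrative assumptions, not figures from any actual risk study.

```python
def expected_loss(events):
    """Expected loss of utility: the sum of p(A) * N(A) over adverse events A."""
    return sum(p * n for p, n in events)

# (probability, loss of utility) pairs -- illustrative values only
events = [
    (0.01, 1000.0),  # rare but severe event
    (0.20, 10.0),    # common but mild event
]

print(expected_loss(events))  # → 12.0
```

Note how the formula treats a rare, severe event and a frequent, mild one as interchangeable as long as the products are equal; this is precisely the aggregation step that, as the text goes on to argue, fails to capture perceived control, voluntariness and other qualitative aspects of risk.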
On the other hand, this ‘technical’ definition does not capture all of the aspects that are often associated with risk. Psychological factors come into play, which means the same probability and the same harm can be perceived differently, for example depending on whether you believe you have control over the risk or whether it is imposed on you by others. There is extensive literature on the concept of risk, and a relatively well-acknowledged conclusion is that the concept of risk is socially constructed (Shrader-Frechette 1991). It can therefore be said that all risk research raises qualitative and value-related issues, thus requiring reflections on research ethics.
A different, but related, example is security and protection of privacy, which is relevant in areas such as computer technology. Here, we are often faced with an ethical dilemma when weighing up the value of information that is provided by someone and stored in a databank versus the right to privacy, which is the right of all citizens in a society governed by law. Here there is a dynamic relationship between legislation on the one hand and technological development on the other.
Another example is research on sustainable development. Although much of the research relates to natural systems, it has gradually been shown that sustainability involves more than this, with social and economic aspects, for example, also being pertinent. As a rule, we are confronted with the interplay between man and nature, and which aspects we take as given and which we treat as variable depends on value-based attitudes. Researchers such as Mary Douglas and others (Schwartz and Thompson 1990) have also observed that the concept of nature is not an objective scientific dimension, relying instead on non-scientific preconceived cultural notions.
We have already discussed how gene technology research is considered in some quarters to be ethically challenging. Although it is difficult to come up with ethically sound arguments that all interference with DNA is unsafe (as some will nevertheless claim), it is a fact that at least some areas of this research raise ethical issues of such complexity that thorough ethical reflection is necessary. One such area is synthetic biology, where new life forms are constructed ‘from scratch’. Gene technology is thus a type of research that, not least in light of society's scepticism towards some applications, requires a high degree of ethical reflection, including by the science and technology researcher.
Reflection on the factors mentioned above has led to an awareness that scientific uncertainty is a much more complex dimension than normally assumed, and that a good grasp of scientific uncertainty is a prerequisite for good research ethics in science and technology. There is reason to claim that scientific uncertainty is not only the result of unfinished or imperfect research, but that it is inherent in all scientific knowledge. More research produces more knowledge, but also more elements of uncertainty. Some of the uncertainty relates to the knowledge itself and how we produce it, but some of the uncertainty can also be due to the reality, for example that we are dealing with stochastic or complex systems (Walker et al. 2003).
Quality and research ethics
We have shed light on various aspects of research ethics in science and technology. To start with, we characterised research ethics tentatively as a vision of ‘good knowledge’. The subsequent examination of different topics has, we believe, demonstrated what qualities can be expected from good knowledge and research. ‘Good knowledge’, the knowledge we like to profess, is thus knowledge that is of high quality in relation to both the actual research process and to social responsibility. ‘Good research’ avoids the pitfalls and breaches that may result from various temptations; it respects society's rules and laws, reflects ethically on its consequences, and is discussed in a democratic public debate where necessary.
Implicitly, we have argued that research ethics is more than just filling out a few forms, such as for an application to the Research Council. Research ethics is not an administrative matter that requires rubber-stamping. A proactive approach must be taken to research ethics in the relevant research, and independent reflection is essential.
In this respect, it is always helpful to have a basis or some guidance in relation to where the focus should be. The Research Ethics Committees, and more specifically for these disciplines NENT, have therefore devised research ethics guidelines. The guidelines are advisory and intended to stimulate the researcher’s own reflection. They also express a certain degree of consensus among peers. Researchers are advised to familiarise themselves with these guidelines and use them when considering the research ethics aspects of their own research.
Bleier, R. 1984, Science and Gender. A Critique of Biology and Its Theories on Women, Pergamon Press: New York, Oxford, Toronto, Sydney, Paris, Frankfurt
Elgesem, D., Jåsund, K. and Kaiser, M. 1997, "Fusk i forskning" [Cheating in research], Writing series no. 8, The Norwegian National Research Ethics Committees
Hviid Nielsen, T., Seippel, Ø. and Haug, T. 2003, Hva mener og vet nordmenn om bioteknologi? [What do Norwegians think and know about biotechnology?], Working note no. 20, Center for Technology, Innovation and Culture, The University of Oslo
Kaiser, M., Rønning, K., Ruyter, K.W., Nagell, H.W. and Grung, M.E. 2003, Oppdragsforskning – åpenhet, kvalitet, etterrettelighet [Commissioned research – openness, quality, accountability], The Norwegian National Research Ethics Committees
Merton, R.K. 1942, "The Normative Structure of Science", in R.K. Merton, The Sociology of Science: Theoretical and Empirical Investigations, The University of Chicago Press, 1973
Porter, T.M. 1996, Trust in Numbers. The Pursuit of Objectivity in Science and Public Life, Princeton University Press: Princeton
Russell, W.M.S. and Burch, R.L. 1959, The Principles of Humane Experimental Technique
Schwartz, M. and Thompson, M. 1990, Divided We Stand: Redefining Politics, Technology, and Social Choice, Harvester Wheatsheaf: New York
Shrader-Frechette, K.S. 1991, Risk and Rationality, University of California Press: Berkeley, Los Angeles, Oxford
Suppes, P. 1984, Probabilistic Metaphysics, Oxford
Walker, W., Harremoës, P., Rotmans, J., van der Sluijs, J., van Asselt, M.B.A., Jansen, P. and Krayer von Krauss, M.P. 2003, "Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support", Journal of Integrated Assessment 4(1): 5–17
Beder, S. 1998, The New Engineer – Management and Professional Responsibility in a Changing World, Macmillan Education Australia: South Yarra
Kaiser, M. 2000, Hva er vitenskap? [What is science?], Universitetsforlaget: Oslo
MacLean, D. (ed.) 1986, Values at Risk, Rowman & Allanheld: Totowa, New Jersey
Popper, K. 1959, Conjectures and Refutations (rev. 4th ed. 1972), London
Resnik, D.B. 1998, The Ethics of Science – An Introduction, Routledge: London, New York
This article has been translated from Norwegian by Carole Hognestad, Akasie språktjenester AS.