20th World Congress of Philosophy Logo

Bioethics and Medical Ethics

Genetic Engineering and the Risk of Harm

Matti Häyry & Tuija Lehto


ABSTRACT: There are many risks involved in genetic engineering. The release of genetically altered organisms into the environment can increase human suffering, decrease animal welfare, and lead to ecological disasters. The containment of biotechnological material in laboratories and industrial plants contributes to the risk of accidental release, especially if the handling and storage are inadequate. The purely political dangers include intensified economic inequality, the possibility of large-scale eugenic programs, and totalitarian control over human lives. How should the acceptability of these risks be determined? We argue that the assessment should be left to those who can be harmed by the decisions in question. Economic risks are acceptable if they are condoned by the corporations and governments who take them. The risks imposed on laboratory personnel by the containment of dangerous materials ought to be evaluated by the laboratory personnel themselves. All other risks are more or less universal, and should therefore be assessed as democratically as possible. If risk-taking is based on the choices of those who can be harmed by the consequences, then, even if the undesired outcome is realized, the risk is acceptable, because it is embedded in their own system of ethical and epistemic values.


The concept of risk is one of the most important elements in consequentialist analyses of genetic engineering and biotechnology. The term, or its linguistic equivalents, can be found in teleological and deontological arguments as well, but the role of the concrete risk of harm is less central within these models. (1)

The paragon of teleological risk-taking is Pascal's famous wager argument regarding our belief in the existence of God. (2) If God exists, Pascal argued, and if we fail to believe in Him, we stand to lose everything, whereas by believing in His existence we stand to gain an afterlife of eternal happiness. If, on the other hand, there is no God, we can only lose a few earthly pleasures by acting as if there were. Since the happiness and misery we encounter if God exists are infinite, it is always, no matter how small the probability, in our own best interest to place our trust in His beneficent existence rather than in the meagre pleasures of godless hedonism.
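On a standard decision-theoretic reading, the wager can be set out as an expected-utility comparison. The following schematic rendering is ours, not Pascal's own notation:

```latex
% p > 0 : probability that God exists (however small)
% c     : finite cost of belief (forgone earthly pleasures)
% g     : finite gain of disbelief, should God not exist
\begin{align*}
EU(\text{believe})    &= p \cdot (+\infty) + (1-p)\cdot(-c) = +\infty \\
EU(\text{disbelieve}) &= p \cdot (-\infty) + (1-p)\cdot g   = -\infty
\end{align*}
% For any p > 0, belief strictly dominates: the infinite stakes
% swamp every finite difference in earthly pay-offs.
```

The comparison makes explicit why, on Pascal's reasoning, the smallness of the probability is irrelevant to the choice.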

The ethos of Pascal's wager can be easily applied to genetic engineering. We can either believe or disbelieve that all living beings have an essence, or nature, which must not, for fear of an unnamed but absolute horror, be tampered with. As our belief in this essence would only cost us a few technological advances we can live without, we should not risk drawing upon ourselves the ultimate punishment by 'playing God' or by otherwise acting 'unnaturally'. (3)

Deontological critics of biotechnology typically argue that the new gene-splicing techniques can involve acts which should never be performed, whatever the consequences. For a proponent of this view, the dangerous element in genetic engineering is not the probability of concrete physical or psychological harm which can ensue from its use, but the likelihood that it can lead people into performing acts which are categorically forbidden. These acts are in some theories linked with the essence of humanity, and in others with the concept of absolute rights or exceptionless duties. (4) The underlying idea, however, is similar in all deontological and teleological views, namely, that the acceptability or unacceptability of taking certain risks depends on the intrinsic qualities of the agent's actions rather than on their actual or expected consequences in terms of human well-being, animal welfare, or harm. (5) It seems to follow from this that it is always the safest policy to prohibit the implementation of new inventions if there are any doubts concerning their moral rightness.

The four main elements of consequentialist decision-making

When genetic engineering is analysed and assessed in a consequentialist framework, four main elements should be taken into account. These elements can be labelled as the benefits, the dampening factors, the costs and the risks.

The benefits — that is, the good things that will probably flow from the development and use of biotechnology — include all the desirable contributions that genetic engineers can be expected to make to medicine, pharmacy, agriculture, the food industry, and the preservation of our natural environment. (6)

By the dampening factors we mean those prevailing rules, practices and arrangements which tend to counteract the benefits of biotechnology either by lowering their quantity or quality, or by promoting their unequal distribution. The attitudes and economic aims of industrialists can prevent corporations from creating products which would maximally promote the well-being of humankind, and useful innovations can benefit only or mainly affluent individuals and nations, leaving the lot of the less well-off unaltered. (7)

The financial costs of genetic engineering are huge, and unless the benefits are greater still, it can be argued that the money should be spent on more worthy programmes.

The main risks of biotechnology are connected with the containment of genetically altered organisms, with their release to the natural environment, and with the untoward social, economic and political consequences of the use of biotechnology.

The first three elements — the benefits, the dampening factors and the costs — can be assessed without employing the concept of 'risk', except in the economic sense. The probability of the good outcomes is not a risk, because, by definition, risk has to do with undesired results. The factors which decrease the probability of the good outcomes are, of course, unfortunate, but as long as they are not produced by biotechnology they cannot be counted among the risks of genetic engineering itself. And costs are not a danger or a risk; they are simply the price we have to pay for any attempt to improve the human condition.

Accordingly, if the benefits outweigh the costs, even in an analysis which diligently takes the dampening factors into account, then genetic engineering is, in the light of these considerations, a praiseworthy enterprise. But what, then, about the dangers, the risks?

The risks involved in biotechnology

Risk can be defined as the possibility or probability of harm — that is, of a loss, an injury, an unwanted outcome or an undesired result. The main risks involved in genetic engineering are the following.

The release of genetically altered organisms into the environment can increase human suffering (where medical measures are concerned), decrease animal welfare (in experiments or through the use of recombinant DNA techniques in breeding), and lead to ecological disasters. The containment of biotechnological material in laboratories and industrial plants involves two layers of risk. The first is the possibility of an accidental release in and by itself. Whether or not this will cause any further damage, the escape of an altered organism into the environment is normally seen as an undesired event. The second layer of risk becomes visible in the case of accidental release, and it is the increased probability with which this can produce harm. These are matters which have traditionally been dealt with by systematic risk assessment.

A risk that lies between the 'scientifically controllable' dangers of release and containment, and the more indirect political hazards of biotechnology, is the probability of the inadequate handling and irresponsible use of genetically altered material, prompted by the economic self-interest of research groups and industrial corporations. The difference between this type of risk and the more calculated hazard is the following. In the case of balanced decision-making we can reasonably suspect only the intellectual capacities of those who assess the possible outcomes. But in the cases of inadequate handling and irresponsible use we can also rationally fear that other types of human weakness and immorality are involved.

The purely social and political dangers of genetic engineering include the possibility of increased economic inequality accompanied by an increase in human suffering, and the possibility of large-scale eugenic programmes and totalitarian control over human lives. The risk in these cases is clearly moral rather than technical. If multinational corporations choose to replace the national products of Third World countries with biotechnological substances of their own, millions of workers will in a few years' time be unemployed. And if governments decide to develop racial programmes and surveillance systems based upon the achievements of genetic engineering, the undesired outcome is certain, not possible or probable. The danger is that the decision-makers act immorally, not that they have miscalculated the consequences of their actions.

In debates concerning the risks of biotechnology the social and political dangers are not discussed as often as the hazards of responsible and irresponsible containment and releases. A partial reason for this can be that economic inequality and totalitarian measures are not seen by all as unwanted, undesired, or evil. Another partial explanation could be that the probability of these outcomes is small, especially in the assessment of particular biotechnological innovations or products. It is difficult to see a connection between, say, a technological process designed to produce inexpensive pharmaceuticals on the one hand and the emergence of an unjust, totalitarian political order on the other.

Yet another explanation for the limited scope of the discussion is that many people tend to confuse the genuine political risks of biotechnology with the dampening factors which reduce its beneficial effects. It can be argued that the harm and injustice which may follow the introduction of genetic engineering in a given environment are always caused by social or psychological factors which have no intrinsic connection with the new techniques. If this were the case, then it would indeed be futile to debate the political dangers of biotechnology. But although there are, no doubt, attitudes and structures which can alone bring to the fore the evil aspects of scientific innovations, genetic engineering can also create new types of injustice, and strongly contribute to already existent misery. When this occurs, the possibility of undesired outcomes should be counted among the risks of biotechnology in the proper sense, and discussed as such.

The morality of risk-taking

If risk can be defined as the probability of harm, then how should we define the concept of 'acceptable risk', on which analyses of the morality of risk-taking often centre? Is a risk acceptable if the probability of harm is at a reasonable level, or should we require that the expected harm also be tolerable? The quick answer to this question is that the acceptability of a risk is the product of the acceptability of the expected harm and the acceptability of its probability. But acceptability to whom, and when, and on what criteria?

Industrial corporations have a tendency to treat risks as probable costs. This is not always commendable, because some of the harms inflicted by the production and marketing of goods cannot easily be compensated for afterwards. When, for instance, the directors and engineers of an American automobile company noticed that they had produced a car which exploded in a rear crash if the speed was right and the left rear blinker was on, they went on to market the model on the grounds that the overall economic loss incurred by the expected lawsuits would be lower than the price of repairing the cars. This decision cost many people their lives and caused others inordinate suffering, and although the statistics were correct, the company's policy was clearly immoral. At the very least, the buyers should have been given the chance to decide for themselves whether or not they wanted to take the risk, perhaps by purchasing the car at a lower price. Death and suffering caused by attempts to make an economic profit are not commensurable with the work and capital invested in the enterprise.

The attitudes of individuals towards the acceptability of risk vary, of course, considerably. But from the conceptual point of view, it is important to notice that the decision to take a risk does not turn better or worse because of the events that follow the decision. This claim can be clarified by two examples.

In the first example, Smith takes a foolish risk by playing Russian roulette with the gun pointed at the head of his sleeping friend, Jones. When he pulls the trigger, the firing pin hits an empty chamber, no bullet is fired, and Jones remains unharmed. She does not even wake up. But although no tangible harm was, in the end, inflicted on Jones, Smith's decision was, nonetheless, foolish, because he imposed an unacceptable risk of death on her. (8)

In the second example, Jones knows that with a probability of one in ten billion she will blow up Smith's apartment by turning on her computer (there is something wrong with the wiring). Since we all take much higher risks every day, let us assume that Jones's decision to turn on her computer today is rational and morally acceptable. Let us further assume that today, when Jones turns on the computer, Smith's apartment is blown up. Now, did Jones take a foolish risk today? Probably not. The decision to take the risk was, and still is, in retrospect, rationally and morally legitimate, despite the unfortunate fact that the improbable, unwanted outcome materialized. (9)
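In expected-value terms, Jones's decision can be sketched as follows. This arithmetic illustration is ours, not part of Thomson's original example:

```latex
% p : probability of the explosion, here 10^{-10}
% H : the harm done if the apartment is blown up
E[\text{harm}] = p \cdot H = 10^{-10} \cdot H
% Everyday activities such as driving carry expected harms that are
% orders of magnitude larger, so the decision was rational ex ante;
% the later explosion does not alter this calculation.
```

The point of the formalization is that the expected value is fixed at the moment of choice, which is why hindsight cannot turn a reasonable risk into a foolish one.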

How, then, should the acceptability of the risks of genetic engineering be defined? Our suggestion is that the assessment should in each case be left to those who can be harmed by the decision in question. Economic risks are acceptable if they are condoned by the biotechnological corporations and governments who take them. The risks imposed on laboratory personnel by the containment of dangerous materials ought to be evaluated by the laboratory personnel themselves. All other risks involved in genetic engineering are more or less universal, and should therefore be assessed — and eventually accepted or rejected — as democratically as possible.

The examples featuring Smith and Jones also defuse an objection that scientists, industrialists and autocratic political decision-makers can level against democratic risk assessment. The representatives of these groups may assert that their expertise enables them to predict with greater accuracy the consequences of policies and actions. If the choices are left to democratic processes, the objection continues, many good outcomes which would have been perfectly safe fail to come into existence, while many undesired results are brought about by the prevailing lack of knowledge.

What this objection overlooks is that the acceptability of a risk for a given group is not determined exclusively by the facts of the matter, but also by the way the members of the group perceive the facts, and by the way they evaluate them. People cannot fully commit themselves to decisions which are based on epistemic and moral values that they do not share. Thus if anything goes wrong with the predictions of the experts, people feel entitled and are entitled to resent the consequences of the authoritarian choices. The risks taken by experts on behalf of others are therefore unacceptable. But if risk-taking is based upon the considered choices of those who themselves can be harmed by the consequences, the situation is different. Even if the undesired outcome is realized, the risk is acceptable, because it is embedded in their own system of ethical and epistemic values.


Notes

(1) A short description of the meaning of the terms 'consequentialist', 'teleological' and 'deontological' is included in M. Häyry and H. Häyry, 'Genetic engineering', in: R. Chadwick (ed.), Encyclopedia of Applied Ethics (San Diego, California: Academic Press, 1998).

(2) B. Pascal, Pensées, § 233 (Paris: Le livre de poche, 1972, pp. 111 ff.).

(3) See, e.g., R. Chadwick, 'Playing God', Bioethics News 9 (1990), Nr 2: 10-15; M. Häyry, 'Categorical objections to genetic engineering', in: A. Dyson and J. Harris (eds), Ethics and Biotechnology (London and New York: Routledge, 1994).

(4) R. Chadwick, 'A Kantian approach to biotechnology', in: R. Chadwick, M. Levitt, H. Häyry, M. Häyry and M. Whitelegg (eds), Cultural and Social Objections to Biotechnology: Analysis of the Arguments, with Special Reference to the Views of Young People (Preston: Centre for Professional Ethics, 1996).

(5) On such views, see J. Bennett, 'Whatever the consequences', in: James Rachels (ed.), Moral Problems: A Collection of Philosophical Essays (New York: Harper & Row, 1971).

(6) H. Häyry, 'How to assess the consequences of genetic engineering?', in: A. Dyson and J. Harris (eds), Ethics and Biotechnology (London and New York: Routledge, 1994), pp. 144-146.

(7) H. Häyry 1994, pp. 146-148.

(8) J. Thomson, 'Imposing risks', in her: Rights, Restitution, and Risk, ed. by W. Parent (Cambridge, Massachusetts, and London, England: Harvard University Press, 1986), p. 181.

(9) Thomson 1986, pp. 177 ff.
