1 Introduction

After years in the realm of science fiction, the main manufacturers of self-driving cars agree that, by 2020 at the latest, it will be possible to purchase fully autonomous or self-driving cars. These are the end product of a long and still unfinished process of gradual vehicle automation, from the completely manual car to a driverless one able to drive safely in any imaginable situation.Footnote 1 Everything indicates that mass sales in the near future will mark the start of a new era in road traffic. As humans are replaced by reliable software that neither drinks alcohol, suffers from stress, nor ignores traffic regulations, accidents will be reduced by 90%,Footnote 2 while people who have so far been excluded from driving independently (e.g., children, the elderly, and the disabled) will be able to take to the road under equal conditions.Footnote 3 The greater safety and reliability of interconnected autonomous cars that can share important information immediately will allow shorter safety distances between cars.Footnote 4 Self-driving cars can be distributed much more efficiently on the same road network, and a significantly higher number of vehicles will be able to move faster and more safely.Footnote 5 Added to this is the promise of vastly lower rates of global pollution.Footnote 6

However, despite the drastic decrease in accidents that will come with replacing a human driver with a computer, experts in the field unanimously agree that even self-driving cars will not be completely free from the risk of being involved in traffic accidents.Footnote 7 Not only will the cars have to contend with unexpected behaviour from human drivers during the long transition period from manual and semi-autonomous to self-driving carsFootnote 8 but, even in a world where all cars are driverless, the risk of accidents will not be zero. The software may make errors or be hacked, and a self-driving car may be wrongly programmed or suffer a mechanical fault. Nor can unforeseen outside circumstances (careless pedestrians, bad weather, wild animals, etc.) be discounted as crash risks for self-driving cars. For strictly physical reasons, no self-driving car, however sophisticated its sensors and control software, will be completely free from the risk of causing or being involved in an accident.Footnote 9

The very idea of a self-driving car causing damage poses important questions for civil and criminal law that have yet to be answered.Footnote 10 However, going beyond the question of who is legally responsible for damage caused by an autonomous car (the manufacturer, the programmer, the user, etc.), the fact that self-driving cars will be involved in emergency situations gives rise to a significant ethical and legal challenge: self-driving cars must be programmed to respond to situations of necessity in which breaking the regulations, causing damage, or failing to avoid it is inevitable.Footnote 11 That is to say, self-driving cars must have a “moral exception” capability: pre-set algorithms regulating their modus operandi in emergency situations.Footnote 12 Note that this is not simply a matter of pre-setting the conditions under which a car may, for example, exceed the speed limit when transporting its occupant to hospital; these vehicles must have patterns of behaviour for situations of necessity in which safeguarding one interest unavoidably demands that another be harmed.

In fact, evaluating types of behaviour harmful to others in emergency situations is a classic topos in philosophical and legal discussions, especially in criminal law, where the theory of justification (self-defence, necessity, etc.) has been grappling with these problems since time immemorial. However, on one hand, the development of self-driving cars makes the ethical problem more pressing, particularly in criminal law, insofar as the need to set up general behaviour patterns in advance for exceptional cases closes the door to evasive solutions that are both common and apparently practical. Assuming that the person responsible for programming the “moral exception” for the car is usually the designer,Footnote 13 or the user, if able to act on the programming in some way,Footnote 14 the question of how to assess, under criminal law, damage caused by the car cannot be settled by stating that it was a purely instinctive and uncontrollable reaction, or that, despite being an unlawful action, the harm cannot be attributed subjectively for lack of blame. This means that it will no longer be possible to avoid providing solutions to such dilemmas in the hope that they will be resolved in specific cases outside the rules or institutionalised patterns, in an attempt to prevent the legal system from “dirtying its hands” by accepting the legitimacy of fatal outcomes. It is therefore not enough to find a plausible way of grounding the impunity of the person acting in a tragic incident; a clearly defined rule must be established that aims to be generally valid for the dilemmas that may arise. On the other hand, unlike classic situations of necessity, the impossibility of knowing the identity of the specific future victims of a programming decision makes it easier to legitimise general solutions to situations of necessity. Unlike Carneades’ shipwrecked sailor, who saved his own life by killing his companion on the plank, the programmer sets a generic pattern of conduct for an uncertain future conflict, without knowing at the time the specific identity of the potential victim of the decision taken when programming the car. Moreover, it is not even possible to know whether the car will ever encounter the problematic situation for which the solution is provided.

However, this paper assumes that the problem described cannot be pushed into a law-free space, nor should it be left entirely in the hands of each automobile company, or of individuals when they set up the algorithm. Complete prohibition of autonomous cars is also not an acceptable solution.Footnote 15 Thus, however thorny the task, developing self-driving cars demands clear and precise guidelines for objectively solving dilemmas, even when human lives are at stake.Footnote 16 This paper aims to make a contribution, not by asking questions, but by attempting to give specific answers to equally specific problems. To this end, Sect. 2 presents five moral dilemmas that serve to test the various theoretical approaches to the problems described. Following this, the two main premises defended so far in the emerging ethical and legal discussion are presented and analysed. Sect. 3 examines the feasibility of the rationale that advocates giving priority, without exception, to algorithms that protect the occupants, and Sect. 4 critiques the attempt to settle the problem by utilitarian rationale. In Sect. 5, common approaches in the most recent discussion are rejected, and it is proposed that the problem can be tackled with the conceptual apparatus of criminal law on causes of justification. After all, most of the moral dilemmas posed in current ethical and legal discussions fall within criminal law. The idea is to offer solutions to the dilemmas based on rules that can be legitimised, first of all, by keeping the victim of the decision in mind, and by appealing to the principles of autonomy and solidarity as the axioms governing causes of justification in criminal law.

2 Five Dilemmas for Autonomous Cars

This short section contains five dilemmas that serve as focal points for discussing the various solutions outlined in the doctrine and as a touchstone for the theoretical approach explained here.

(1) As the result of a sudden brake failure, a self-driving car can only save the life of its sole occupant (O) by forcing the car onto the pavement where a pedestrian (P) is walking, who will certainly die from the crash.

(2) A self-driving vehicle is driving at the correct speed along an inner-city road when a group of about 10 young adults starts to cross the road recklessly and rapidly. To avoid an accident, the car swerves toward the hard shoulder. However, moments later, at the point of the hard shoulder where the car is predicted to hit the crash barrier, the sensors pick up an elderly cyclist, who will almost certainly be killed. The passenger of the self-driving car will not be significantly injured under any circumstance.

(3) The life of the passenger of a self-driving vehicle can only be saved if the car enters the emergency braking lane. Unfortunately, the software immediately finds that two children are playing in the sand in the lane and will certainly be killed if hit.

(4) A self-driving car finds itself with the following dilemma: either it brakes to avoid running over a careless pedestrian who crosses the road suddenly, but the motorcyclist behind who is following too closely will die in the crash against the rear window; or the car does not brake and runs over the pedestrian but saves the life of the motorcyclist behind.

(5) A self-driving car in the right-hand lane of a motorway sees that the only way to avoid hitting from behind a motorcyclist, who is not wearing a helmet, is to cross into the second lane and hit another motorcyclist who is wearing a helmet and modern protective gear. If the motorcyclist with no helmet is hit, he will die, while, if the second one is hit, he will only suffer mild injuries. The occupant of the car will be unharmed in either case.

3 Selfish Self-driving Cars?

In the academic discussion, still in its infancy, on how to program self-driving cars, some voices endorse the premise of ethical egoism and argue that emergency algorithms should always be set up to promote the interests of the passenger in the driverless car.Footnote 17 Thus, according to Nick Belay, "all self-driving cars must act in the interest of their passengers over anything else. The idea of self-preservation is both ethically neutral and societally accepted".Footnote 18 Therefore, in example (1) above, the self-driving car must be programmed to cross onto the pavement and kill the pedestrian, as this is the only way the passenger will remain unharmed. This solution is also the only one that could get round the "prisoner's dilemma" resulting from utilitarian programming.Footnote 19 Even though the development and sale of self-driving cars will radically reduce the global rate of accidents, knowing that an autonomous car is going to seek maximum social benefit in case of accident, even to the detriment of its passengers, will act as a strong disincentive to purchasing such vehicles.Footnote 20 Who would be willing to pay for a car that would sacrifice his life to save three children? If it turns out that people do not want to buy cars that will not protect them in every eventuality, developing self-driving cars is no longer a viable economic proposition, and any possibility of a huge reduction in those injured and killed on the roads disappears along with it.

Although it is not necessary to rehearse the many arguments that have long and convincingly been put forward against "ethical egoism", this is certainly the place to reject the possibility of elevating this theory to the level of an inspirational model for crash algorithms. Even if it is technically possible for a car owner to program the vehicle's software to his liking to deal with such eventualities,Footnote 21 regulating conflicts involving several people on the single criterion of maximum benefit to each person's individual interests is normatively unacceptable. On the one hand, it would generate huge logical, empirical, and normative contradictions, since it would vainly generalise the same selfish pattern of conduct to everyone involved in the dispute. On the other hand, a liberal legal system based on the principle of personal autonomy and on ideas of rights and responsibilities cannot find in the interest of a single individual a sufficient motive to justify harming the rights of others.Footnote 22 This means that the legal position of the pedestrian calmly walking along the pavement in example (1) cannot be changed simply because a driver wants to avoid being killed at all costs. Or, what amounts to the same thing, the mere wish of the passenger in the car, even if his life is in danger, does not make a homicide lawful. Otherwise, the idea of the law as a mechanism for the rational endorsement of spheres of freedom would be completely undermined.Footnote 23

However, is rejection of the thesis of ethical egoism an insurmountable obstacle to the development of self-driving cars? In my opinion, the dilemma described above can and must be resolved by a legislative decision that governs the design of crash algorithms. As we shall see in the next section, this does not have to be based on a utilitarian proposal, and the terrifying idea of a car programmed to sacrifice its passenger in the name of social utility can therefore disappear from the collective imagination of potential car buyers.

4 Utilitarian Self-driving Cars?

To the question of how crash algorithms should be programmed, several authors have already responded by referring to the principle of minimising damage, or the lesser evil.Footnote 24 In accordance with this approach, in dilemma (2) above, the self-driving car must be programmed to hit the elderly cyclist and avoid running into the group of young people. If all the interests at stake were put on a pair of scales, assessed and weighted, the side holding the lives of 10 young adults would be seen to be more worthy of protection. Although this premise is commonly labelled "utilitarian", not without criticism, the truth is that the theory of the lesser evil is, in fact, a theory that legitimises consequentialist and collectivist modes of conduct. It is consequentialist because the solution to the conflict is decided by the result expected to be obtained, and collectivist in that it treats society as a whole and, more specifically, the people involved in the emergency situation, as if they were one person, by merging their several interests, hitherto particular to each individual, into a single supra-individual body (a holistic entity).

Beyond the classic arguments in defence of utilitarianism, two fundamental arguments have been outlined in favour of this premise for the purposes that matter here. In the first place, from a normative perspective, it is claimed that rulings on legal dilemmas involving a self-driving car must necessarily obey the same principle that governs conflicts between humans.Footnote 25 This principle is generally seen in the lesser evil, which would in addition be the basis for the defence of necessity.Footnote 26 According to this way of understanding necessity, criminal law would consider legitimate the conduct that safeguards a greater interest than the one that is harmed (choice of the lesser evil). The principle of the lesser evil, as a criterion for making decisions in situations of necessity, would not be objectionable as such and would constitute a basic criterion of consistency, putting into practice the minimum component of rationality when acting in conflict situations.Footnote 27 This principle would also have the "charm of simplicity", as well as a high degree of psychological and social acceptance as a criterion for conflict resolution.Footnote 28 Secondly, from a pragmatic point of view, defenders of the utilitarian approach assert that self-driving cars, like any advanced computer, would be able to process the computation of utility much faster and more reliably than human beings. Unlike humans, who do not have the capacity or time to add up and quantify the utilities at stake in emergency situations, self-driving cars can make the calculations and provide specific answers immediately.Footnote 29

Although the classic rationale against utilitarianism as a theory of legal argument need not be repeated here, it can be stated that, in my opinion, it would be unsatisfactory to base the algorithms for self-driving cars on the principle of the lesser evil. First, contrary to what apologists for utilitarian cars claim, the idea of a self-driving car that can make infallible and indisputable decisions by adding and comparing utilities is still far from being a reality.Footnote 30 Beyond the strictly technical problem, the factors to be taken into account in the weighting process are not easy to determine: it is unclear which elements are to be put on each side of the scales, or how they can be translated into a common denominator that allows a truly unbiased comparison. If, on the other hand, as some authors point out,Footnote 31 the theory of weighting interests as a method is only capable of providing indeterminate comparative rules and guidelines, but no specific solutions, the theory will certainly not help in programming software that expects definite answers and has no legal sentiment or pragmatic reasoning with which to solve dilemmas.

Second, in any event, even if self-driving cars could effectively be programmed to make proper weighting assessments such as those just described, the principle of the lesser evil or of the dominant interest would not be a rationale for conflict resolution that is materially acceptable within the framework of liberal criminal law. Rather, the collectivist contention of the lesser evil is empirically incorrect and axiologically indefensible in our legal system.Footnote 32 It is empirically incorrect because it fails to recognise that most of contemporary criminal law is not based on a collectivist principle such as the lesser evil, not even in situations of necessity. This is why the law recognises the right of an owner to destroy his own property, even on a whim, as well as basic institutions of criminal law such as self-defence, or the differences in sentencing between negligent and intentional crimes, none of which can be explained from a purely utilitarian perspective. Further, the concept of the lesser evil, as a model for configuring emergency algorithms for autonomous cars, is unacceptable in law, since it misconceives the ultimate mission of the legal system and the rationale for necessity. Collectivist premises see the people involved in a dispute as mere beneficial owners of legal assets or interests that, strictly speaking, belong to a holistic entity (society) whose interests the legal system aims to protect as far as possible. Who an interest belongs to in concreto is of least importance.Footnote 33 From this perspective, it is true that the principle of the lesser evil is difficult to dispute. However, this rests on a false analogy with the way rational individuals generally order and resolve their own conflicts. The fact that I would generally prefer to sacrifice my car rather than suffer serious physical injury does not explain why the legal system should accept the sacrifice of my car to save the life of a person I do not know. Rather, in the framework of a legal order that recognises its citizens as autonomous individuals with rights and duties, there is no room for holistic entities, even in situations of necessity. Therefore, the mission of criminal law is not to protect and maximise the interests of a non-existent macro-individual, but to ensure a harmonious, fair, and stable separation of the spheres of freedom over which each person exercises sovereignty.Footnote 34

Thus, the above means that breach of a right is not justified simply by the advantages it might bring to a third party or society as a whole, however important they may be. The institution of justifying necessity cannot be based on a negative utilitarian principle, either. In agreement with a large and growing doctrinal sector, it is better to say that an aggressive state of emergency (regulated, for example, in § 34 of the German Criminal Code)Footnote 35 is based on a general principle of solidarity. A general duty of solidarity arises from this principle that, with strict limits and on an exceptional basis, forces individuals occasionally to take on quasi-official roles and tolerate certain minor interferences in their own legal sphere (necessity), or aid their fellow citizens in need (general duty to assist) when this protects substantially weightier interests.Footnote 36 In any case, the argument of the more important interest plays a subordinate rather than a fundamental role. The (substantially) greater significance of the endangered interest is a prerequisite for a duty to tolerate based on solidarity, but is not a fundamental normative principle.Footnote 37 The cause of justification as a defence in criminal law is not, therefore, an embodiment of the general principle of conflict resolution in law, but a completely irregular institution in the framework of a liberal legal system that authorises a citizen to shift the harm onto someone who is not responsible for it, as an exception and under very strict conditions.

Thus, assuming that conflicts (relevant to criminal law) in which a self-driving car may be involved need to be resolved on the basis of the system of criminal justifications, it is obvious that the algorithms cannot be configured according to the principle of the lesser evil. Rather, the correct legal solution to the problem must be sought in a deontologically oriented theory of justification. The next section deals with this by trying to answer the five dilemmas given above through a principle-based understanding of the theory of justification in criminal law.

5 Crash Algorithms and the Theory of Justification in Criminal Law

The fundamental problem tackled here is not how to assess the criminal liability of the programmer or user when a self-driving car breaks the law, but how to provide car programmers with clear guidelines on conflict resolution. This means determining a "standard of behaviour" for the car, not assessing legal liability when it strays from the regulatory standard. This assumes that it is possible to provide a legally acceptable outcome for any dilemma, without needing to resort to arguments for exoneration in order not to punish whoever takes responsibility for the solution adopted for the dilemma faced by the car.Footnote 38 Although solving any imaginable conflict in practice will always depend on the technology, the technological problem is set aside below in order to provide and substantiate the ideal normative solution for each of the five dilemmas described above.

(1) As the result of a sudden brake failure, a self-driving car can only save the life of its sole occupant (O) by forcing the car onto the pavement where a pedestrian (P) is walking, who will certainly die from the crash.

What is the legally expected conduct of the car? Let us assume that (O), like most prospective passengers in an autonomous car, is unwilling to be sacrificed for (P). To simplify this problem and the other dilemmas discussed here, let us also assume that there is no special relationship between the two potential victims; (O) is not, for example, (P)'s father. The question posed here is one that the theory of justification in criminal law has long been trying to answer: under what conditions may a person in an emergency situation shift the evil threatening him onto another person? There are two possible answers to this question. In the first place, the principle of autonomy (freedom of organisation and responsibility for its consequences), as the parameter governing the entire system of criminal justification, permits an evil threatening one person to be transferred to another person who is fully responsible for having caused the threat (self-defence), or to a person who is partly responsible for it (defensive state of emergency).Footnote 39 However, in our example, it is clear that (P) had nothing to do with the danger threatening (O). Moreover, in accordance with the logic of the autonomy principle, the person threatened with non-attributable harm must bear the injury (casum sentit dominus),Footnote 40 even if it is not deserved. That each person must assume the responsible or accidental widening or narrowing of his options for action is part of the very concept of (external) freedom. Nevertheless, as pointed out above, the principle of casum sentit dominus does not govern our legal system without exception. The principle of personally assuming one's misfortune is, in turn, limited by a principle of solidarity, from which arise, exceptionally, a general duty to assist and the ground for justifying an interference in aggressive necessity. A person in need (or a third party giving aid) can transfer the impending evil to another person who did not cause it and with whom there is no special legal relationship, when it is impossible to avert the danger by turning to the public emergency services and the protected interest substantially outweighs the interests of the other. The irregular nature of solidarity in a liberal legal system is what explains the narrow margin for action permitted in aggressive necessity. Simply belonging to the same legal community as the person in need does not mean that one must tolerate significant life-changing infringements of one's interests, nor interferences in one's sphere that affect the core of personal dignity.

That said, since (P) has nothing to do with the danger in the emergency situation, is not responsible for the mechanical failure, and takes no part in road traffic, it can be concluded from this example that (O) cannot legitimately demand that the hazard threatening him be transferred to (P). His interest does not substantially outweigh (P)'s; rather, the two are equal, as identical legal assets are at stake (human lives), and no relevant differences can be seen in the probability that the danger facing each could result in harm. This being the case, it can be concluded that under no circumstances may a self-driving car be programmed to mount the pavement to save its passenger's life at the cost of the life of a pedestrian who takes absolutely no part in road traffic or in the dangers of the emergency situation, not even when the passenger is, stricto sensu, undeserving of the threatening hazard.Footnote 41 (O)'s default responsibility for bearing the danger is what determines the car's algorithm in this case.
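To make the structure of this rule explicit, the following sketch (in Python) shows one way the justification logic just described might be organised. The responsibility categories, the numeric stand-ins for the interests at stake, and the threshold in substantially_outweighs are my own illustrative assumptions, not part of the legal argument, which treats the comparison as a normative rather than an arithmetical judgement.

```python
from enum import Enum, auto

class Responsibility(Enum):
    """Degree to which the person onto whom harm would be deflected is
    responsible for the danger (the decisive variable under the autonomy principle)."""
    FULL = auto()      # fully responsible aggressor (self-defence)
    PARTIAL = auto()   # partly responsible (defensive state of emergency)
    NONE = auto()      # uninvolved third party (aggressive necessity)

def substantially_outweighs(a: float, b: float, margin: float = 2.0) -> bool:
    # The numeric margin is a stand-in for the normative judgement
    # "substantially weightier"; the paper does not quantify it.
    return a > margin * b

def may_deflect_harm(target: Responsibility,
                     protected_interest: float,
                     injured_interest: float,
                     public_help_available: bool) -> bool:
    """Rough schema of the justification rules summarised in the text."""
    if target is Responsibility.FULL:
        # Self-defence: the threat may be turned on the responsible aggressor.
        return True
    if target is Responsibility.PARTIAL:
        # Defensive necessity: permitted unless the injured interest
        # substantially outweighs the protected one.
        return not substantially_outweighs(injured_interest, protected_interest)
    # Aggressive necessity: only if public emergency services cannot avert the
    # danger and the protected interest substantially outweighs the injured one.
    return (not public_help_available
            and substantially_outweighs(protected_interest, injured_interest))

# Dilemma (1): the pedestrian (P) is uninvolved and both interests are lives of
# equal weight, so the harm may not be deflected onto the pavement.
print(may_deflect_harm(Responsibility.NONE, 1.0, 1.0, public_help_available=False))  # False
```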

(2) A self-driving vehicle is driving at the correct speed along an inner-city road when a group of about 10 young adults starts to cross the road recklessly and rapidly. To avoid an accident, the car swerves toward the hard shoulder. However, moments later, at the point of the hard shoulder where the car is predicted to hit the crash barrier, the sensors pick up an elderly cyclist, who will almost certainly be killed. The passenger of the self-driving car will not be significantly injured under any circumstance.

Unlike the case above, the interests of the passenger are not involved in this next dilemma. Looked at closely, it is a case similar to those discussed under the topos of the conflict of duties in criminal law. The car must swerve from its path so as not to breach the prohibitions that prima facie protect the lives of the young people, but at the same time it must not change direction, or it will breach the prohibition that in this case protects the cyclist.Footnote 42 Given these facts, it is impossible for the liable party to fulfil all the grounds of obligation at stake, since they are mutually exclusive.Footnote 43 Thus, we are faced with a triangular dilemma in which the solution to the conflict must be found in a comparative analysis of the intersubjective relationship among the beneficiaries of the two duties, i.e., the young people and the elderly man, based on the principles of autonomy and solidarity.Footnote 44 This means using these criteria to decide which party has the better right in this specific conflict situation, so that the autonomous car respects the legal status of the one with the better right.

In the first place, it seems possible to state that none of the members of the group of youths can be fully attributed with causing the conflict situation, so we are not confronted by a case normatively equivalent to self-defence. The youths are not putting the cyclist’s life in danger by fully attributable aggression. Neither is it a standard (aggressive) necessity, in which a person under threat tries to deviate the undeserved hazard onto a third party who has absolutely nothing to do with the conflict. However, in this case, the youths are partially responsible for the emergency by having breached (negligently) a traffic regulation and caused the conflict. According to the rule of the defensive state of emergency,Footnote 45 the “younger sister” of self-defence, the person in danger can legitimately transfer the threat to the person responsible for it (in a weak sense), provided this does not protect an essentially lesser interest than the one injured. This means that the youths can only try to prevent the self-driving car from running them over if their interests substantially outweigh that of the cyclist. Is the interest of the elderly man essentially less than that injured in its defence?

Before this question can be answered, it must be decided what is referred to by "interest", and how such interests can be weighed. Broadly speaking, the relevant concept of interest is composed (on both sides of the scales) of three fundamental factors: the legal asset affected (legal interest); the extent of the threatened harm; and the probability of the harm occurring. Although the details of the assessment and weighting cannot be analysed here, it can be said, first, that the legal assets (life, physical integrity, property) can be compared relatively objectively on the basis of the penal framework established by the legislator in the criminal code. Secondly, the extent of the expected harm in the specific case is also an important point, insofar as it can be graded. Although freedom takes precedence over property in the abstract, the interest of the owner of a huge shopping mall on fire can be in concreto superior to the interest in freedom of movement of those briefly deprived of it by the fire-fighters. Third and last, the degree of danger looming over the specific asset is a decisive element in defining the balance of interests. Thus, the interest of someone taken to hospital in a critical state can be more important than the abstract interest in road safety, so that the conduct of a driver who exceeds the speed limit without specifically endangering anyone's life can be justified. Although a mathematical approach to the various factors mentioned would be wishful thinking, their weighting must be made explicit, with an explanation of the weight given to each one in the case in question.
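Purely as an illustration of how these three factors might be recorded when programming such a comparison, the following Python sketch assigns them hypothetical numeric stand-ins. The ranking, the values, and the act of multiplying them are my own assumptions; as just stated, the actual weighting is a reasoned, case-by-case judgement rather than arithmetic.

```python
from dataclasses import dataclass

# Invented abstract ranking of legal assets (higher = weightier in the abstract);
# the paper derives such an ordering from the penalties set by the legislator,
# not from these numbers.
LEGAL_ASSET_RANK = {"life": 3, "freedom_of_movement": 2, "property": 1}

@dataclass
class Interest:
    legal_asset: str    # which legal asset is at stake
    extent: float       # expected severity of harm in the concrete case (0..1)
    probability: float  # likelihood that the harm materialises (0..1)

def crude_weight(i: Interest) -> float:
    """Combine the three factors named in the text into a single number.
    Purely illustrative: the paper expressly calls a mathematical treatment
    wishful thinking, so this stands in for a reasoned, case-by-case judgement."""
    return LEGAL_ASSET_RANK[i.legal_asset] * i.extent * i.probability

# Example from the text: the burning shopping mall (property) versus the brief
# restriction of freedom of movement imposed by the fire-fighters. In concreto
# the property interest can prevail despite its lower abstract rank.
mall = Interest("property", extent=1.0, probability=0.9)
bystanders = Interest("freedom_of_movement", extent=0.1, probability=1.0)
assert crude_weight(mall) > crude_weight(bystanders)
```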

However, the above reflections do not yet answer the key question of the dilemma at hand. This requires that two specific issues be addressed. First, it must be decided whether, as some authors maintain, the different particular interests (or duties) affected should be added together when defining the content of each side of the weighting scales. For a collectivist concept of necessity and of the conflict of duties, it is difficult to find a reason why they should not be added. If it is a matter of maximising the community's interests, the private nature of the interests is irrelevant, and the final solution must be the one that ensures the best net outcome for those interests. However, from the individualist view of necessity and of the conflict between grounds of obligation defended here, the possibility of adding individual interests is not obvious. On the contrary: why should the elderly man's right of defence change according to the number of people who have caused the hazard? In my view, in the framework of a liberal legal system that recognises nothing like a "holistic entity", nor a corresponding "holistic damage", it is impossible to resolve conflicts by constructing macro-interests out of the addition of highly personal assets, whether to be protected or harmed.Footnote 46 The fact that one interest-holder appears alongside another capable of being cumulatively protected does not alter the legal status of the various parties involved in the conflict situation.Footnote 47 This means that the legal situation of the elderly man is unaltered by the number of lives that may be saved. And this is not only the case when human lives are at stake, or when there is a conflict of equal interests, but also in highly personal conflicts of interest of a different type. The interest of a person in danger of drowning cannot go unprotected in law because of the accumulated interest of 1000 people in not suffering jellyfish stings.Footnote 48

In light of this, the weighting of interests is defined by the conflict between two human lives. For the purposes of deciding whether the interest of the cyclist is essentially less than that of each of the youths, the answer to a second and final question is all that is left to be decided: is the life of the elderly cyclist worth less in the assessment and weighting than that of each of the youths involved in the conflict situation? The answer must undoubtedly be “no”. In contrast to certain extreme utilitarian approaches, the unrestricted validity of the legal principle of not being able to assign a weight to a human life must be asserted,Footnote 49 which not only means that the quantitative accumulation of lives does not alter the comparative value of any of them (prohibition of summation), but that there is also no difference in level between two human lives (prohibition of grading or ranking). That means that not only is the life of the ailing elderly man worth the same as that of a recent Nobel prize winner, but that one life is worth exactly the same as 100 lives. In short, the mere qualitative or quantitative difference among human lives does not, under any circumstances, allow a dominance of pertinent interests as a basis for justifying the sacrifice of an innocent person.

Having reached this point, the conflict above can now be resolved. Regardless of which of the 10 lives at stake is put on one side of the scales, the assessment and weighting must necessarily conclude that the cyclist's interest is exactly equal to that of the youths: the legal assets are identical, and the probability of injury is the same in this case. As things stand, since the youths have negligently caused the conflict and their interests do not substantially outweigh the cyclist's, the self-driving car must redirect its path (in a state of defensive emergency) toward those responsible (in the weak sense) for causing the conflict.

(3) The life of the passenger of a self-driving vehicle can only be saved if the car enters the emergency braking lane. Unfortunately, the software immediately finds that two children are playing in the sand in the lane and will certainly be killed if hit.

Unlike the previous case, this one has a bi-dimensional conflict structure, in that the life of the passenger in a self-driving car can only be saved at the expense of the lives of two children. As far as I know, for the prevailing continental doctrine, the solution to this dilemma would not pose any particular problems: assuming that the children are in no way responsible for the conflict and that the conduct saving the life of the vehicle's passenger is active (i.e., altering its path toward the emergency braking lane), the self-driving car may only alter its direction and run over the children (breach of a prohibition) if this protects essentially predominant interests.Footnote 50 That is to say, we would be facing a standard case of aggressive necessity. Given that the interests to be harmed are, at the least, identical to those protected, altering the course of the car cannot be justified in any way.

This way of analysing the case rests on the fundamental premise that the person in need or in danger is solely and exclusively the user of the vehicle. This person is the dominus, i.e., the one who must suffer the misfortune by default and who, in the absence of a special ground justifying the transfer of the harm to a third party, must be the only one to suffer it. But why is the passenger considered to be the dominus in this case? This question has usually been passed over by criminal law doctrine, which focuses on analysing the conditions under which the dominus may actively transfer harm to a third party. Perhaps this lack of interest is explained by the ease with which a plausible answer can usually be given to the question of who is to suffer the harm by default. It is almost beyond doubt that the person destined to tolerate it in the first instance is the hiker who risks freezing to death if he does not pull down a hut, or the shipwrecked person who can only save his own life by snatching the lifebuoy away from another unfortunate. On balance, in criminal doctrine, the dominus appears to be the one judged to be in the most danger by standards of physical and natural probability. This standard attributes the danger on the basis of an analysis of hypothetical causality, governed by judgements of predictability according to the laws of nature. In any case, giving normative status to events that simply happen would be normal procedure in our legal system and essential when there are no other elements with which to assess the attribution or distribution of the misfortune. When the danger to life does not stem from either of the two parties involved in the necessity, and in the absence of an additional decision-making criterion, the factual direction of the danger at the time of its inception ("destination") would be the determining factor in establishing normative primacy among the conflicting claims.Footnote 51

However, attributing the status of dominus on a purely factual basis sometimes leads, on the one hand, to materially unsatisfactory results in the final distribution of the misfortune and, on the other, rules out the possibility of attributing the position of dominus to several persons in cases where this would appear materially justified. This last point becomes perfectly clear in dilemma (4) above, in which a self-driving car has to choose between braking and saving a pedestrian, or not braking and saving a motorcyclist. Here, giving preference to the legal position of the motorcyclist, and accepting that the danger threatens the pedestrian who will be run over if the car does nothing, would be arbitrary, as we shall see below. The first problem, in turn, is patently obvious in the following dilemma:

(A) has lost control of his lorry and is heading downhill directly toward (B)'s SUV while (B) is parking it. Behind (B)'s car, in the middle of the road, two children (C) and (D) are calmly playing ball. When (B) sees (A) careering towards him, although he has noticed the presence of the two children, he stops parking and accelerates rapidly off the road, so that (A) kills the two children. If (A) had hit (B), both would have been seriously injured, but neither would have died. In that case, the children would have been unharmed.

If it is true that the victim of the hazard is the one facing the danger in purely natural terms, there is no doubt that removing the car and transferring the harm to the children is a positive unlawful act. The driver of the SUV would commit double homicide that is unjustifiable according to the rules in a state of aggressive emergency. However, this solution is quite rightly unacceptable to many. There is no reason for the law to always accept natural distribution in every case as the best solution imaginable in defining and specifying privatisation of the harm in conflict situations.Footnote 52 Contrary to how the doctrine usually proceeds, it must be said that attributing the position of dominus is based on an assessment of normatively established attribution.Footnote 53 The status is not, or not only, given because of a judgement on natural predictability, but rather because of a legally recognised freedom to act for all agents involved in the hazard who are able to deflect the harm. As far as concerned here, the fundamental idea is as follows: the position of dominus is only decided once the margin for action has been taken into account for all agents who may, hypothetically, legitimately intervene in the course of events. Only after discounting possible variations as the danger unfolds can the status of dominus be decided. Thus, determining the status can be broken down for analysis into three stages: in the first stage, the law states what the foreseeable injury from the danger will be; next, the legal system introduces into the analysis the types of legal conduct that all the agents close to the hazard can carry out, whether simply due to faculties, or obligatory conduct; only then can the third stage use the analysis to attribute the legal status of dominus to the default agent responsible for the harm.

Evidently, the central issue is to determine why certain conduct that alters the course of a predictable injury is taken into account in law before the position of dominus is decided, thereby affecting the assessment of distribution. It is plausible to say that the position of dominus is decided on the basis of the expected course of the injury, discounting all foreseeable organisational acts that simply follow the legal process in a neutral way, "as planned" or more or less automatically.Footnote 54 Thus, certain active variations in the causal course, insofar as they can be considered foreseeable, ubiquitous or common behaviour, do not constitute a normatively relevant transfer of harm, but are simple acts of exposing a third party to a pre-existing harm, that is to say, a simple, legally neutral intervention in the course of a hazard.Footnote 55 Their neutral nature is easy to establish in strongly regulated spheres, where the conduct of all those potentially involved is determined ex ante and the margin for arbitrary action is reduced to a minimum. Thus, the signalman who, following the pre-established railway contingency plan, switches a train from the line to prevent a head-on collision acts neutrally, even when a disoriented passenger is on the side track and will be killed.Footnote 56 Establishing that an action is legally neutral is clearly more difficult when there are no prior regulatory criteria determining the conduct of such agents. In the example given above, the position of dominus is held by the two children and not by the driver whose presence protects them. Whoever removes his car from the path of a truck with brake failure is not transferring the harm onto a third party in either an aggressive or a defensive necessity, as it does not fall to him to suffer the primary consequences of the harm. The two children are the domini, as they cannot ask the driver to refrain from evasive action that can only be viewed as a standard exercise of the general freedom to act within one's own sphere.Footnote 57 This means that certain organisational actions, which do not entail the creation or shaping of harm to others but a determination of what is to be normatively expected, are guaranteed ab initio. This pre-existing right, based on the free organisation of one's own sphere that merely evades, and does not re-organise, the hazard, explains why the law regards the two children as the domini from the outset.
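The three-stage analysis described above can be laid out schematically as follows; this is only a rough sketch, and the way "neutral variations" are represented (as ready-made functions) is my own simplification, since deciding which conduct counts as legally neutral is precisely the normative question and cannot itself be mechanised.

```python
from typing import Callable, Set

def determine_dominus(initially_facing_harm: Set[str],
                      neutral_variations: list[Callable[[Set[str]], Set[str]]]) -> Set[str]:
    """Three-stage sketch of the normative attribution of dominus status.

    Stage 1: start from those facing the harm on a purely factual prognosis.
    Stage 2: apply every foreseeable, legally neutral variation of the course
             of events (pre-planned contingency measures, ordinary evasive
             action within one's own sphere); these do not count as a transfer of harm.
    Stage 3: whoever still faces the harm after stage 2 bears it by default.
    """
    facing_harm = set(initially_facing_harm)          # stage 1
    for variation in neutral_variations:              # stage 2
        facing_harm = variation(facing_harm)
    return facing_harm                                # stage 3

# Lorry example from the text: (B) is factually in the runaway lorry's path,
# but his lawful evasive manoeuvre is a neutral variation, so the two children
# (C) and (D) playing behind him are the domini from the outset.
def b_drives_away(victims: Set[str]) -> Set[str]:
    return (victims - {"B"}) | {"C", "D"} if "B" in victims else victims

print(determine_dominus({"B"}, [b_drives_away]))   # -> {'C', 'D'}
```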

Going back to the dilemma posed here, it can likewise be stated that the two children hold the position of domini from the outset. Insofar as there is a pre-established plan of action to prevent a necessity from arising,Footnote 58 the factual alteration of the causal course that implements that plan does not constitute a normatively relevant alteration of the hazard, but the predictable realisation of the expectation of safety guaranteed by law. Even though the harm is not initially directed causally at the children, since they are in the exact place where cars with brake failure can predictably come to a halt, they are, from the outset, liable to bear the harm by default in the situation described above. From a normative perspective, the two children are in a situation of necessity.Footnote 59 As things stand in this conflict situation, the self-driving car must turn into the emergency braking lane, even though it foresees killing the two children. The passenger enjoys the better legal position in the conflict situation, which can only be displaced in favour of the children if their interests substantially outweigh his (state of aggressive emergency). Since we are faced with equivalent interests, the children cannot demand that the harm threatening them be transferred to the passenger and are therefore liable to suffer the harm of the necessity.

(4) A self-driving car finds itself with the following dilemma: either it brakes to avoid running over a careless pedestrian who crosses the road suddenly, but the motorcyclist behind who is following too closely will die in the crash against the rear window; or the car does not brake and runs over the pedestrian but saves the life of the motorcyclist behind.

The present dilemma, posed by the German jurist Harro Otto at the start of the 1970s (his car was still manual),Footnote 60 calls into question the general primacy of the duty to omit over the duty to act within the framework of the general theory of the conflict of duties. According to this widespread premise, in a conflict between equivalent duties the duty to omit always takes priority over the duty to act, so that only when the duty to act (mandate) protects an essentially dominant interest can it be fulfilled to the detriment of the duty to omit (prohibition). In that case, the breach of the duty to omit would be justified in accordance with the general rule on the state of aggressive emergency. Confronted with a conflict between mandates and prohibitions protecting equivalent interests, the legal system gives priority to the prohibition, since any alteration of the status quo in question would require a special justification that cannot be found in a clash of identical interests.Footnote 61 Consequently, no surgeon can legally remove the kidney of someone in the waiting room of a hospital, not even if it is essential to save the life of a patient on the operating table; nor can a fat man be pushed into the path of a train in order to prevent hundreds of people from dying.

If "mandates" are defined as duties demanding action from the liable party, and prohibitions as duties requiring omission, the premise just referred to would force the car in our example to run over the pedestrian, since the prohibition on ending the life of the motorcyclist takes priority over the mandate to actively save the pedestrian (by braking). This means that, if the car braked so as not to hit the pedestrian, the person responsible for programming the car, or the user, would have committed an unlawful homicide. That this premise is unacceptable is shown by the multiple attempts of defenders of the general primacy of the duty to omit to reinterpret the prohibition on killing the motorcyclist as though a second duty (to maintain speed) were in operation,Footnote 62 or the duty to brake the car actively as though it were a prohibition.Footnote 63 The aim is to avoid applying the rule of the primacy of the duty to omit and to treat the conflict as one between two mandates or two prohibitions, which in practice means that the liable party enjoys the faculty of deciding which obligation to follow, and the breach of the unfulfilled obligation is justified. In my opinion, however, this course of action is unacceptable, as it skirts around the problem without answering the main question, namely why it is accepted in some cases that someone may save the life of one person to the detriment of another through active conduct. Otto's example highlights the fact that the distinction between mandates and prohibitions is an invalid criterion for resolving conflicts of duties, because stable dogmatic consequences cannot be deduced from the deontological nature of the ground of obligation or from the active or omissive nature of the required conduct.Footnote 64 This is basically because it is impossible to declare that obligations to act are, by definition, normatively ranked lower than obligations to omit. Not only can every mandate be expressed as a prohibition, and vice versa, but it is also impossible to maintain that obligations to omit have some sort of greater axiological-liberal weight than obligations to act. The mother who kills her child by starvation is as criminally responsible as if she had actively strangled him, since both duties are normatively equivalent. Nor is it true, in situations of aggressive emergency, that criminal law recognises a higher rank for the duty to omit over the duty to act. An aggressive situation of emergency does not require the protected interest to predominate substantially because that is the only condition under which a breach of a prohibition is permitted, but because it is a strict condition for relativising the validity of the principle of autonomy for reasons of intersubjective solidarity. The lifeguard, too, who is bound by the duty to actively assist a bather, can only justify a breach of that duty if it is substantially outweighed by another.

However, assuming that, in resolving the conflict, it is irrelevant whether the conduct of the self-driving car consists in acting or omitting, and that, in this specific case, no differences among the obligations should be drawn from their normative type:Footnote 65 is there a weighty normative reason to rank the opposing legal positions? Given the equal responsibility of the two victims for the conflict, the concept of desert does not seem able to settle the problem. And assuming that the interests in question are equal, the aggressive necessity rule cannot resolve the conflict either, since neither party can claim a substantially dominant interest over the other. Lastly, the closure principle of the classic system of causes of justification (casum sentit dominus) is not a good criterion for settling the dilemma either: who must bear the misfortune in this case? That rule was designed to decide so-called "secondary communities of danger", in which a single victim transfers harm to another who had not been involved in the necessity until then. However, the case discussed here has a rather different structure. From a normative perspective, both victims seem to be directly involved in the necessity, thus comprising what is known as a "primary community of danger". Although the cases usually discussed under this label are those in which several interests are all in danger of being lost and one can only be saved by hastening the loss of another,Footnote 66 the truth is that the motorcyclist and the pedestrian, although in physical and natural terms not both bound to die in the necessity, share an identical normative status toward the hazard; or, put another way, both are domini, and neither can claim to be innocent. The motorcyclist's necessity cannot be understood without that of the pedestrian, and vice versa. From a normative viewpoint, they constitute a symmetrical community of danger in which saving the life of one is only possible at the cost of the life of the other. In reality, this is what happens in typical cases of conflict between duties to act, for example, where a father has to choose which child to save when both are in danger of drowning.

However, as the majority view in criminal law doctrine holds,Footnote 67 when two duties to act are in conflict, the self-driving car that has to decide which interest to protect within the framework of a primary community of danger, such as the one described, will always be acting according to law when it fulfils one of the two obligations in question. This means that, just as a father in a conflict situation is permitted to breach the obligation to save one child by saving the other, and in doing so neither transfers the harm unlawfully nor illegitimately plays God, but simply acts in the only way that can reasonably be required of him in such a case,Footnote 68 the self-driving car in the example also has to decide who will suffer the harm threatening both victims, without either outcome constituting a transfer of harm to an innocent third party. Given that neither member of the community of danger can reasonably claim to be blameless for the danger, nor invoke a legally weightier position, the final victim of the conflict, like the child in the classic case of conflict between two duties to act, will have to accept the solution decided by the car as a possible and unavoidable consequence of the unfortunate situation. In short, braking the car to avoid running over the pedestrian is not unlawful homicide, and neither would be not braking and hitting the pedestrian crossing the road in front of the car. In this case, the legal system grants the "obligor" the faculty of settling the conflict as he sees fit, with the law accepting injury to either of the two interests or obligations guaranteed prima facie.

The above still leaves a final question relevant to solving our dilemma. If both possible courses of conduct are legally acceptable, so that the car enjoys something like a "right to choose", it must be defined how a self-driving car is to reach this decision. There are two options to consider. Either the programmer (or the user), by making a subjective decision, pre-determines how such conflicts are to be solved, or the car is programmed to decide through an objective decision procedure (a lottery).Footnote 69 Against this second option stands the view that both the law and the concept of justice are perceived in our society as too serious to be denatured in extreme situations by arbitrary or random decisions. No legal system can consent to one person playing, in the proper sense of the word, with the lives of human beings. However, where no decision-making parameter following the classic models exists, and the alternative of not making a decision at all has been rejected, randomness appears to be the best imaginable solution to otherwise undecidable conflicts involving self-driving cars.Footnote 70 In my opinion, this is for two reasons. First, because objective decision mechanisms constitute a decision-making procedure strongly informed by the principle of equality, so that any rational agent would accept that it could be used in defence of his own interests.Footnote 71 Furthermore, a lottery ensures that the decision among legally equal claims is not made by an agent on the basis of legally or morally unacceptable criteria (xenophobic, racist, sexist, etc.).Footnote 72 It must be noted that, unlike what may happen in classic emergency situations, where the agent forced to decide may lack the time or material opportunity to resolve the conflict by drawing lots, a self-driving car can always decide how to solve the dilemma through an objective decision procedure. Second and last, random conflict resolution in this field is immune to the classic criticism levelled at this mechanism for solving fatal cases of necessity. The fact that the car makes a random decision and automatically acts on it removes any risk of manipulation, both in the choice and in the implementation of the solution, so that the victim is the result of an objective procedure, having enjoyed a chance of being saved in a lottery in which all claims are weighted equally.

In conclusion, in this dilemma, the self-driving car must be programmed to make a purely random decision to resolve the conflict. When faced with identical aims, a lottery is the best mechanism to make an unavoidable decision.
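As a minimal sketch of such an objective decision procedure, assuming Python and its standard secrets module, the lottery could look as follows; the option labels are of course hypothetical.

```python
import secrets

def resolve_equivalent_options(options: list[str]) -> str:
    """Draw lots between legally equivalent courses of action. A cryptographic
    randomness source is used so that neither the programmer nor the user can
    bias or predict the outcome, which is what the argument from equality and
    non-manipulability requires."""
    if not options:
        raise ValueError("nothing to decide")
    return options[secrets.randbelow(len(options))]

# Dilemma (4): braking (saving the pedestrian) and not braking (saving the
# motorcyclist) are treated as legally equivalent, so the choice is drawn by lot.
print(resolve_equivalent_options(["brake", "do_not_brake"]))
```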

(5) A self-driving car in the right-hand lane of a motorway sees that the only way to avoid hitting from behind a motorcyclist, who is not wearing a helmet, is to cross into the second lane and hit another motorcyclist who is wearing a helmet and modern protective gear. If the motorcyclist with no helmet is hit, he will die, while, if the second one is hit, he will only suffer mild injuries. The occupant of the car will be unharmed in either case.

This last dilemma poses few problems for those who propose programming cars according to act utilitarianism: a self-driving car must always hit the better-protected motorcyclist in order to minimise the loss of social utility. A rule utilitarian could reach a different solution. Although, in this particular case, social utility is maximised by hitting the motorcyclist with a helmet and protective gear, programming cars in this way would remove the incentive to protect oneself, which would lead to a net loss of social utility in the long term. The solution to the case based on the principlist view used to tackle conflicts in this paper is more complex. On one hand, in line with the idea of desert and responsibility for one’s own actions (autonomy), the fact that the greater fragility of one of the motorcyclists arises from his breach of a rule of self-protection seems necessarily relevant to resolving the conflict. On the other hand, in line with the idea of solidarity as a fundamental axiological limit to the principle of autonomy, the substantial difference between the harms threatening each motorcyclist cannot be ignored when programming the car.

Be that as it may, we are once again faced with a three-dimensional dilemma in which the self-driving car has to decide which of the two prohibitions in conflict to fulfil. The car is “obliged” not to hit the motorcyclist without a helmet, while at the same time being forbidden from crashing into the one wearing a helmet; fulfilling both obligations at once is impossible. Assuming that the (normative) relationship between the car, as the “obliged” agent, and each of the motorcyclists is identical, the solution to the conflict depends on an analysis of the intersubjective relationship between the two motorcyclists. The first question to be clarified here is whether one of them is responsible for the conflict situation. Although one of the motorcyclists is riding without a helmet in violation of road safety rules, this infringement cannot be regarded as having caused the other’s situation of necessity in the sense required for self-defence or a defensive state of emergency. Moreover, this is really a situation of secondary community of danger in which the motorcyclist without the helmet appears as the dominus, while the second motorcyclist stands entirely apart from the danger. Unlike the previous example, this is not a situation in which both motorcyclists form a normative community of danger, since the existence of two separate lanes implies, to a certain extent, settled rules about the expected course of events, so that an unjustified change of lane would amount to an unacceptable ad hoc redirection of the course of harm or, put another way, an unacceptable breach of the obligation to behave predictably in a situation of necessity.

Nevertheless, the fact that the motorcyclist riding without a helmet can be regarded as the dominus does not mean that he must necessarily die in the situation of necessity. Aggressive necessity generally permits the person prima facie competent for the danger to transfer it to someone not involved in it, provided the protected interest substantially outweighs the one interfered with. This already strict requirement of a substantial difference between the interests, however, becomes even more demanding when the victim is responsible for placing himself in the situation of necessity, whether by breaching rules of conduct or simply by failing to protect himself as required. Anyone who responsibly puts himself in a situation of necessity therefore still has a right to be rescued, albeit a restricted one, since his claim to solidarity from others is reduced in proportion to his responsibility for causing the conflict.Footnote 73 At the risk of over-generalising, it can be said that the victim who has (as a responsible person) caused his own necessity may only transfer the threatening harm to a third party if doing so protects a vastly greater interest than the one damaged; that one interest merely substantially outweighs the other is not sufficient here.Footnote 74 Thus, in the classic example of the mountaineer who sets off on an expedition without suitable equipment, he will be justified in breaking into a mountain hut if doing so saves his life, but not when he merely faces a moderate risk of suffering non-serious physical injury.

Yet is the case concerning us now really one of causing one’s own situation of necessity? Did the motorcyclist without a helmet really cause the necessity? It can be argued that breaching a rule of conduct or a requirement of self-protection only restricts the agent’s right of aggressive necessity where there is a substantial normative link between that breach and the subsequent necessity. For example, the mountaineer who ignores the weather reports and starts climbing the Matterhorn retains a full right to be rescued when, following an accidental fall, he breaks his ankle and finds himself in a situation of necessity.Footnote 75 His unfortunate situation does not derive from his failure to protect himself, so his rights as a victim do not deserve to be reduced. One might likewise think that, in our example, there is no sufficient normative link between the necessity and the motorcyclist’s breach of the obligation to wear a helmet. As I see it, however, our problem is somewhat different. Although the motorcyclist without the helmet does not cause his own necessity, by breaching a rule of conduct he exposes himself to greater injury than he would face in the same situation had he not broken the law. This means that the infringement of the duty of self-protection does not determine the existence of the necessity, but rather the severity of the threatened harm. Faced with this scenario, I believe there are essentially three ways in which the conflict might be resolved.

The first would disregard, in resolving the conflict, the fact that the victim chose to ride unprotected. Given that the interest of the motorcyclist without a helmet substantially outweighs that of the second motorcyclist, the car would have to change lanes and hit the latter, who would hardly suffer any injury. However, this approach cannot explain to the law-abiding motorcyclist why the other rider’s breach of the duty of self-protection should worsen his own normative status. If the first motorcyclist were wearing a helmet, interfering with the interests of the second would be unlawful, since the interests at stake would then be equal (slight injury on either side). The negligent conduct of the motorcyclist without a helmet would thus improve his normative position in the conflict, which the injured motorcyclist could hardly be expected to accept,Footnote 76 especially within the framework of a legal system that adopts the principle of desert as a fundamental ratio in conflict resolution. The second possible solution is to define the interests at stake as though the motorcyclist without the helmet had not breached the obligation of self-protection. On this view the interests would be equal, since, had both motorcyclists been wearing helmets, each would only be in danger of suffering slight injury; the motorcyclist without a helmet could not, therefore, transfer the threatened harm to the one who is properly protected. This way of seeing things, although consistent with the principle of desert, seems to over-restrict the law-abiding motorcyclist’s obligation of solidarity and condemns the negligent motorcyclist to die. If, even in the face of fully attributable aggression (self-defence), a certain margin of solidarity is, quite rightly, recognised in the resolution of the conflict (the ethical and social restrictions on self-defence),Footnote 77 it is plausible to hold that in these cases the victim should also enjoy a certain right of necessity. The third and last solution, in my opinion the most feasible, consists in recognising a limited right of necessity for the negligent motorcyclist, as in standard cases of provocation: transferring the harm to the motorcyclist wearing the helmet is only legitimate when it protects a vastly greater interest than the one harmed.

As things stand in our example, the self-driving car must therefore change lanes and hit the motorcyclist wearing a helmet, as this protects a vastly greater interest (life) than the one damaged (minor, remediable injury to physical integrity). The chance that the negligent motorcyclist may emerge from the conflict unharmed does not, however, seem to me to create any significant incentive to ride unprotected. In the first place, such conduct remains unlawful and punishable. Secondly, the motorcyclist who is the victim of a justified injurious action in a situation of necessity retains a civil claim to compensation against the beneficiary of that action. Thirdly and lastly, as soon as the injury threatening the helmeted motorcyclist is at all significant, such as a bone fracture, the car may not transfer the harm to him, and the motorcyclist without a helmet would perish.
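Again purely for illustration, the following sketch encodes the limited right of necessity defended here for dilemma (5): the car may transfer the harm to the uninvolved, properly protected motorcyclist only when doing so averts a vastly greater harm (death) at the cost of a minor, remediable one. The severity scale, the threshold, and all names are assumptions made for the sake of the example, not a statement of how any real system is built.

```python
# Hypothetical sketch of the "limited right of necessity" rule discussed above,
# applied to a victim who is responsible for his own heightened vulnerability.
# The ordinal severity scale and the threshold are illustrative assumptions.
from enum import IntEnum


class Severity(IntEnum):
    NONE = 0
    MINOR_INJURY = 1      # slight, fully remediable harm
    SERIOUS_INJURY = 2    # e.g. a bone fracture
    FATAL = 3


def may_transfer_harm(harm_avoided: Severity, harm_caused: Severity) -> bool:
    """Permit changing lanes only when the interest protected vastly
    outweighs the one sacrificed: here, only a fatal outcome may be
    averted, and only at the price of minor, remediable injury."""
    return harm_avoided == Severity.FATAL and harm_caused <= Severity.MINOR_INJURY


# Dilemma (5): staying in lane kills the unprotected rider; changing lanes
# causes only minor injury to the protected rider -> the transfer is permitted.
assert may_transfer_harm(Severity.FATAL, Severity.MINOR_INJURY)

# If the protected rider were instead threatened with a fracture, the transfer
# would no longer be justified and the car would have to stay in its lane.
assert not may_transfer_harm(Severity.FATAL, Severity.SERIOUS_INJURY)
```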

6 Conclusion

Programming the crash algorithms of self-driving cars poses a paramount ethical and legal challenge. This paper has defended the position that ethical egoism is not an acceptable benchmark for programming autonomous cars in difficult situations, since a legal system cannot take the mere self-interest of the person implementing the programme as the standard of correct behaviour. Nor is the utilitarian approach convincing. Within the framework of a liberal legal system that recognises humans as free agents bearing rights and duties, maximising the social utility function does not justify harmful interference in a person’s legal sphere; there is no holistic entity whose interests must be maximised, even in a situation of necessity. This paper has therefore turned to the criminal law theory of justification, understood from a deontological perspective, to determine how crash algorithms should be programmed in five of the dilemmas proposed in this discussion. If the criminal law system of grounds of justification does indeed provide the most complete, exhaustive, and well-founded battery of arguments for resolving conflicts in situations of necessity, it is also clear that the doctrine of self-defence, of the (defensive or aggressive) state of emergency, and of the conflict of duties must be the fundamental point of reference in the forthcoming legal discussion of how to program self-driving cars for dilemmatic situations.