
1 Introduction

Engineers are responsible for the technical artefacts they produce. The claim seems straightforward enough, yet how far does this responsibility go? On both a theoretical and a practical level, the issues are entangled.

On a theoretical level, responsibility can be approached in two different ways. The merit-based approach ascribes responsibility to agents on the basis of their actions, focusing on what it means for an agent to be responsible: whoever performs a certain action merits, or deserves, a certain reaction. The consequentialist approach ascribes responsibility to agents so that doing so leads to the desired effects, focusing on when an agent should be held responsible, namely when he or she is in the best position to make those desired effects happen, or to avoid undesired effects (Eshleman 2004). Both views apply to engineers: on the one hand, they are causally responsible for the technical artefacts they produce. This means they can be held morally responsible and be praised or blamed for those artefacts and the effects they produce. On the other hand, they are in a good position to improve aspects of both single technical artefacts and extensive technical systems, so it makes sense to ascribe certain responsibilities to them in advance. Sweden, for example, has introduced a policy for road transport systems in which system designers are designated “ultimately responsible” for traffic safety. This does not mean that responsibility for traffic safety is taken away from individual road users, but rather that the system designers are encouraged to take measures “so that the mistakes and errors of some individuals, regardless of who is considered to be responsible, do not have fatal consequences and that such mistakes and errors will not be committed with the same frequency” (Fahlquist 2006, p. 1118).

On a more practical level, just as the meaning and application of the term “responsibility” have shifted over time (Mitcham 1987), ideas about the responsibilities of engineers have changed as well. A century ago, for example, ethical codes for engineers did not mention responsibility for the welfare of the public. While this has led Mitcham and Von Schomberg (2000) to claim that this responsibility was considered less important than loyalty to the firm and customer, Davis (2001) makes a good case against this interpretation; still, only later was responsibility for the welfare of the public explicitly made of paramount importance. The American National Society of Professional Engineers’ Code of Ethics, for example, states that the engineer should “hold paramount the safety, health and welfare of the public” (NSPE 2007), while the British Royal Academy of Engineering has issued a statement of ethical principles which mentions that engineers “work to enhance the welfare, health and safety of all whilst paying due regard to the environment and the sustainability of resources” (Royal Academy of Engineering 2007). While laudable, these statements have their drawbacks. It is not always clear how these requirements translate to engineering practice, and Broome (1989) has argued that certain risks in engineering are unavoidable.

This chapter will give a merit-based account of responsibility, focusing on who is responsible for a technical artefact, the engineer or the user, and under what circumstances this responsibility is transferred. While the responsibilities of the engineer have increased over time, responsibilities for what the artefact can or cannot do and for the consequences of using the artefact remain “core responsibilities”: these are the responsibilities that will be analysed here.

This does mean that, because of its specific scope, my framework allows an engineer to make a torture device and successfully transfer responsibility for that device to a user. Isn’t this letting the “evil” engineer off too easily? It is important here to keep in mind that not all responsibilities of an engineer are transferrable. Next to the responsibility for specific artefacts, engineers also take on more general, non-transferrable responsibilities when they enter the profession, such as those established in ethical codes. These allow us to say that any engineer who would knowingly and willingly build torture devices would always behave irresponsibly, as this is a violation of the engineers’ responsibility for the safety, health and welfare of the public. Likewise, being responsible for the safety of a product might entail recalling it when tests bring unnoticed defects to light, and responsibility for the environment might translate into offering opportunities for recycling used products. For these aspects, the engineer remains responsible for the artefact during its complete lifecycle.Footnote 1

What are the conditions under which engineers can actually transfer transferrable responsibilities for the artefact to the user, and when does this transfer fail? This chapter will clarify the conditions of responsibility transfer between engineer and user by combining two existing theories into a new theoretical framework. The first is Fischer and Ravizza’s (1998) theory of control over and responsibility for actions. The second is the use plan theory for technical artefacts by Houkes and Vermaas (2004). Fischer and Ravizza are not interested in applying their theories to artefacts and engineers, while Houkes and Vermaas are not concerned with responsibility. I will argue that (the communication of) use plans can account for the transfer of control over an artefact, and can thereby effect a transfer of responsibility for the artefact from the engineer to the user.Footnote 2

Four caveats are in order here. First, it is not the purpose of this chapter to analyse the distribution of responsibility within a specific engineering case. The combined theoretical framework will be its main focus, though I will apply it to a test case to show how it could function in practice.

Second, this chapter is about moral responsibility for artefacts, not legal responsibility or liability. However, assuming that legal responsibility at least overlaps with moral responsibility, the findings of this chapter would be relevant for legal responsibility as well.

The third remark concerns the role of the individual engineer. My account uses a simplified model of engineering where one engineer creates a product for one user. This does not fit well with real practice, where teams of engineers often work together, embedded in institutional frameworks, and users might also be groups or institutions. As my framework is concerned with the transfer of control and responsibility between two parties, however, I will adhere to the simplified model of one engineer/one user for the sake of practicality. My analysis will show on which side the responsibility lies. A theory describing the distribution of responsibility within organizations could then be used to more accurately pinpoint the person(s) responsible.Footnote 3 The test case will show how the theoretical framework works in a more complex situation.

Finally, I mentioned that engineers are “causally responsible for the artefacts they produce”. While this may be trivially true (without the engineer, there is no artefact), engineers may not be responsible for all aspects of the artefacts they create, as they themselves depend on raw materials, machinery with which to build the artefacts, etc., which may all be delivered by third parties.Footnote 4 This, however, does not affect my framework, which only determines whether responsibility lies on the engineering or the user side. If in a certain case responsibility is shown to be on the engineering side, it could well turn out to lie with the supplier: perhaps the raw materials delivered lacked a promised quality. It would be interesting to see if the framework could be adapted to work on different levels, for example to distinguish between the responsibilities of engineer and supplier, but such a project lies outside the scope of this chapter.

I will begin this chapter by constructing the theoretical framework. I will summarize the theory of control and responsibility by Fischer and Ravizza and the use plan theory by Houkes and Vermaas. Next, I will show that two assumptions have to be made to connect both theories, and that a strong case can be made for both. After that, I will apply the framework to a test case: the Abcoude dosing lock.

2 Responsibility and Control

In Responsibility and Control, Fischer and Ravizza aim to “explore and develop systematically the conditions of application of the concept of moral responsibility” (pp. 9–10).Footnote 5 More specifically, they examine responsibility for actions, omissions and the consequences of those actions and omissions. While I will apply their account primarily to cases where actions have negative consequences, I agree with Fischer and Ravizza that responsibility can also elicit praiseworthiness, e.g. when you are responsible for doing good (p. 2). Also, Fischer and Ravizza’s book is primarily situated in the determinism/free will debate; I will extend their theory in another direction by applying their concept of responsibility to artefacts.Footnote 6

Fischer and Ravizza start by examining the two conditions first formulated by AristotleFootnote 7 under which agents have no moral responsibility for what they do. The first condition is ignorance: if you are unaware of what you are doing or what the consequences can be, you are not responsible for those doings and their consequences. Of course, you can be responsible for being ignorant, reckless or failing to investigate what the possible consequences of your actions could be. The second condition is force: you are not responsible when you cannot act freely, or more specifically, when you cannot control your behaviour (p. 13). It is this condition that Fischer and Ravizza focus on when developing their account of responsibility.Footnote 8

It is important to note that Fischer and Ravizza distinguish two kinds of force: resistible force, like threats or coercion, and irresistible force. Only the latter is strong enough to exempt someone from moral responsibility. This can happen either when the mechanism issuing in the action is not “the agent’s own”, for example when a brain implant controls my behaviour, or when I fail to be at least moderately responsive to reasons, for example when I am hypnotized.Footnote 9 If I am “merely” threatened, say, a robber puts a gun to my head and demands my money, neither of these conditions is met: I decide to give him my money because my desire to remain alive makes for a good reason to do so. I thus exercise control over my action of giving the robber my money, and am responsible for that action. Fischer and Ravizza qualify this conclusion by noting that while I am responsible, given the circumstances I will probably not be considered blameworthy.

As control seems to be an instrumental notion for understanding moral responsibility, Fischer and Ravizza set out to examine it. They distinguish two kinds of control over actions: guidance control, which involves an agent’s freely performing an action, and regulative control, which involves the power to exercise guidance control over an action and the power to exercise guidance control over another action instead (p. 31). Usually, the two forms of control cannot be clearly separated. Fischer and Ravizza give an example where they can: that of Sally taking driving lessons in a car with dual controls, one set for her and one for the driving instructor (p. 32). Imagine that the car approaches a right turn. Sally steers the car to the right. In so far as she guides the car, freely performing the action of steering to the right, she can be said to have guidance control over her action. However, if she had turned left, or not turned at all, the instructor would have intervened and steered the car to the right instead. This means Sally has no regulative control, as she could not have caused the car to do anything other than go to the right. Only the instructor has regulative control. This does not mean that Sally is not responsible for her action: she is, because she freely performed it. Furthermore, both bear full responsibility for the consequences: Sally has guidance control over (and is responsible for) the consequences because she has guidance control over her action, and in this case it is reasonable to expect her to know what the consequences of that action will be (p. 121). The driving instructor is responsible for the consequences because he is responsible for the omission of actions: if he does not intervene, that should only be because he approves of Sally’s actions. Since he could have intervened and prevented the consequences, he is also fully responsible for their occurrence.

One more question needs to be answered here: where does responsibility come from? Fischer and Ravizza claim that initially, you have to take responsibility. They view the taking of responsibility as a vital step in the life of a human being. By recognizing yourself as an agent, realizing that the mechanism that issues in actions is “your own”, you take responsibility for those actions (p. 210).Footnote 10 Whoever does not do that will not be recognized as a person, but rather as “a distasteful object or a dangerous (or annoying) animal.” (p. 213). Dennett (1984) uses an engineering metaphor to illustrate the concept of taking responsibility: “I take responsibility for any thing I make and then inflict upon the general public... (…) I have created and unleashed an agent who is myself; if its acts produce harm, the manufacturer is held responsible” (p. 85).

Fischer and Ravizza and Dennett talk about taking responsibility as a human being. Engineers, however, have special responsibilities over and above their basic responsibilities as human beings; this is emphasized by the concept of “role responsibilities”. According to the idea of role responsibilities, we each play many different roles in our society: we are not only humans, but also colleagues, parents, supervisors, customers, etc. Each role is accompanied by specific responsibilities which you have to take and internalize in order to properly fulfill that social role. In this light, “being an engineer” can be seen as adopting a certain social role which requires specific training and commitments.Footnote 11

3 Use Plans

While Fischer and Ravizza give a nice example of how both forms of control work, they take it for granted that the car works as it should and that Sally and the instructor have adequate knowledge of how it works.Footnote 12 Also, there is no mention of the skills needed to drive a car: it seems that guidance control involves at least a basic skill in the action, but this is not made explicit. This may be unproblematic for everyday actions like walking and pressing buttons, but skills are important especially in the operation of complex technical artefacts. Lastly, there is no mention of cases in which the engineer might be (partly) responsible, for example when the car malfunctions. How do engineers transfer knowledge about artefact functions to users? Which skills can they assume, and which should they mention explicitly? To deal with these questions I will now turn to the use plan theory (Houkes and Vermaas 2004; Houkes 2006).

The use plan theory investigates the nature of knowledge of artefact functions. Houkes and Vermaas argue that it is better to speak of knowledge of artefact use, and that this knowledge is different from “classical” declarative or procedural knowledge. They call this knowledge “use know-how”, which has two components: “knowledge that a sequence of actions leads to the realisation of a goal, and the skills needed to take these actions” (Houkes 2006, p. 105). The first component is called knowledge of a use plan.

Artefacts are designed for embedding in use plans (Houkes et al. 2002; Houkes and Vermaas 2004). Moreover, successful design requires the construction and communication of at least one use plan. This does not mean that such an artefact would only have one use plan: if a sequence of actions with that artefact will lead to the realisation of a certain goal, then that sequence is a use plan, whether it is explicated by the engineer or not. What it does mean is that the use plan constructed and communicated by the engineer defines the “standard use” of that artefact (Houkes and Vermaas 2004).

Use plans can be communicated from engineer to user in a variety of ways: textually (via user’s manuals or written instructions), through pictures or icons, hardwired in the design itself, etc. The engineer needs to communicate more information together with the use plan, however. According to Houkes and Vermaas, “in a rational plan, the user believes that the selected objects are available for use – present and in working order – that the physical circumstances afford the use of the object, that auxiliary items are available for use, and that the user herself has the skills necessary for and is physically capable of using the object” (p. 59). If any of these factors might not be a matter of course, the user should be alerted to it. When communicating the use plan of a car with a manual transmission in a country where automatic transmissions are dominant, for example, it should be mentioned that operating this kind of transmission requires a specific skill. That some countries do not allow you to drive a car with a manual transmission when you have taken your licensing test using an automatic transmission is information important for the social and legal context of driving, but it does not have to be communicated together with the use plan, as it is not necessary for “using” the car itself: it is physically perfectly possible to drive a car with a manual transmission without a corresponding license – as long as you possess a minimal skill in operating manual transmissions.
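To make the ingredients of such a communicated use plan concrete, the following is a minimal, purely illustrative sketch in Python; the class and field names are my own shorthand and not part of Houkes and Vermaas’ vocabulary. It bundles the goal and action sequence with the extra information (required skills, auxiliary items, physical circumstances) that, on their account, should accompany the plan whenever it cannot be taken for granted.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UsePlan:
    """Illustrative model of a use plan: a goal plus an ordered
    sequence of actions believed to realise that goal."""
    goal: str
    actions: List[str]
    # Information that should accompany the plan whenever it cannot
    # be assumed to be a matter of course for the user:
    required_skills: List[str] = field(default_factory=list)
    auxiliary_items: List[str] = field(default_factory=list)
    physical_circumstances: List[str] = field(default_factory=list)

# Hypothetical example: driving a manual-transmission car in a country
# where automatic transmissions are dominant, so the skill is worth mentioning.
manual_car_plan = UsePlan(
    goal="drive from A to B",
    actions=["start engine", "engage first gear", "accelerate",
             "shift up", "steer", "brake"],
    required_skills=["operating a manual transmission"],
    auxiliary_items=["fuel"],
    physical_circumstances=["paved road"],
)
```

A user manual, icons on the dashboard, or the design of the controls themselves would then be different channels for conveying the contents of such a structure to the user.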

While reasonable for practical use, Houkes and Vermaas’ requirements can become problematic. After all, there is no clear boundary between what can be regarded as “common knowledge” and what is important or specific enough to mention together with the use plan. To avoid claims of legal responsibility, engineers tend to include too much rather than too little. On the other hand, large manuals full of warning signs may deter rather than invite potential users, and the list of possible conditions influencing the operation of the artefact is endless: at some point, even the thorough engineer has to rely on the “common knowledge” of the user. It might also be argued that the user has a certain (role) responsibility to acquaint herself with the intended use plan of the artefact and actively seek out how it is supposed to work. However, this does not diminish the responsibility of the engineer to communicate the use plan, for she still needs to make the information accessible to the user in some way. In practice, what should be mentioned in a use plan seems to depend on a number of factors, for example the risk involved in improper use of the artefact.

Some knowledge might not be necessary to use an artefact, but might help to enhance its lifespan or efficiency. Knowledge of how efficient your engine is at specific speeds is not necessary to drive a car, but it can help you drive more efficiently, using less fuel, saving money and reducing emissions. This “supererogatory knowledge” does not have to be communicated with any use plan as long as it does not significantly affect the artefact’s functioning. However, in so far as engineers have to hold paramount the safety, health and welfare of the public, they might be said to be responsible for communicating these aspects of artefact use as well.

What the use plan theory does not do is investigate what this transfer of knowledge about artefacts means for the moral responsibility of both engineers and users. This issue will be addressed in the next section.

4 Combining Approaches

Here, I will argue that the theories of Fischer and Ravizza and of Houkes and Vermaas can be combined, and that communication of use plans can transfer responsibility from engineer to user by transferring guidance control over an artefact. For this combination, however, both theories need to be extended. The theory of Fischer and Ravizza deals with control over actions; in order to be relevant for engineers, it should include control over (and thereby, responsibility for) artefacts as well. The use plan theory needs to be extended to show that use plans transfer not only a procedure but also control, so as to enable the transfer of responsibility. In this section, I will show that both extensions follow naturally from the existing theories.

What do we mean by “controlling an artefact”? Fischer and Ravizza seem to assume this can have two different meanings when they state about their example: “Sally controls the car, but she does not have control over the car (or the car’s movements)” (p. 32). Dennett (1984) makes a similar distinction in an example about an airplane: “…The pilot not only strives to control the plane at all times; he also engages in meta-level control planning and activity – taking steps to improve his position for controlling the plane by avoiding circumstances where, he can foresee, he will be forced (given his goals) to thread the needle between some Scylla and Charybdis” (pp. 62–63). I will argue that these two forms of control correspond to the two forms of control over actions.

The first meaning of “controlling an artefact” is really a subclass of control over actions, namely control over actions performed with artefacts. I control my car in so far as I exercise guidance control over the actions I perform with it.

The second meaning of “controlling an artefact” is more similar to exercising regulative control over actions. Artefacts are not passive recipients of human action, but can exhibit behaviour of their own. “Controlling an artefact” can then be seen as ensuring that the behaviour of the artefact does not interfere with the agent’s goals and keeps within certain limits of, for example, safety and sustainability. More specifically, by exercising regulative control in such a way, the agent will not end up in a situation in which the exercise of guidance control becomes impossible. For example, if I am driving a car and see an icy road ahead, on which I know I might go into a skid and lose control over the car, I can take measures to prevent this from happening, such as putting snow chains on the tires. By exercising guidance control over the action of putting snow chains on the tires, I ensure that I will remain able to exercise regulative control over the car. Apparently, both forms of control over artefacts can be reduced to control over actions.Footnote 13

This elaboration enables us to make more explicit what is meant by “being responsible for an artefact”. I will continue my parallel between actions and artefacts here. Fischer and Ravizza distinguish between three main forms of responsibility: for actions, for omissions and for the consequences of those actions and omissions.Footnote 14 I will treat responsibility for artefacts as being of a similarly threefold nature: responsibility for actions performed with artefacts; for omissions of actions with artefacts, which may be either not performing an action with the artefact or not intervening in the behaviour of the artefact; and for the consequences of those actions and omissions.

Now to the second question: do use plans transfer control? To be more precise, I will rewrite this question as: does the communication of a use plan (for an artefact) transfer guidance control (over that artefact) to an agent? If we write the question out with the definitions of use plan and guidance control, we get: does communication of a sequence of actions that leads to the realisation of a goal transfer the ability to freely perform those actions to an agent?

The problematic part of this definition is the word “freely”. It would be bizarre to assume that use plans enable someone to act freely if he could not do so in the first place. To account for this, I will assume that Houkes and Vermaas make the implicit assumption that the agents to whom the use plan is communicated enjoy freedom of action. Indeed, by definition an agent structurally without freedom of action could not be considered an “agent” at all.Footnote 15 This assumption accounts for the most problematic part of the question.

Apart from the freedom condition, there is a second point of attention, namely that communication of an action does not necessarily lead to the physical or mental ability to perform that action.Footnote 16 I might be told how to drive a car, or ride a bike, or play golf, but that in itself is not sufficient. I also need to practice, or in other words, to gain procedural as well as declarative knowledge. This problem is removed by Houkes and Vermaas’ requirement that it should be mentioned whether specific skills or abilities are needed for the execution of a use plan, like the driving skills necessary to operate a car. In short, if freely performing an action, and thereby exercising guidance control over it, might be difficult or require training for an agent, this should be mentioned together with the use plan. The “ability” in the question thus refers not so much to a physical or mental ability as to the ability to perform actions “under a certain description.”Footnote 17 For example, if I know nothing about cars, my actions with them are limited to “turning the wheel” and “pressing the pedal on the right”. Knowledge of the use plan also allows me to intentionally perform the actions of “steering to the right” and “accelerating”.

The theoretical framework thus obtained can be formalized as follows:

An engineer E transfers moral responsibility for an artefact A to a user U if:

(I) E is morally responsible for A.Footnote 18

(II) E successfully communicates at least one rational use plan P for A to U.Footnote 19

(III) P can (under normal conditions) physically be executed with A.

(IV) U is able to execute P.

(V) U has access to A.

Conditions (I)–(III) need to be met by the engineer to enable the transfer of responsibility. Conditions (IV)–(V) need to be met by the user in order to accept that responsibility. In other words, by transferring responsibility to a user, the engineer of an artefact enables the user to take responsibility for that artefact.Footnote 20 The user thus gains a (forward-looking) responsibility for that artefact, so that if her actions or omissions of actions with that artefact cause harm, she can be held (backward-looking) responsible.

I will illustrate this framework with an example. Suppose an engineer E wants to make a car A for user U and also wants to transfer responsibility for this car to user U.Footnote 21 First, E has to take responsibility for A. Within our society she has already (implicitly) done so by becoming an engineer, designing and constructing the car and providing it to U. By accepting her job, her “role in society”, she has accepted her role responsibilities, which include in this case being responsible for the artefacts she designs and constructs. Because of this, E is morally responsible for the car. Condition (I) has been met.

Second, E needs to communicate at least one rational use plan P for the car to U. E has several ways to do this, and she will probably use more than one of them. A user’s manual might highlight specific features of the car, and the car itself is made to “suggest” certain actions: pedals are made to be pushed, the gearstick can only be moved in such directions as to switch to certain gears, and so on. Part of the use plan will probably be communicated via intermediaries such as driving schools, and commercials and advertisements can also alert the user to aspects of the use plan. Because different users might already know different parts of the use plan, some information might be redundant for some users, but this is less problematic than an incomplete communication of the use plan, in which case condition (II) might not be met. Through these different channels, a successful communication of at least one rational use plan P for the car takes place.

Third, it must be physically possible under normal circumstances to execute P with A. What normal circumstances are depends on what the artefact is made for: for a space shuttle, for example, they are quite different than for a car. This condition precludes the transfer of responsibility when the artefact does not function as stated in the use plan, for example when it malfunctions due to a construction failure that could reasonably have been prevented. When constructing the car, E probably performs various tests and safety checks, which ensure that condition (III) has been met, before she gives U access to the car. In so far as these tests and checks are required by law, by skipping them she would risk not only accountability but also liability for defects in the car and the consequences thereof.

There are two more conditions to be met. Condition (IV) states that the user needs to be able to execute P. P assumes certain sensorimotor skills, but its communication should include the mention that minimal driving skills are required to execute P. U is thus warned that she cannot take responsibility for (driving with) the car unless she has the driving skills needed to execute P. Assuming that U is rational and has no overriding reasons to try to use the car unskilled, she will heed this warning and not try to use the car until she has learned how to drive and is thus able to execute P.

Condition (V), finally, states that U has to have access to the car: she should be physically able to get to the car and drive it onto the road in order to become responsible for it.
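The conjunctive structure of conditions (I)–(V) can also be captured in a short sketch. This is merely an illustration of the framework’s logic under my own naming conventions, not part of the chapter’s formal apparatus: each field stands for one condition, and a single failing condition blocks the transfer.

```python
from dataclasses import dataclass

@dataclass
class TransferCase:
    """Hypothetical record of one engineer/user/artefact situation."""
    engineer_responsible: bool      # (I)   E is morally responsible for A
    use_plan_communicated: bool     # (II)  a rational use plan P is communicated to U
    plan_executable: bool           # (III) P can physically be executed with A
    user_able: bool                 # (IV)  U is able to execute P
    user_has_access: bool           # (V)   U has access to A

def responsibility_transferred(case: TransferCase) -> bool:
    """All five conditions must hold for responsibility to pass to the user."""
    return all([
        case.engineer_responsible,
        case.use_plan_communicated,
        case.plan_executable,
        case.user_able,
        case.user_has_access,
    ])

# The idealized car example: every condition is met, so responsibility transfers.
car = TransferCase(True, True, True, True, True)
print(responsibility_transferred(car))  # True

# A variant in which the use plan was never successfully communicated:
# condition (II) fails, so responsibility stays on the engineering side.
uncommunicated = TransferCase(True, False, True, True, True)
print(responsibility_transferred(uncommunicated))  # False
```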

In this section I have argued that control over actions can be extended to control over artefacts as well, and that use plans can transfer control and thereby responsibility. I have illustrated this with an idealized example of how the transfer of responsibility for a car can occur. In the next section I will bring up a real-life case to further test the theoretical framework, clarifying the distribution of responsibility and identifying issues where further research might be necessary.

5 The Abcoude Dosing Lock: A Test Case

Responsibility for traffic safety is a complex issue. Here I want to examine a test case with multiple users, where one of the users is a municipality that has installed a traffic safety system affecting the behaviour of road users. In this test case, control is transferred to the municipality but not to the road users. For them, the use plan does not transfer control; rather, it communicates the fact that road users do not have control over certain actions.

In 2006, the municipality of Abcoude, The Netherlands, installed a “dosing lock” (doseersluis) on one of its exit roads. The goal was to reduce cut-through traffic and thereby increase traffic safety in the village centre. The dosing lock consisted of a narrowing of the road, a traffic light, warning signs that only one car could pass each time the light turned green, and a movable obstacle that blocked the road when the light turned red and sank back into the road when the light turned green. The dosing lock was activated only during rush hours (Abcoude 2006).

The dosing lock was very successful in reducing cut-through traffic. It had an undesirable side effect, however: within half a year, over forty cars had crashed into the obstacle,Footnote 22 leading to leakage of oil and dangerous chemicals, many traffic jams, and drivers bypassing those jams by driving over the bicycle path. The lock was disabled for some time while the municipality took extra measures to alert drivers to the obstacle, which apparently has led to a decrease in the number of accidents.

Who would be responsible, according to the theoretical framework? First, it seems that the engineers have successfully transferred responsibility to the user, the municipality. The engineers have taken responsibility for building the dosing lock (I), and communicated its use plan to the municipality (II), thereby giving the municipality control over the dosing lock – and responsibility for it as well. This use plan could physically be executed with the dosing lock (III). The municipality was able to execute the use plan to realize its goals (IV). It also had access to the dosing lock (V).

Now, the municipality took on the role of “engineer” by having the artefact implemented in a road system used by other users. The municipality thus had a dual role as user (of the dosing lock) and engineer (of the public infrastructure, in this case, the road-with-dosing lock): I will regard their situation as comparable to an engineer who uses existing components to construct a new artefact, commonly called “off-the-shelf engineering”, or as Houkes et al. (2002) put it, “brochure engineering”.

The municipality did construct the road-with-dosing lock for the users, mainly car drivers, of that particular road. Their intention was also to transfer responsibility for the road-with-dosing lock to the road users.

Condition (I) was met in so far as the municipality had taken responsibility for the road-with-dosing lock as part of its role responsibility for maintaining traffic safety. The communicated use plan for the road-with-dosing lock could physically be executed with it (III). The road users had the ability to use it, in this case, to pass the dosing lock safely (IV). Also, the road-with-dosing lock was accessible to them, indeed, it was the logical choice for reaching some particular destinations (V).

The main complaint of the road users concerned condition (II): they felt that the municipality had failed to communicate certain aspects of the (rational) use plan to them. One particular problem seemed to be that the use plan for the traffic light of the dosing lock differed from that of regular traffic lights. Regular traffic lights leave regulative control to the user: if you are willing to risk the fine, you can choose to drive through a red light. In this case, however, road users mistakenly thought they still had regulative control and could either exercise guidance control over the action of stopping and waiting, or exercise guidance control over the action of driving on. The obstacle prevented users from exercising guidance control over the latter action. As the road users claimed they did not realize that, they held the municipality responsible for the results of their actions, in particular the damage done to their cars.Footnote 23 The municipality did not agree, but it did shut the lock down for some time in order to increase the salience of the obstacle through several measures.Footnote 24 In other words: it sought to better communicate a rational use plan for the road-with-dosing lock to the road users.

In this test case, the framework has shown that the fault lay not so much with the engineers who designed the dosing lock as such, but with the municipality, who placed it in a context by designing the road-with-dosing lock; this makes clear at which level the success of the transfer of responsibility was in question. In particular, the framework has given a possible reason why the traffic light of the dosing lock was not as effective as it should have been, namely that its use plan differed from that of “ordinary” traffic lights, indicating that special attention was needed for communicating the differences.

6 Conclusion

In this chapter I have argued that certain responsibilities of engineers for artefacts can be transferred to users. I have also shown how this transfer can take place. I have done this by first summarizing the relevant parts of the theory of responsibility and control by Fischer and Ravizza and the use plan theory by Houkes and Vermaas. I have then shown how both approaches could be combined to support my thesis. I have given an example of how the framework functions and demonstrated its practical merit in helping to indicate the responsible party in the Abcoude dosing lock case.

At the beginning of this chapter, I mentioned the two “Aristotelian conditions” which exempt an agent from responsibility: being ignorant and being forced. Fischer and Ravizza started out working on the second, but the use plan approach seems to have brought us to the first. After all, the communication of a use plan is a transfer of knowledge, relieving the agent of ignorance about an artefact. Is this shift strange? Not if we notice the overlap between the force and ignorance conditions: if an agent has no clue about how an artefact works, he is “forced” to submit to its behaviour just as much as he is “forced” to submit to the laws of nature. However, where knowledge about the laws of nature does not give an agent the ability to control them, knowledge about the workings of an artefact can give the agent control over that artefact, and thereby, responsibility for it.