
Design for Human-Automation and Human-Autonomous Systems

Chapter in the Springer Handbook of Automation

Part of the book series: Springer Handbooks (SHB)

Abstract

Designers frequently look toward automation as a way to increase system efficiency and safety by reducing human involvement. This approach can disappoint because the contribution of people often becomes more, not less, important as automation becomes more powerful and prevalent. More powerful automation demands greater attention to its design, supervisory responsibilities, system maintenance, software upgrades, and automation coordination. Developing automation without consideration of the human operator can lead to new and more catastrophic failures. For automation to fulfill its promise, designers must avoid a technology-centered approach, which often yields strong but silent forms of automation, and instead adopt an approach that considers the joint operator-automation system and yields more collaborative, communicative forms of automation. Automation-related problems arise because introducing automation changes the type and extent of feedback that operators receive, as well as the nature and structure of their tasks. Operators’ behavioral, cognitive, and emotional responses to these changes can also leave the system vulnerable to failure. No single approach can address all of these challenges because automation is a heterogeneous technology: there are many types and forms of automation, and each poses different design challenges. This chapter describes how different types of automation place different demands on operators. It also presents strategies that can help designers achieve the promised benefits of automation. The chapter concludes with future challenges in automation design and human interaction with increasingly autonomous systems.
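The abstract's point that automation is heterogeneous is often made concrete with stage-and-level taxonomies, such as the widely cited Parasuraman-Sheridan-Wickens framework, which characterizes an automated function by the information-processing stage it supports and its level of autonomy at that stage. The sketch below is purely illustrative and not taken from the chapter; the class names, the level thresholds, and the demand labels are my own assumptions, chosen only to show how stage and level jointly shape what is asked of the operator.

```python
# Illustrative sketch (assumptions, not the chapter's model): automation
# described by information-processing stage and level of autonomy, with a
# rough mapping from (stage, level) to the demand placed on the operator.
from dataclasses import dataclass
from enum import Enum


class Stage(Enum):
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_SELECTION = 3
    ACTION_IMPLEMENTATION = 4


@dataclass
class AutomationFunction:
    name: str
    stage: Stage
    level: int  # 1 = fully manual ... 10 = fully autonomous (assumed scale)

    def demands_on_operator(self) -> str:
        # Higher autonomy at later stages shifts the operator away from
        # active control toward monitoring and failure detection -- the
        # out-of-the-loop problem the chapter discusses.
        if self.level >= 7 and self.stage in (
            Stage.DECISION_SELECTION,
            Stage.ACTION_IMPLEMENTATION,
        ):
            return "monitor and intervene"
        if self.level >= 4:
            return "supervise and confirm"
        return "perform manually with support"


# Example: highly autonomous control of vehicle speed and headway leaves
# the driver in a monitoring role rather than an active-control role.
acc = AutomationFunction(
    "adaptive cruise control", Stage.ACTION_IMPLEMENTATION, level=7
)
print(acc.demands_on_operator())  # -> monitor and intervene
```

The design implication follows directly from the mapping: the same nominal level of autonomy places different demands on the operator depending on which stage it automates, so no single interface or allocation strategy fits all forms of automation.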


References

  1. Grabowski, M.R., Hendrick, H.: How low can we go?: validation and verification of a decision support system for safe shipboard manning. IEEE Trans. Eng. Manag. 40(1), 41–53 (1993)

    Google Scholar 

  2. Nagel, D.C., Nagle, D.C.: Human error in aviation operations. In: Wiener, E.L., Nagle, D.C. (eds.) Human Factors in Aviation, pp. 263–303. Academic, New York (1988)

    Google Scholar 

  3. Singh, D.T., Singh, P.P.: Aiding DSS users in the use of complex OR models. Ann. Oper. Res. 72, 5–27 (1997)

    MATH  Google Scholar 

  4. Lutzhoft, M.H., Dekker, S.W.: On your watch: automation on the bridge. J. Navig. 55(1), 83–96 (2002)

    Google Scholar 

  5. NTSB: Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, MA, 10 June 1995 (NTSB/MAR-97/01). Washington, DC (1997)

    Google Scholar 

  6. Ransbotham, S., Khodabandeh, S., Kiron, D., Candelon, F., Chu, M., LaFountain, B.: Expanding AI’s Impact with Organizational Learning. Available: (https://sloanreview.mit.edu/projects/expanding-ais-impact-with-organizational-learning/)(2020). Accessed 18 Jan 2021

  7. Woods, D.D.: Decomposing automation: apparent simplicity, real complexity. In: Automation and Human Performance: Theory and Applications, pp. 3–17. Erlbaum, Mahwah (1996)

    Google Scholar 

  8. Fisher, D.L., Horrey, W.J., Lee, J.D., Regan, M.A.: Handbook of Human Factors for Automated, Connected, and Intelligent Vehicles. CRC Press, Boca Raton (2020)

    Google Scholar 

  9. O’Neil, K.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, New York (2016)

    MATH  Google Scholar 

  10. Pearce, M., Mutlu, B., Shah, J., Radwin, R.: Optimizing Makespan and ergonomics in integrating collaborative robots into manufacturing processes. IEEE Trans. Autom. Sci. Eng. 15(4), 1772–1784 (2018). https://doi.org/10.1109/TASE.2018.2789820

    Article  Google Scholar 

  11. Anam, K., Al-Jumaily, A.A.: Active exoskeleton control systems: state of the art. Procedia Eng. 41, 988–994 (2012). https://doi.org/10.1016/j.proeng.2012.07.273

    Article  Google Scholar 

  12. McFarland, D.J., Wolpaw, J.R.: Brain-computer interface operation of robotic and prosthetic devices. Computer. 41(10), 52–56 (2008). https://doi.org/10.1109/MC.2008.409

    Article  Google Scholar 

  13. van Krevelen, D.W.F., Poelman, R.: A survey of augmented reality technologies, applications and limitations. Int. J. Virtual Real. 9(2), 1–20 (2010). https://doi.org/10.1155/2011/721827

    Article  Google Scholar 

  14. Sawyer, B.D., Miller, D.B., Canham, M., Karwowksi, W.: Human factors and ergonomics in design of A3: automation, autonomy, and artificial intelligence. In: Handbook of Human Factors and Ergonomic. Wiley, New York (2021)

    Google Scholar 

  15. Bergasa, L.M., Nuevo, J., Sotelo, M.A., Barea, R., Lopez, M.E.: Real-time system for monitoring driver vigilance. IEEE Trans. Intell. Transp. Syst. 7(1), 63–77 (2006)

    Google Scholar 

  16. Ghazizadeh, M., Lee, J.D.: Modeling driver acceptance: from feedback to monitoring and mentoring. In: Regan, M.A., Horberry, T., Stevens, A. (eds.) Driver Acceptance of New Technology: Theory, Measurement and Optimisation, pp. 51–72. CRC Press, Boca Raton (2013)

    Google Scholar 

  17. Diedenhofen, B., Musch, J.: PageFocus: using paradata to detect and prevent cheating on online achievement tests. Behav. Res. 49(4), 1444–1459 (2017). https://doi.org/10.3758/s13428-016-0800-7

    Article  Google Scholar 

  18. Woods, D.D.: The risks of autonomy: Doyles Catch. J. Cogn. Eng. Decis. Mak. 10(2), 131–133 (2016). https://doi.org/10.1177/1555343416653562

    Article  MathSciNet  Google Scholar 

  19. DSB: The Role of Autonomy in DoD Systems. Department of Defense, Defense Science Board (2012)

    Google Scholar 

  20. Chiou, E.K., Lee, E.K.: Trusting automation: Designing for responsivity and resilience. Hum. Factors, 00187208211009995 (2021).

    Google Scholar 

  21. Russell, S.: Human Compatible: AI and the Problem of Control. Penguin, New York (2019)

    Google Scholar 

  22. Wiener, N.: The Human Use of Human Beings: Cybernetics and Society. Eyre and Spottiswoode, London. Available: http://csi-india.org.in/document_library/Gopal_The%20Human%20Use%20of%20Human%20Beings14f0.pdf (1954). Accessed 31 Jan 2021

  23. McFadden, S., Vimalachandran, A., Blackmore, E.: Factors affecting performance on a target monitoring task employing an automatic tracker. Ergonomics. 47(3), 257–280 (2003)

    Google Scholar 

  24. Wickens, C.D., Kessel, C.: Failure detection in dynamic systems. In: Human Detection and Diagnosis of System Failures, pp. 155–169. Springer US, Boston (1981)

    Google Scholar 

  25. Zuboff, S.: In the Age of Smart Machines: the Future of Work, Technology and Power. Basic Books, New York (1988)

    Google Scholar 

  26. Endsley, M.R., Kiris, E.O.: The out-of-the-loop performance problem and level of control in automation. Hum. Factors. 37(2), 381–394 (1995). https://doi.org/10.1518/001872095779064555

    Article  Google Scholar 

  27. Bhana, H.: Trust but verify. AeroSafety World, 5(5), 13–14 (2010)

    Google Scholar 

  28. Billings, C.E.: Aviation Automation: The Search for a Human-Centered Approach. Erlbaum, Mahwah (1997)

    Google Scholar 

  29. Alsaid, A., Lee, J.D., Price, M.A.: Moving into the loop: an investigation of drivers’ steering behavior in highly automated vehicles. Hum. Factors. 62(4), 671–683 (2019)

    Google Scholar 

  30. Merat, N., Seppelt, B., Louw, T., Engstrom, J., Lee, J.D., Johansson, E., Green, C.A., Katazaki, S., Monk, C., Itoh, M., McGehee, D., Sunda, T., Kiyozumi, U., Victor, T., Schieben, A., Andreas, K.: The ‘Out-of-the-Loop’ concept in automated driving: proposed definition, measures and implications. Cogn. Tech. Work. 21(1), 87–98 (2019). https://doi.org/10.1007/s10111-018-0525-8

    Article  Google Scholar 

  31. Sarter, N.B., Woods, D.D., Billings, C.E.: Automation surprises. In: Salvendy, G. (ed.) Handbook of Human Factors and Ergonomics, 2nd edn, pp. 1926–1943. Wiley, New York (1997)

    Google Scholar 

  32. NTSB: Marine accident report – grounding of the U.S. Tankship Exxon Valdez on Bligh Reef, Prince William Sound, Valdez, 24 Mar 1989. NTSB, Washington, DC (1997)

    Google Scholar 

  33. Lee, J.D., Sanquist, T.F.: Augmenting the operator function model with cognitive operations: assessing the cognitive demands of technological innovation in ship navigation. IEEE Trans. Syst. Man Cybern. Syst. Hum. 30(3), 273–285 (2000)

    Google Scholar 

  34. Wiener, E.L.: Human Factors of Advanced Technology (‘Glass Cockpit’) Transport Aircraft. NASA Ames Research Center, NASA Contractor Report 177528 (1989)

    Google Scholar 

  35. Bainbridge, L.: Ironies of automation. Automatica. 19(6), 775–779 (1983). https://doi.org/10.1016/0005-1098(83)90046-8

    Article  Google Scholar 

  36. Cook, R.I., Woods, D.D., McColligan, E., Howie, M.B.: Cognitive consequences of ‘clumsy’ automation on high workload, high consequence human performance. In: SOAR 90, Space Operations, Applications and Research Symposium, NASA Johnson Space Center (1990)

    Google Scholar 

  37. Johannesen, L., Woods, D.D., Potter, S.S., Holloway, M.: Human Interaction with Intelligent Systems: an Overview and Bibliography. The Ohio State University, Columbus (1991)

    Google Scholar 

  38. Lee, J.D., Morgan, J.: Identifying clumsy automation at the macro level: development of a tool to estimate ship staffing requirements. In: Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting, Santa Monica, vol. 2, pp. 878–882 (1994)

    Google Scholar 

  39. Smith, P.J., Layton, C., McCoy, C.E.: Brittleness in the design of cooperative problem-solving systems: the effects on user performance. IEEE Trans. Syst. Man Cybern. 27(3), 360–371 (1997)

    Google Scholar 

  40. Hutchins, E.L.: Cognition in the Wild. The MIT Press, Cambridge, MA (1995)

    Google Scholar 

  41. Gao, J., Lee, J.D., Zhang, Y.: A dynamic model of interaction between reliance on automation and cooperation in multi-operator multi-automation situations. Int. J. Ind. Ergon. 36(5), 511–526 (2006)

    Google Scholar 

  42. Casner, S.M., Geven, R.W., Recker, M.P., Schooler, J.W.: The retention of manual flying skills in the automated cockpit. Hum. Factors. 56(8), 1506–1516 (2014). https://doi.org/10.1177/0018720814535628

    Article  Google Scholar 

  43. Kirwan, B.: The role of the controller in the accelerating industry of air traffic management. Saf. Sci. 37(2–3), 151–185 (2001)

    Google Scholar 

  44. Parasuraman, R., Molloy, R., Singh, I.L.: Performance consequences of automation-induced ‘complacency’. Int. J. Aviat. Psychol. 3(1), 1–23 (1993). https://doi.org/10.1207/s15327108ijap0301_1

    Article  Google Scholar 

  45. Parasuraman, R., Mouloua, M., Molloy, R.: Monitoring automation failures in human-machine systems. In: Mouloua, M., Parasuraman, R. (eds.) Human Performance in Automated Systems: Current Research and Trends, pp. 45–49. Lawrence Erlbaum Associates, Hillsdale (1994)

    Google Scholar 

  46. Meyer, J.: Effects of warning validity and proximity on responses to warnings. Hum. Factors. 43(4), 563–572 (2001)

    Google Scholar 

  47. Parasuraman, R., Riley, V.A.: Humans and automation: use, misuse, disuse, abuse. Hum. Factors. 39(2), 230–253 (1997)

    Google Scholar 

  48. Dzindolet, M.T., Pierce, L.G., Beck, H.P., Dawe, L.A., Anderson, B.W.: Predicting misuse and disuse of combat identification systems. Mil. Psychol. 13(3), 147–164 (2001)

    Google Scholar 

  49. Lee, J.D., Moray, N.: Trust, self-confidence, and operators’ adaptation to automation. Int. J. Hum.-Comput. Stud. 40(1), 153–184 (1994). https://doi.org/10.1006/ijhc.1994.1007

    Article  Google Scholar 

  50. Lee, J.D., See, K.A.: Trust in automation: designing for appropriate reliance. Hum. Factors. 46(1), 50–80 (2004)

    Google Scholar 

  51. Reeves, B., Nass, C.: The Media Equation: how People Treat Computers, Television, and New Media like Real People and Places. Cambridge University Press, New York (1996)

    Google Scholar 

  52. Sheridan, T.B., Ferrell, W.R.: Man-Machine Systems: Information, Control, and Decision Models of Human Performance. MIT Press, Cambridge, MA (1974)

    Google Scholar 

  53. Sheridan, T.B., Hennessy, R.T.: Research and Modeling of Supervisory Control Behavior. National Academy Press, Washington, DC (1984)

    Google Scholar 

  54. Deutsch, M.: The effect of motivational orientation upon trust and suspicion. Hum. Relat. 13, 123–139 (1960)

    Google Scholar 

  55. Deutsch, M.: Trust and suspicion. Confl. Resolut. III(4), 265–279 (1969)

    Google Scholar 

  56. Rempel, J.K., Holmes, J.G., Zanna, M.P.: Trust in close relationships. J. Pers. Soc. Psychol. 49(1), 95–112 (1985)

    Google Scholar 

  57. Ross, W.H., LaCroix, J.: Multiple meanings of trust in negotiation theory and research: a literature review and integrative model. Int. J. Confl. Manag. 7(4), 314–360 (1996)

    Google Scholar 

  58. Rotter, J.B.: A new scale for the measurement of interpersonal trust. J. Pers. 35(4), 651–665 (1967)

    Google Scholar 

  59. Lee, J.D., Moray, N.: Trust, control strategies and allocation of function in human-machine systems. Ergonomics. 35(10), 1243–1270 (1992). https://doi.org/10.1080/00140139208967392

    Article  Google Scholar 

  60. Lewandowsky, S., Mundy, M., Tan, G.P.A.: The dynamics of trust: comparing humans to automation. J. Exp. Psychol. Appl. 6(2), 104–123 (2000)

    Google Scholar 

  61. Muir, B.M., Moray, N.: Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics. 39(3), 429–460 (1996)

    Google Scholar 

  62. de Vries, P., Midden, C., Bouwhuis, D.: The effects of errors on system trust, self-confidence, and the allocation of control in route planning. Int. J. Hum.-Comput. Stud. 58(6), 719–735 (2003). https://doi.org/10.1016/S1071-5819(03)00039-9

    Article  Google Scholar 

  63. Gefen, D., Karahanna, E., Straub, D.W.: Trust and TAM in online shopping: an integrated model. MIS Q. 27(1), 51–90 (2003)

    Google Scholar 

  64. Kim, J., Moon, J.Y.: Designing towards emotional usability in customer interfaces – trustworthiness of cyber-banking system interfaces. Interact. Comput. 10(1), 1–29 (1998)

    Google Scholar 

  65. Wang, Y.D., Emurian, H.H.: An overview of online trust: concepts, elements, and implications. Comput. Hum. Behav. 21(1), 105–125 (2005)

    Google Scholar 

  66. Sheridan, T.B.: Supervisory control. In: Salvendy, G. (ed.) Handbook of Human Factors, pp. 1243–1268. Wiley, New York (1987)

    Google Scholar 

  67. Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control. The MIT Press, Cambridge, MA (1992)

    Google Scholar 

  68. Eprath, A.R., Curry, R.E., Ephrath, A.R., Curry, R.E.: Detection of pilots of system failures during instrument landings. IEEE Trans. Syst. Man Cybern. 7(12), 841–848 (1977)

    Google Scholar 

  69. Gibson, J.J.: Observations on active touch. Psychol. Rev. 69(6), 477–491 (1962)

    Google Scholar 

  70. Jagacinski, R.J., Flach, J.M.: Control Theory for Humans: Quantitative Approaches to Modeling Performance. Lawrence Erlbaum Associates, Mahwah (2003)

    Google Scholar 

  71. Bainbridge, L.: Mathematical equations of processing routines. In: Rasmussen, J., Rouse, W.B. (eds.) Human Detection and Diagnosis of System Failures, pp. 259–286. Plenum Press, New York (1981)

    Google Scholar 

  72. Moray, N.: Human factors in process control. In: Salvendy, G. (ed.) The Handbook of Human Factors and Ergonomics, 2nd edn. Wiley, New York (1997)

    Google Scholar 

  73. Evans, L.: Traffic Safety. Science Serving Society, Bloomfield Hills/Michigan (2004)

    Google Scholar 

  74. Wilde, G.J.S.: Risk homeostasis theory and traffic accidents: propositions, deductions and discussion of dissension in recent reactions. Ergonomics. 31(4), 441–468 (1988)

    Google Scholar 

  75. Wilde, G.J.S.: Accident countermeasures and behavioral compensation: the position of risk homeostasis theory. J. Occup. Accid. 10(4), 267–292 (1989)

    Google Scholar 

  76. Perrow, C.: Normal Accidents. Basic Books, New York (1984)

    Google Scholar 

  77. Tenner, E.: Why Things Bite Back: Technology and the Revenge of Unanticipated Consequences. Knopf, New York (1996)

    Google Scholar 

  78. Sagberg, F., Fosser, S., Saetermo, I.a.F., Sktermo, F.: An investigation of behavioural adaptation to airbags and antilock brakes among taxi drivers. Accid. Anal. Prev. 29(3), 293–302 (1997)

    Google Scholar 

  79. Stanton, N.A., Pinto, M.: Behavioural compensation by drivers of a simulator when using a vision enhancement system. Ergonomics. 43(9), 1359–1370 (2000)

    Google Scholar 

  80. Mosier, K.L., Skitka, L.J., Heers, S., Burdick, M.: Automation bias: decision making and performance in high-tech cockpits. Int. J. Aviat. Psychol. 8(1), 47–63 (1998)

    Google Scholar 

  81. Skitka, L.J., Mosier, K.L., Burdick, M.D.: Accountability and automation bias. Int. J. Hum.-Comput. Stud. 52(4), 701–717 (2000)

    Google Scholar 

  82. Skitka, L.J., Mosier, K.L., Burdick, M.: Does automation bias decision-making? Int. J. Hum.-Comput. Stud. 51(5), 991–1006 (1999)

    Google Scholar 

  83. Sheridan, T.B.: Humans and Automation. Wiley, New York (2002)

    Google Scholar 

  84. Vicente, K.J.: Cognitive Work Analysis: toward Safe, Productive, and Healthy Computer-Based Work. Lawrence Erlbaum Associates, Mahwah/London (1999)

    Google Scholar 

  85. Lee, J.D.: Human factors and ergonomics in automation design. In: Salvendy, G. (ed.) Handbook of Human Factors and Ergonomics, pp. 1570–1596. Wiley, Hoboken (2006)

    Google Scholar 

  86. Lee, J.D., Sanquist, T.F.: Maritime automation. In: Parasuraman, R., Mouloua, M. (eds.) Automation and Human Performance: Theory and Applications, pp. 365–384. Lawrence Erlbaum Associates, Mahwah (1996)

    Google Scholar 

  87. Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Syst. Hum. 30(3), 286–297 (2000)

    Google Scholar 

  88. Dzindolet, M.T., Pierce, L.G., Beck, H.P., Dawe, L.A.: The perceived utility of human and automated aids in a visual detection task. Hum. Factors. 44(1), 79–94 (2002)

    Google Scholar 

  89. Yeh, M., Wickens, C.D.: Display signaling in augmented reality: effects of cue reliability and image realism on attention allocation and trust calibration. Hum. Factors. 43, 355–365 (2001)

    Google Scholar 

  90. Bliss, J.P.: Alarm reaction patterns by pilots as a function of reaction modality. Int. J. Aviat. Psychol. 7(1), 1–14 (1997)

    Google Scholar 

  91. Bliss, J.P., Acton, S.A.: Alarm mistrust in automobiles: how collision alarm reliability affects driving. Appl. Ergon. 34(6), 499–509 (2003). https://doi.org/10.1016/j.apergo.2003.07.003

    Article  Google Scholar 

  92. Guerlain, S.A., Smith, P., Obradovich, J., Rudmann, S., Strohm, P., Smith, J., Svirbely, J.: Dealing with brittleness in the design of expert systems for immunohematology. Immunohematology. 12(3), 101–107 (1996)

    Google Scholar 

  93. Sarter, N.B., Woods, D.D.: Decomposing automation: autonomy, authority, observability and perceived animacy. In: Mouloua, M., Parasuraman, R. (eds.) Human Performance in Automated Systems: Current Research and Trends, pp. 22–27. Lawrence Erlbaum Associates, Hillsdale (1994)

    Google Scholar 

  94. Olson, W.A., Sarter, N.B.: Automation management strategies: pilot preferences and operational experiences. Int. J. Aviat. Psychol. 10(4), 327–341 (2000). https://doi.org/10.1207/S15327108IJAP1004_2

    Article  Google Scholar 

  95. Sarter, N.B., Woods, D.D.: Team play with a powerful and independent agent: operational experiences and automation surprises on the Airbus A-320. Hum. Factors. 39(3), 390–402 (1997)

    Google Scholar 

  96. Sarter, N.B., Woods, D.D.: Team play with a powerful and independent agent: a full-mission simulation study. Hum. Factors. 42(3), 390–402 (2000)

    Google Scholar 

  97. Lewis, M.: Designing for human-agent interaction. AI Mag. 19(2), 67–78 (1998)

    Google Scholar 

  98. Jones, P.M., Jacobs, J.L.: Cooperative problem solving in human-machine systems: theory, models, and intelligent associate systems. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 30(4), 397–407 (2000)

    Google Scholar 

  99. Bocionek, S.R.: Agent systems that negotiate and learn. Int. J. Hum.-Comput. Stud. 42(3), 265–288 (1995)

    Google Scholar 

  100. Sarter, N.B.: The need for multisensory interfaces in support of effective attention allocation in highly dynamic event-driven domains: the case of cockpit automation. Int. J. Aviat. Psychol. 10(3), 231–245 (2000)

    Google Scholar 

  101. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019). https://doi.org/10.1016/j.artint.2018.07.007

    Article  MathSciNet  MATH  Google Scholar 

  102. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x

    Article  Google Scholar 

  103. Inagaki, T.: Automation and the cost of authority. Int. J. Ind. Ergon. 31(3), 169–174 (2003)

    Google Scholar 

  104. Moray, N., Inagaki, T., Itoh, M.: Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. J. Exp. Psychol. Appl. 6(1), 44–58 (2000)

    Google Scholar 

  105. Endsley, M.R.: Autonomous driving systems: a preliminary naturalistic study of the Tesla Model S. J. Cogn. Eng. Decis. Mak. 11(3), 225–238 (2017). https://doi.org/10.1177/1555343417695197

    Article  Google Scholar 

  106. NTSB: Collision Between a Car Operating with Automated Vehicle Control Systems and a Tractor-Semitrailer Truck Near Williston, Florida. Accident Report NYST/HAR-17/-2 (2017)

    Google Scholar 

  107. Liang, C.Y., Peng, H.: Optimal adaptive cruise control with guaranteed string stability. Veh. Syst. Dyn. 32(4–5), 313–330 (1999)

    Google Scholar 

  108. Liang, C.Y., Peng, H.: String stability analysis of adaptive cruise controlled vehicles. JSME Int. J. Ser. C-Mech. Syst. Mach. Elem. Manuf. 43(3), 671–677 (2000)

    Google Scholar 

  109. Lee, J.D., Gao, J.: Trust, information technology, and cooperation in supply chains. Supply Chain Forum: Int. J. 6(2), 82–89 (2005)

    Google Scholar 

  110. Gao, J., Lee, J.D.: Information sharing, trust, and reliance – a dynamic model of multi-operator multi-automation interaction. In: Proceedings of the 5th Conference on Human Performance, Situation Awareness and Automation Technology, Mahwah, vol. 2, pp. 34–39 (2004)

    Google Scholar 

  111. Hollan, J., Hutchins, E.L., Kirsh, D.: Distributed cognition: toward a new foundation for human-computer interaction research. ACM Trans. Comput.-Hum. Interact. 7(2), 174–196 (2000)

    Google Scholar 

  112. Flach, J.M.: The ecology of human-machine systems I: introduction. Ecol. Psychol. 2(3), 191–205 (1990)

    Google Scholar 

  113. Kantowitz, B.H., Sorkin, R.D.: Allocation of functions. In: Salvendy, G. (ed.) Handbook of Human Factors, pp. 355–369. Wiley, New York (1987)

    Google Scholar 

  114. Kirlik, A., Miller, R.A., Jagacinski, R.J.: Supervisory control in a dynamic and uncertain environment: a process model of skilled human-environment interaction. IEEE Trans. Syst. Man Cybern. 23(4), 929–952 (1993)

    Google Scholar 

  115. Vicente, K.J., Rasmussen, J.: The Ecology of Human-Machine Systems II: Mediating ‘Direct Perception’ in Complex Work Domains. Risø National Laboratory and Technical University of Denmark (1990)

    Google Scholar 

  116. Hogarth, R.M., Lejarraga, T., Soyer, E.: The two settings of kind and wicked learning environments. Curr. Dir. Psychol. Sci. 24(5), 379–385 (2015). https://doi.org/10.1177/0963721415591878

    Article  Google Scholar 

  117. Tomsett, R., Preece, A., Braines, D., Cerutti, F., Chakraborty, S.: Rapid trust calibration through interpretable and uncertainty-aware AI. Patterns. 1(4), 100049 (2020). https://doi.org/10.1016/j.patter.2020.100049

    Article  Google Scholar 

  118. Lee, J.D., Wickens, C.D., Liu, Y., Boyle, L.N.: Designing for People: An Introduction to Human Factors Engineering. CreateSpace, Charleston (2017)

    Google Scholar 

  119. Woods, D.D., Patterson, E.S., Corban, J., Watts, J.: Bridging the gap between user-centered intentions and actual design practice. In: Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting, Santa Monica, vol. 2, pp. 967–971 (1996)

    Google Scholar 

  120. Bosma, H., Marmot, M.G., Hemingway, H., Nicholson, A.C., Brunner, E., Stansfeld, S.A.: Low job control and risk of coronary heart disease in Whitehall II (prospective cohort) study. Br. Med. J. 314(7080), 558–565 (1997)

    Google Scholar 

  121. Bosma, H., Peter, R., Siegrist, J., Marmot, M.G.: Two alternative job stress models and the risk of coronary heart disease. Am. J. Public Health. 88(1), 68–74 (1998)

    Google Scholar 

  122. Morgeson, F.P., Campion, M.A., Bruning, P.F.: Job and team design. In: Salvendy, G. (ed.) Handbook of Human Factors and Ergonomics, 4th edn, pp. 441–474. Wiley, New York (2012). https://doi.org/10.1002/9781118131350.ch15

    Chapter  Google Scholar 

  123. Hackman, J.R.R., Oldham, G.R.: Motivation through the design of work: test of a theory. Organ. Behav. Hum. Perform. 16(2), 250–279 (1976). https://doi.org/10.1016/0030-5073(76)90016-7

    Article  Google Scholar 

  124. Herzberg, F.I.: Work and the Nature of Man. World Press, Oxford, UK (1966)

    Google Scholar 

  125. Smith, M.J., Sainfort, P.C.: A balance theory of job design for stress reduction. Int. J. Ind. Ergon. 4(1), 67–79 (1989). https://doi.org/10.1016/0169-8141(89)90051-6

    Article  Google Scholar 

  126. Oldman, G.R., Hackman, J.R.: Not what it was and not what it will be: the future of job design research. J. Organ. Behav. 31, 463–479 (2010)

    Google Scholar 

  127. Klein, G.A., Woods, D.D., Bradshaw, J.M., Hoffman, R.R., Feltovich, P.J.: Ten challenges for making automation a ‘Team Player’ in joint human-agent activity. IEEE Intell. Syst. 19(6), 91–95 (2004)

    Google Scholar 

  128. Kaber, D.B.: Issues in human-automation interaction modeling: presumptive aspects of frameworks of types and levels of automation. J. Cogn. Eng. Decis. Mak. 12(1), 7–24 (2018)

    Google Scholar 

  129. Sharit, J.: Perspectives on computer aiding in cognitive work domains: toward predictions of effectiveness and use. Ergonomics. 46(1–3), 126–140 (2003)

    Google Scholar 

  130. Dearden, A., Harrison, M., Wright, P.: Allocation of function: scenarios, context and the economics of effort. Int. J. Hum.-Comput. Stud. 52(2), 289–318 (2000)

    Google Scholar 

  131. Dekker, S.W., Woods, D.D.: Maba-Maba or abracadabra? Progress on human-automation coordination. Cogn. Tech. Work. 4(4), 1–13 (2002)

    Google Scholar 

  132. Sheridan, T.B., Studies, H.: Function allocation: algorithm, alchemy or apostasy? Int. J. Hum.-Comput. Stud. 52(2), 203–216 (2000). https://doi.org/10.1006/ijhc.1999.0285

    Article  Google Scholar 

  133. Hollnagel, E., Bye, A.: Principles for modelling function allocation. Int. J. Hum.-Comput. Stud. 52(2), 253–265 (2000)

    Google Scholar 

  134. Kirlik, A.: Modeling strategic behavior in human-automation interaction: why an “aid” can (and should) go unused. Hum. Factors. 35(2), 221–242 (1993)

    Google Scholar 

  135. Anderson, J.R., Libiere, C.: Atomic Components of Thought. Lawrence Erlbaum, Hillsdale (1998)

    Google Scholar 

  136. Byrne, M.D., Kirlik, A.: Using computational cognitive modeling to diagnose possible sources of aviation error. Int. J. Aviat. Psychol. 15(2), 135–155 (2005)

    Google Scholar 

  137. Degani, A., Kirlik, A.: Modes in human-automation interaction: Initial observations about a modeling approach. In: IEEE Transactions on System, Man, and Cybernetics, Vancouver, vol. 4, p. to appear (1995)

    Google Scholar 

  138. Degani, A., Heymann, M.: Formal verification of human-automation interaction. Hum. Factors. 44(1), 28–43 (2002)

    MATH  Google Scholar 

  139. Seppelt, B.D., Lee, J.D.: Modeling driver response to imperfect vehicle control automation. Procedia Manuf. 3, 2621–2628 (2015)

    Google Scholar 

  140. Norman, D.A.: The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’. Philosophical Transactions of the Royal Society London, Series B, Biological Sciences. 327, 585–593 (1990). https://doi.org/10.1098/rstb.1990.0101. Great Britain, Human Factors in High Risk Situations, Human Factors in High Risk Situations

    Article  Google Scholar 

  141. Entin, E.B.E.E., Serfaty, D.: Optimizing aided target-recognition performance. In: Proceedings of the Human Factors and Ergonomics Society, Santa Monica, vol. 1, pp. 233–237 (1996)

    Google Scholar 

  142. Sklar, A.E., Sarter, N.B.: Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains. Hum. Factors. 41(4), 543–552 (1999)

    Google Scholar 

  143. Nikolic, M.I., Sarter, N.B.: Peripheral visual feedback: a powerful means of supporting effective attention allocation in event-driven, data-rich environments. Hum. Factors. 43(1), 30–38 (2001)

    Google Scholar 

  144. Seppelt, B.D., Lee, J.D.: Making adaptive cruise control (ACC) limits visible. Int. J. Hum.-Comput. Stud. 65(3), 192–205 (2007). https://doi.org/10.1016/j.ijhcs.2006.10.001

    Article  Google Scholar 

  145. Flach, J.M.: Ready, fire, aim: a ‘meaning-processing’ approach to display design. In: Gopher, D., Koriat, A. (eds.) Attention and Performance XVII: Cognitive Regulation of Performance: Interaction of Theory and Application, vol. 17, pp. 197–221. MIT Press, Cambridge, MA (1999)

    Google Scholar 

  146. Guerlain, S.A., Jamieson, G.A., Bullemer, P., Blair, R.: The MPC elucidator: a case study in the design for human- automation interaction. IEEE Trans. Syst. Man Cybern. Syst. Hum. 32(1), 25–40 (2002)

    Google Scholar 

  147. Briggs, P., Burford, B., Dracup, C.: Modeling self-confidence in users of a computer-based system showing unrepresentative design. Int. J. Hum.-Comput. Stud. 49(5), 717–742 (1998)

    Google Scholar 

  148. Fogg, B.J., Tseng, H.: The elements of computer credibility. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, 80–87 (1999)

    Google Scholar 

  149. Fogg, B.J., Marshall, J., Laraki, O., Ale, O., Varma, C., Fang, N., Jyoti, P., Rangnekar, A., Shon, J., Swani, R., Treinen, M.: What makes web sites credible? A report on a large quantitative study. In: CHI Conference on Human Factors in Computing Systems, Seattle, pp. 61–68 (2001)

    Google Scholar 

  150. Fogg, B.J., Marshall, J., Kameda, T., Solomon, J., Rangnekar, A., Boyd, J., Brown, B.: Web credibility research: a method for online experiments and early study results. In: CHI Conference on Human Factors in Computing Systems, pp. 293–294 (2001)

  151. Abbink, D.A., Mulder, M., Boer, E.R.: Haptic shared control: smoothly shifting control authority? Cogn. Tech. Work. 14(1), 19–28 (2011). https://doi.org/10.1007/s10111-011-0192-5

  152. Mars, F., Deroo, M., Hoc, J.M.: Analysis of human-machine cooperation when driving with different degrees of haptic shared control. IEEE Trans. Haptics. 7(3), 324–333 (2014). https://doi.org/10.1109/TOH.2013.2295095

  153. Riley, V.A.: A new language for pilot interfaces. Ergon. Des. 9(2), 21–27 (2001)

  154. Goodrich, M.A., Boer, E.R.: Model-based human-centered task automation: a case study in ACC system design. IEEE Trans. Syst. Man Cybern. Syst. Hum. 33(3), 325–336 (2003)

  155. Friston, K.: Hierarchical models in the brain. PLoS Comput. Biol. 4(11), e1000211 (2008). https://doi.org/10.1371/journal.pcbi.1000211

  156. Kurby, C.A., Zacks, J.M.: Segmentation in the perception and memory of events. Trends Cogn. Sci. 12(2), 72–79 (2008). https://doi.org/10.1016/j.tics.2007.11.004

  157. Simon, H.A.: Sciences of the Artificial. MIT Press, Cambridge, MA (1970)

  158. Miller, C.A., Parasuraman, R.: Designing for flexible interaction between humans and automation: delegation interfaces for supervisory control. Hum. Factors. 49(1), 57–75 (2007). https://doi.org/10.1518/001872007779598037

  159. Miller, C.A.: Definitions and dimensions of etiquette. In: Miller, C. (ed.) Etiquette for Human-Computer Work: Technical Report FS-02-02, pp. 1–7. American Association for Artificial Intelligence, Menlo Park (2002)

  160. Nass, C., Lee, K.M.: Does computer-synthesized speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction. J. Exp. Psychol. Appl. 7(3), 171–181 (2001). https://doi.org/10.1037//1076-898X.7.3.171

  161. Chiou, E.K., Lee, J.D.: Cooperation in human-agent systems to support resilience: a microworld experiment. Hum. Factors. 58(6), 846–863 (2016)

  162. de Visser, E., Peeters, M.M., Jung, M., Kohn, S., Shaw, T., Richard, P., Neerincx, M.: Towards a theory of longitudinal trust calibration in human–robot teams. Int. J. Soc. Robot. 12 (2020). https://doi.org/10.1007/s12369-019-00596-x

  163. Vicente, K.J.: Coherence- and correspondence-driven work domains: implications for systems design. Behav. Inform. Technol. 9, 493–502 (1990)

  164. Brooks, R.A., Maes, P., Mataric, M.J., More, G.: Lunar base construction robots. In: Proceedings of the 1990 International Workshop on Intelligent Robots and Systems, pp. 389–392 (1990)

  165. Johnson, P.J., Bay, J.S.: Distributed control of simulated autonomous mobile robot collectives in payload transportation. Auton. Robot. 2(1), 43–63 (1995)

  166. Beni, G., Wang, J.: Swarm intelligence in cellular robotic systems. In: Dario, P., Sandini, G., Aebischer, P. (eds.) Robots and Biological Systems: Towards a New Bionics. Springer, Berlin (1993)

  167. Flynn, A.M.: A robot being. In: Dario, P., Sandini, G., Aebischer, P. (eds.) Robots and Biological Systems: Towards a New Bionics. Springer, Berlin (1993)

  168. Fukuda, T., Funato, D., Sekiyama, K., Arai, F.: Evaluation on flexibility of swarm intelligent system. In: Proceedings of the 1998 IEEE International Conference on Robotics and Automation, pp. 3210–3215 (1998)

  169. Min, T.W., Yin, H.K.: A decentralized approach for cooperative sweeping by multiple mobile robots. In: Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems (1998)

  170. Sugihara, K., Suzuki, I.: Distributed motion coordination of multiple mobile robots. In: 5th IEEE International Symposium on Intelligent Control, pp. 138–143 (1990)

  171. Patterson, E.S.: A simulation study of computer-supported inferential analysis under data overload. In: Proceedings of the Human Factors and Ergonomics Society 43rd Annual Meeting, vol. 1, pp. 363–368 (1999)

  172. Pirolli, P., Card, S.K.: Information foraging. Psychol. Rev. 106(4), 643–675 (1999)

  173. Murray, J., Liu, Y.: Hortatory operations in highway traffic management. IEEE Trans. Syst. Man Cybern. Syst. Hum. 27(3), 340–350 (1997)

  174. Stickland, T.R., Britton, N.F., Franks, N.R.: Complex trails and simple algorithms in ant foraging. Proc. R. Soc. Lond. Ser. B Biol. Sci. 260(1357), 53–58 (1995)

  175. Resnick, M.: Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds. MIT Press, Cambridge, MA (1994)

  176. Lee, J.D.: Emerging challenges in cognitive ergonomics: managing swarms of self-organizing agent-based automation. Theor. Issues Ergon. Sci. 2(3), 238–250 (2001)

  177. Schelling, T.C.: Micromotives and Macrobehavior. Norton, New York (1978)

  178. Dyer, J.H., Singh, H.: The relational view: cooperative strategy and sources of interorganizational competitive advantage. Acad. Manag. Rev. 23(4), 660–679 (1998)

  179. Sterman, J.D.: Modeling managerial behavior: misperceptions of feedback in a dynamic decision making experiment. Manag. Sci. 35(3), 321–339 (1989). http://mansci.journal.informs.org/content/35/3/321.short

  180. Lee, H.L., Whang, S.J.: Information sharing in a supply chain. Int. J. Technol. Manag. 20(3–4), 373–387 (2000)

  181. Zhao, X.D., Xie, J.X.: Forecasting errors and the value of information sharing in a supply chain. Int. J. Prod. Res. 40(2), 311–335 (2002)

  182. Akkermans, H., van Helden, K.: Vicious and virtuous cycles in ERP implementation: a case study of interrelations between critical success factors. Eur. J. Inf. Syst. 11(1), 35–46 (2002)

  183. Handfield, R.B., Bechtel, C.: The role of trust and relationship structure in improving supply chain responsiveness. Ind. Mark. Manag. 31(4), 367–382 (2002)

  184. Busemeyer, J.R., Diederich, A.: Survey of decision field theory. Math. Soc. Sci. 43, 345–370 (2002)

  185. Lee, J.D., Gao, J.: Extending the decision field theory to model operators’ reliance on automation in supervisory control situations. IEEE Trans. Syst. Man Cybern. 36(5), 943–959 (2006). https://doi.org/10.1109/TSMCA.2005.855783

  186. Busemeyer, J.R., Townsend, J.T.: Decision field theory: a dynamic-cognitive approach to decision making in an uncertain environment. Psychol. Rev. 100(3), 432–459 (1993)

  187. Zhou, T.S., Lu, J.H., Chen, L.N., Jing, Z.J., Tang, Y.: On the optimal solutions for power flow equations. Int. J. Electr. Power Energy Syst. 25(7), 533–541 (2003)

  188. Mulkerin, T.: Free Flight is in the future – large-scale controller pilot data link communications emulation testbed. IEEE Aerosp. Electron. Syst. Mag. 18(9), 23–27 (2003)

  189. Olson, W.A., Sarter, N.B.: Management by consent in human-machine systems: when and why it breaks down. Hum. Factors. 43(2), 255–266 (2001)

  190. Chiou, E.K., Lee, J.D.: Trusting automation: designing for responsivity and resilience. Hum. Factors (2021). https://doi.org/10.1177/00187208211009995

  191. Duffy, V.G., Landry, S.J., Lee, J.D., Stanton, N. (eds.): Human-Automation Interaction: Transportation. Automation, Collaboration, & E-Services, vol. 11. Springer, Cham (2023)


Author information


Correspondence to John D. Lee or Bobbie D. Seppelt.


Copyright information

© 2023 Springer Nature Switzerland AG


Cite this chapter

Lee, J.D., Seppelt, B.D. (2023). Design for Human-Automation and Human-Autonomous Systems. In: Nof, S.Y. (ed.) Springer Handbook of Automation. Springer Handbooks. Springer, Cham. https://doi.org/10.1007/978-3-030-96729-1_19
