AI Ethics Standards and Guidelines
With Dagmar Monett
Trust in automation, defined as “the attitude that an agent will achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” [1], is a core psychological construct in the interaction between humans and machines and has gained additional momentum due to recent progress in artificial intelligence (AI). Providing trustworthy AI is a dedicated goal of the European Commission [2], and researchers intensively discuss the topic in fields like human factors engineering [1], e-commerce [4], human-robot interaction [5], and others. Research has shown that people develop trust relationships with technical systems, which influences when and how humans rely on machines and AI decisions. Clearly, building trust is vital for technology adoption: in short, no trust, no use. However, user trust must also not exceed certain levels, since overtrust (relying on a system when it is not appropriate) can have negative, even fatal, consequences, as shown in multiple accidents with autonomous vehicle technology [6]. Consequently, measures must be developed to mitigate overtrust, ranging from technical solutions (e.g., displays presenting machine confidence or transparent explanations of machine decisions [6]) to educational initiatives for AI literacy. The overall goal of such methods is to empower users to prevent undesirable relationships with AI technology.
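To make the first kind of measure concrete, the following minimal Python sketch shows how a system could present its confidence alongside a decision and explicitly flag low-confidence cases. The function name, example labels, and the 0.75 threshold are purely illustrative assumptions, not taken from the cited works.

    # Minimal sketch of a confidence display intended to help calibrate user trust.
    # The names, labels, and threshold below are illustrative assumptions only.
    def present_decision(label: str, confidence: float, threshold: float = 0.75) -> str:
        """Return a user-facing message pairing a machine decision with its confidence."""
        message = f"Prediction: {label} (confidence: {confidence:.0%})"
        if confidence < threshold:
            # Flag low confidence so users know when not to rely on the system.
            message += " -- low confidence, please verify before acting."
        return message

    # Example usage:
    print(present_decision("lane clear", 0.97))
    print(present_decision("obstacle ahead", 0.58))

Surfacing uncertainty in this way is one simple design lever for keeping user reliance aligned with actual system capability.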
Too much trust can be fatal: Walter was driving a Tesla Model X P100D on Autopilot when his car hit a barrier and was then struck by two other vehicles. The National Transportation Safety Board analyzed the case: alongside various environmental and technical factors, the driver’s over-reliance on the Autopilot was one cause of the accident. Before the crash, the 38-year-old engineer had been engrossed in a video game, trusting the Autopilot to bring him safely to his destination, which he, unfortunately, never reached. Still, overtrust was observed long before complex AI decision systems pervaded our lives. Another example of this problem is “death by GPS”, where drivers following technically incorrect navigation hints (e.g., through a river) got lost or died in accidents [11].
Since the level of trust influences how users interact with technology, overtrust (and over-reliance as the resulting behavior) leads to a faulty human-automation relationship.
Empirical research on overtrust, along with theory building, is currently gaining momentum. For example, the topic was recently addressed in workshops at leading robotics [7] and automotive [8] conferences, which revealed various elements of relevant future research agendas. Potential solutions include design guidelines and technical concepts but, importantly, also end-user agency, literacy, and educational initiatives [9].
Literacy and educational initiatives should rest on solid, evidence-based research from academia and then flow into practice. However, the wheels of university research turn very slowly, and rigid university systems offer few incentives to prepare meaningful results and responsible recommendations for practice in bite-sized pieces. On the other side, Big Tech companies, just like numerous startups, act, innovate, and network at the speed of light, frequently disregarding collateral damage as something that can be corrected later.
Enabling end-user agency to calibrate trust in AI could be one such literacy and educational initiative: it would give people the opportunity to develop skills that enhance their freedom and let them use technology in the way they truly desire [10].
In this track, we will discuss this topic with a wider audience.
References
[1] Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.
[2] European Commission (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
[4] Salam, A. F., Iyer, L., Palvia, P., & Singh, R. (2005). Trust in e-commerce. Communications of the ACM, 48(2), 72-77.
[5] Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527.
[6] Holthausen, B. E., Wintersberger, P., Walker, B. N., & Riener, A. (2020, September). Situational Trust Scale for Automated Driving (STS-AD): Development and initial validation. In 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 40-47).
[6] Wintersberger, P. (2020). Automated Driving: Towards Trustworthy and Safe Human-Machine Cooperation (Doctoral dissertation, Universität Linz).
[7] Aroyo, A., de Bruyne, J., Dheu, O., Fosch-Villaronga, E., Gudkov, A., Hoch, H., Jones, S., Lutz, C., Sætra, H., Solberg, M. & Tamò-Larrieux, A. (2021). Overtrusting robots: Setting a research agenda to mitigate overtrust in automation. Paladyn, Journal of Behavioral Robotics, 12(1), 423-436. https://doi.org/10.1515/pjbr-2021-0029
[8] Holthausen, B. E., Wintersberger, P., Becerra, Z., Mirnig, A. G., Kunze, A., & Walker, B. N. (2019, September). Third workshop on trust in automation: how does trust influence interaction. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings (pp. 13-18).
[9] Ekman, F., Johansson, M., & Sochor, J. (2017). Creating appropriate trust in automated vehicle systems: A framework for HMI design. IEEE Transactions on Human-Machine Systems, 48(1), 95-101.
[10] Tschopp, M., & Sundar, S. S. (2021). Enabling End-User Agency and Trust in Artificial Intelligence Systems. IEEE SA Beyond Standards. Retrieved December 2021, from https://beyondstandards.ieee.org/enabling-end-user-trust-in-artificial-intelligence-in-the-algorithmic-age/
Marcel Salathé, Lenka Zdeborová, Carmela Troncoso, Chiara Enderle, Patrick Barbey, Thomas Wolf, Gunther Jansen, Laure Willemin, Simon Hefti, Arthur Gassner
10:00-12:00, March 28, Auditorium A