
Trustworthiness of AI for autonomous vehicles: What the public would look for
Written By:
Professor Shirley S. Ho
Associate Vice President
Humanities, Social Sciences & Research Communication
Nanyang Technological University (NTU)
Dr. Goh Tong Jee
Research Fellow
Wee Kim Wee School of Communication and Information
Nanyang Technological University (NTU)
Media features on the enhanced capabilities of artificial intelligence (AI) for autonomous vehicles (AVs) are interspersed with reports on crashes involving the mobility technology. Such a turn of events sets us pondering: What makes AI for AVs trustworthy to the public?
Singaporeans and technology developers shared their views in a series of focus group discussions for a study on the trustworthiness of AI for AVs conducted by our research team at Nanyang Technological University.
The study aims to develop dimensions of trustworthiness of AI and of the stakeholders involved in introducing AI for AVs. The role that trustworthy AI plays in raising acceptance of AVs is the impetus for this study. The dimensions are based on the responses of 56 members of the public and nine Singapore-based academics and technology developers from the industry.
Trustworthiness of AI for AVs
From the perspective of the public, trustworthy AI for AVs should display functionality, reliability, helpfulness, controllability, and explainability. The public expects functional and reliable AI to complete tasks with consistency and accuracy. With safety concerns looming large, AI that possesses emergency-response functionality (e.g., dialing for medical assistance) would add to its trustworthiness. Relatedly, participants valued the helpfulness of AI in providing follow-up assistance in the aftermath of emergencies (e.g., generating accident reports), which they believe would be useful for settling liability issues. The human desire to be kept in the loop of AI systems’ decision-making in the event of imminent accidents translated into an emphasis on the controllability of AI. In relation to the “black box” problem of AI, characterized by a lack of transparency in decision-making, explainability would raise the trustworthiness of AI.
The expert participants thought that prospective riders of AVs would prioritize controllability, reliability, and explainability. They had more to add on functionality and human-likeness.
With the current functionalities of AI for AVs in mind (e.g., cruise control), the experts opined that an added functionality of processing information on the interiors and exteriors of vehicles to enhance personal safety would further raise the trustworthiness of AI among the public.
In relation to established standards of safe driving for conventional vehicles (e.g., braking gradually), one technology developer commented that the public would deem AI that displays human-likeness to be trustworthy. The contexts cited for this quality were AI braking an AV hard and changing lanes abruptly. The technology developer stressed that although AI can perform such maneuvers with ease, deviations from safe driving behaviors would affect the trustworthiness of AI.
Trustworthiness of Stakeholders
The study also placed the spotlight on trustworthiness of policymakers, technology developers, and media organizations—stakeholders involved in introducing AI for AVs.
The stakeholders’ ability in their expert domains, benevolence toward target audiences, and integrity in their operations were key considerations of trustworthiness among the public. Public communication and collaboration are additional dimensions of the trustworthiness of policymakers and technology developers. To the public, public communication entails reaching out to prospective users of AVs with updates on policymaking and research and development, as well as gathering feedback on product usability. Additionally, stakeholders who collaborate with one another to strengthen their operations would gain trustworthiness.
A noteworthy dimension of media organizations’ trustworthiness is affiliation. The basis for this dimension is the belief that forging professional ties with key stakeholders would enhance media organizations’ rigor in communication. If media organizations gain trustworthiness by leveraging their institutional and cognitive authority (i.e., knowledge or expertise that can influence opinions and behaviors), the resulting social milieu could facilitate dynamic exchanges among the stakeholders and the public on the topic.
The technology developers in the focus group discussions emphasized ability, integrity, and acclaim as important qualities that would influence the trustworthiness of the three stakeholders among the public.
Pertaining to AI technology, policymakers’ trustworthiness would depend on their ability to understand how the technology operates and on the integrity of their regulatory framework for implementing AI in AVs. From the technology developers’ perspective, a robust regulatory framework sets boundaries for safe and responsible implementations without curtailing innovations.
Technology developers’ trustworthiness rests on their ability to adapt AI technologies for the AV context as well as on their acclaim. The stiff competition that technology developers face underpins the point on acclaim: as some firms arguably enjoy greater popularity thanks to their strong track records, acclaim would give them an edge in gaining trustworthiness.
The public shared the same perspective on acclaim and stakeholders’ trustworthiness.
As conduits of information on AI for AVs, media organizations would gain trustworthiness from their ability to create varied content and from the integrity of their operations. According to the experts, media organizations that produce creative content to raise awareness of what AI can do in this context would be deemed trustworthy. Meanwhile, they should maintain oversight of the production of objective, evidence-based content.
Evidently, developing AVs involves both trustworthy AI and trustworthy stakeholders. As the quest for fully autonomous vehicles continues, public expectations of trustworthiness cannot be ignored.