
How does public trust matter in governing high-risk AI systems?
Written By:
Assistant Professor Araz Taeihagh
Lee Kuan Yew School of Public Policy
National University of Singapore (NUS)
A key milestone in AI policymaking was the European Parliament’s approval of its negotiating position on the EU AI Act in June 2023. The proposal aimed to foster an ecosystem of trust in AI while managing the risks arising from the operation of AI systems. It followed a proportionate, risk-based approach that classifies AI systems by risk level (minimal risk, limited risk, high risk, and unacceptable risk). Within this classification, high-risk AI systems form a distinct class because of the significant risks they pose to individuals’ health, safety, and fundamental rights. Governing these AI systems is a policy challenge for governments, given the potential for unexpected and unintended consequences once they are adopted (Taeihagh, 2021). Examining the current proposals for their safe functioning, and the public’s trust in that functioning, is therefore key to their governance.
What are high-risk AI systems?
Title III, Chapter 1 of the proposal sets out two key categories of high-risk AI systems:
- AI systems that will be used as safety components of products and are subject to third-party conformity assessment.
- Stand-alone AI systems with specific implications for fundamental rights. These are listed as AI applications for specific purposes in Annex III of the proposal.
Annex III is particularly noteworthy: it currently lists eight specific areas of application, and the proposal allows this list to be expanded further. The eight high-risk categories comprise AI systems used in the following areas:
- biometric identification and categorisation of natural persons;
- management and operation of critical infrastructure, such as road traffic and the supply of gas or water;
- education and vocational training, such as determining individuals’ access to institutions or assessing students;
- employment and worker management, such as recruitment or decisions on promotion and termination;
- access to and enjoyment of essential private and public services, such as evaluating creditworthiness or prioritising the dispatch of emergency response services;
- law enforcement, including risk assessments of persons, polygraphs or similar tools to detect emotional states, profiling of persons, or evaluating the reliability of evidence;
- migration, asylum, and border control management, including assessing security or health risks, detecting non-authentic documents, and examining applications for asylum, visas, or residence permits; and
- the administration of justice, such as assisting in applying the law to a concrete set of facts.
The key requirements for high-risk AI systems outlined in the proposal are technical robustness and resilience against risks, design that enables human oversight while the systems are in operation, record-keeping and transparency, and cybersecurity. High-risk AI systems must also undergo a conformity assessment before they become operational.
Why is trust important for policymaking for these systems?
Developing and increasing trust in artificial intelligence systems is a crucial factor and a necessary condition for their use (He et al., 2020). Measures have been proposed to gauge trust in technologies (McKnight et al., 2011), and several conceptualisations of trust have been advanced specifically for AI (Bedué & Fritzsche, 2022; Gille et al., 2020). Examining and gauging public trust in new and emerging technologies is significant because the public is directly affected by them. Research also shows that public perceptions matter for the policymaking of new technologies, as they help reveal the public’s preferences and demands and support involving the public in decision-making (Hisschemöller & Midden, 1999).
How can trust in high-risk AI be gauged?
Trust in high-risk AI is dynamic. The initial trust people place in high-risk AI evolves through repeated interactions with the technology and the actors surrounding it. However, a good “first impression” is crucial for people to accept high-risk AI as trustworthy. Scholars have developed various methods to understand how people form initial trust, and three factors appear highly relevant. First, at the individual level, people differ in their general tendency to trust or distrust new technologies. This can vary with personal characteristics such as gender or age: some people are naturally more inclined than others to ride in a driverless car or use an unmanned kiosk at the airport, and younger people, for example, tend to be more willing to take risks and try out new technologies than older people. Second, the functions of the high-risk AI system itself matter. Intuitively, technologies that people perceive as more reliable, more useful, and less harmful are easier to trust; people might trust a robot that delivers food to their table at a local restaurant more than one that performs their surgery at a local hospital. Third, people develop higher initial trust if they believe that the institutions which develop and govern high-risk AI are trustworthy. This places responsibility on tech companies, governments, and regulatory institutions to engage with the public and to design a robust policy ecosystem that reassures people that rules are in place if something goes awry.
Measuring the level of public trust in high-risk AI systems therefore requires gauging people’s natural disposition to trust new technologies, understanding the general faith they have in high-risk AI being reliable and useful, and evaluating the degree of trust they place in the actors and rules that design, administer, and govern high-risk AI.
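As a purely illustrative sketch of how these three dimensions might be operationalised in a survey instrument, the short Python snippet below aggregates hypothetical Likert-scale items into dispositional, technology-related, and institutional trust scores and a simple composite index. The item names, groupings, and equal weighting are assumptions for illustration only; they are not drawn from the EU proposal or the studies cited here.

```python
from statistics import mean

# Hypothetical 1-5 Likert items grouped by the three trust dimensions
# discussed above (dispositional, technology-related, institutional).
TRUST_DIMENSIONS = {
    "disposition": ["comfort_with_new_tech", "willingness_to_try_early"],
    "technology": ["perceived_reliability", "perceived_usefulness", "perceived_safety"],
    "institutional": ["trust_in_developers", "trust_in_regulators", "belief_rules_exist"],
}


def trust_scores(responses: dict[str, int]) -> dict[str, float]:
    """Average a respondent's Likert answers within each trust dimension."""
    return {
        dim: mean(responses[item] for item in items)
        for dim, items in TRUST_DIMENSIONS.items()
    }


def composite_index(scores: dict[str, float]) -> float:
    """Unweighted mean of the three dimension scores (an assumption;
    a real instrument would validate items and weights empirically)."""
    return mean(scores.values())


# Example respondent (hypothetical answers)
answers = {
    "comfort_with_new_tech": 4, "willingness_to_try_early": 3,
    "perceived_reliability": 2, "perceived_usefulness": 4, "perceived_safety": 2,
    "trust_in_developers": 3, "trust_in_regulators": 4, "belief_rules_exist": 3,
}
scores = trust_scores(answers)
print(scores, round(composite_index(scores), 2))
```

In practice, such an index would require validated survey items and empirically derived weights, and it would be complemented by questions on which actors respondents trust and which governance measures they prefer.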
How can an understanding of trust inform the governance of high-risk AI systems?
Given the wide application of high-risk AI systems, examining trust in these systems and in the associated actors through public surveys is a natural starting point. The ecosystem in which high-risk AI systems operate involves several actors, such as corporations, governments, and civil society. These actors undertake various measures to improve trust in AI systems, such as red-team exercises (in which members of an organisation attack its systems to expose vulnerabilities), audits of AI systems, and the sharing of data on accidents and incidents. Large-sample surveys can collect information on public preferences regarding these actors and the measures they undertake to improve the trustworthiness of high-risk AI systems.
Which actors does the public trust to develop, operate, and oversee high-risk AI systems? What policy instruments would be useful for building and sustaining trust in these systems? Empirically investigating public trust in high-risk AI across different jurisdictions will provide insights into the factors that matter to the public in adopting such systems. In addition, currently proposed regulatory practices can be presented to the public, which would not only raise awareness but also elicit the public’s preferences.
REFERENCES
Bedué, P., & Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35(2), 530–549.
Gille, F., Jobin, A., & Ienca, M. (2020). What we talk about when we talk about trust: Theory of trust for AI in healthcare. Intelligence-Based Medicine, 1, 100001.
He, H., Gray, J., Cangelosi, A., Meng, Q., McGinnity, T. M., & Mehnen, J. (2020). The challenges and opportunities of artificial intelligence for trustworthy robots and autonomous systems. 2020 3rd International Conference on Intelligent Robotic and Control Engineering (IRCE), 68–74.
Hisschemöller, M., & Midden, C. J. (1999). Improving the usability of research on the public perception of science and technology for policy-making. Public Understanding of Science, 8, 17–33.
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 1–25.
Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157.