Governance And Policy Design Lessons For Building Trust In AI: Perspectives From Singapore And Asia

Project Description

The proliferation of artificial intelligence (AI) systems has led to disruptive changes impacting daily life, governments’ public service delivery, and decision making. However, the operation of AI systems may be compromised by malfunctions, malicious attacks, and software or hardware failures. AI applications also pose risks to safety, privacy, liability, and employment, and raise ethical concerns regarding loss of autonomy, deception, and social justice. Given these risks and ethical concerns, trust in AI is a key determinant of its adoption and diffusion.

Effective policy design is vital to ensure that AI is beneficial, stable, safe, and well-regulated. Trust in a new technology depends on various factors, such as human and environmental characteristics and the nature of the technology itself. While trust-building in AI has been a significant goal of government initiatives, expert opinion and public perception of trust in AI need to be better integrated. Formulating legislation and regulations requires insights into the risks of AI, public acceptance, and the difficulties of regulating AI. From a policy design perspective, this project aims to contribute to the literature on the governance of AI through mixed-methods research in four phases focused on trust in AI.

We will conceptualise trust in AI and identify factors for gauging public trust in AI, which will serve as a foundation for examining factors such as trust in the technology and trust in the government to manage its risks. To apply the theoretical framework, we will collect primary data through a multi-city survey in Singapore, Seoul, and Tokyo. To supplement public perspectives, we will also approach experts to identify essential factors and policy measures for ensuring and maintaining public trust in AI.

Policy design involves choosing appropriate policy instruments, and to develop more nuanced perspectives on these, expert surveys will provide insights into regulations for the safe operation of AI systems. We will conduct an online experiment with respondents from Singapore to examine the institutions and regulatory arrangements that would enhance their trust in AI. The experiment will focus on high-risk AI applications in critical areas such as surgery, public service provision, law enforcement, and justice. The final phase will consolidate the qualitative and quantitative findings from the previous phases to suggest policy instruments for building and enhancing trust in AI, particularly in Singapore.

Research Technical Area

  • Governance of AI and Autonomous Systems
  • Policy Design
  • Technology Policy
  • Trust in AI

Benefits to Society

This study will contribute to policy design for the governance of AI. Its unique interdisciplinary approach, combining views from the public and experts with a study of existing government measures, will fill a knowledge gap in the governance of AI. Since current and future AI applications cause disruptive changes in the relevant sectors, this research will contribute to foreseeing and managing those changes. Singapore has been a key testbed for smart city solutions in the region, where AI plays a central role. The analysis of factors affecting trust in the use of AI and the population-level survey will generate significant policy lessons for the efficient adoption of AI through:

  • a holistic approach to identifying factors impacting trust in AI;
  • exploring public perceptions, which will shed light on societal concerns and the measures governments need to take to build or enhance trust in AI applications;
  • assembling expert views on trust-related factors for accelerating the adoption of AI and sustaining trust in its use.

The research will make theoretical contributions to the academic literature by proposing a governance-based framework for studying facets of trust in AI. The policy design approach will provide recommendations on the tools governments can use to make AI more trustworthy. The project will also make practical contributions by supplementing the AI Readiness Index (AIRI) proposed by AI Singapore. The AIRI comprises four pillars: organisational readiness, business value readiness, data readiness, and infrastructure readiness. Since our research focuses on the governance of AI, the findings will support adding trust, especially trust in the government, as a dimension for studying organisations’ readiness to adopt AI. Further practical contributions include measures to gauge trust in the high-risk AI systems organisations use and recommendations for policies to enhance that trust. The focus on high-risk AI systems such as safety components, surgical robots, and AI applications for administering justice will be a unique contribution to the literature, deepening understanding of user intentions and expectations of AI systems. The findings will benefit multiple stakeholders: developers and designers of AI, who can introduce changes based on user demands, and regulators and policymakers, who can learn which tools to employ to improve trust in AI systems.

Project’s Publications

Team’s Principal Investigator

Araz TAEIHAGH
Lee Kuan Yew School of Public Policy
National University of Singapore

Araz Taeihagh is Head of the Policy Systems Group and Chair of the PhD Programme at the Lee Kuan Yew School of Public Policy, a Principal Investigator at the Centre for Trusted Internet and Community and at NUS Cities at the National University of Singapore, and a Visiting Associate Professor at Erasmus University Rotterdam. Taeihagh is a member of the editorial boards of Technological Forecasting and Social Change, Transport Reviews, and Policy and Society, among others. His research focuses on comparative public policy, policy design, and the governance of disruptive technologies. Taeihagh earned his D.Phil. at the University of Oxford, researching novel approaches to the development of policies for socio-technical systems.

In 2021 and 2022, Taeihagh was ranked in the top 2% of scientists worldwide for citation impact in 2020 and 2021 in the fields of Energy and Political Science. He won the 2020 best paper of the year award from Transport Reviews for his work on governing autonomous vehicles. On the 50th anniversary of Earth Day in April 2020, Elsevier selected his study of four decades of the evolving policy mix for sustainable energy transition in China, published in Applied Energy, as one of 50 papers representing the foundations and future of energy research. Taeihagh received the Research Excellence Award from LKYSPP in 2019, 2020, and 2021, and is the recipient of the LKYSPP Writing Fellowship 2022 and the NUS HSS Faculty Research Fellowship 2023.

The Team

Collaborators

Martin DE JONG, Erasmus University
Research Focus: Cross-national comparison, inclusive technology, cross-cultural management and governance, China

Gregory TRENCHER, Kyoto University
Research Focus: Governance of innovation and socio-technical transitions, smart cities, and Japan