The proliferation of artificial intelligence (AI) systems has led to disruptive changes in daily life, governments’ public service delivery, and decision making. However, the operation of AI systems may be compromised by malfunctions, malicious attacks, and software or hardware failures. AI applications also pose risks to safety, privacy, liability, and employment, as well as ethical concerns about loss of autonomy, deception, and social justice. Given these risks and concerns, trust in AI is a key determinant of its adoption and diffusion.
Effective policy design is vital to ensure that AI is beneficial, stable, safe, and well regulated. Trust in a new technology depends on various factors, including human and environmental characteristics and the nature of the technology itself. While building trust in AI has been a significant goal of government initiatives, expert opinion and public perception of trust in AI need to be better integrated. Formulating legislation and regulations requires insights into the risks of AI, public acceptance, and the difficulties of regulating AI. From a policy design perspective, this project aims to contribute to the literature on AI governance through mixed-method research in four phases focused on trust in AI.
We will conceptualise trust in AI and identify the factors that shape public trust in AI, providing a foundation for examining dimensions such as trust in the technology and trust in government to manage its risks. To apply the theoretical framework, we will collect primary data through a multi-city survey in Singapore, Seoul, and Tokyo. To complement public perspectives, we will also survey experts to identify the essential factors and policy measures for building and maintaining public trust in AI.
Policy design involves choosing appropriate policy instruments; to develop a more nuanced view of these, expert surveys will provide insights into regulations for the safe operation of AI systems. We will conduct an online experiment with respondents from Singapore to examine the institutions and regulatory arrangements that would enhance their trust in AI. The experiment will focus on high-risk AI applications in critical areas such as surgery, public service provision, law enforcement, and justice. The final phase will consolidate the qualitative and quantitative findings from the previous phases to suggest policy instruments for building and enhancing trust in AI, particularly in Singapore.