Trust Experiments – Measuring Trust in AI as Actions, What Raises and Reduces Trust, and What Innovators and Organisations Can Do to Design Future Trusted AI
Trust in AI has been a growing topic since AI-enhanced technologies became common across a variety of domains. Trust-based human-AI interactions are increasingly common in public services (e.g., chatbots), healthcare, marketplaces (e.g., e-commerce), education, and financial services. However, little is known about whether we truly trust AI, or what can be done for us to trust AI, at least in the Singapore context. This interdisciplinary project therefore measures public trust in AI technologies and in the organisations producing them. We introduce trust experiments – based on experimental economics and organisational studies – that measure trust as actions rather than perceptions, in order to build evidence-based frameworks that researchers, policymakers, and company leaders can use to study, design, and implement future trusted AI. The experiments will enable us to understand the human-AI interactions, individual traits, and organisational factors that raise or reduce the public trust placed in AI technologies and in the organisations developing and providing AI solutions. These include identifying the individual and organisational determinants that promote trust and trustworthy interactions with AI. Overall, our study aims to advance our understanding of public trust in AI by moving beyond conceptualising “trust as a perception” to “trust as concrete actions and decisions” that individuals take when faced with AI.
Research Technical Area
Game theory in experimental economics
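To illustrate what "trust as actions" means in experimental economics, the sketch below implements the payoff structure of the canonical investment ("trust") game, in which the amount a trustor chooses to send is itself the behavioural measure of trust. The function name, parameters, and values here are illustrative assumptions, not the project's actual experimental design.

```python
# Minimal sketch of the canonical investment ("trust") game from
# experimental economics: trust is measured as an action (the amount
# the trustor sends), not as a survey rating.
# The endowment and multiplier are illustrative defaults only.

def trust_game_payoffs(sent, returned_fraction, endowment=10, multiplier=3):
    """Compute payoffs for one round of the trust game.

    The trustor starts with `endowment` and sends `sent` units to the
    trustee (which could be a human or an AI agent). The sent amount is
    multiplied by `multiplier`; the trustee then returns a
    `returned_fraction` of what they received.
    """
    if not 0 <= sent <= endowment:
        raise ValueError("sent must be between 0 and the endowment")
    if not 0.0 <= returned_fraction <= 1.0:
        raise ValueError("returned_fraction must be in [0, 1]")
    received = sent * multiplier
    returned = received * returned_fraction
    trustor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    # `sent` operationalises the trustor's trust; `returned` measures
    # the trustee's trustworthiness (reciprocity).
    return trustor_payoff, trustee_payoff

# Example: the trustor sends 5 of 10; the trustee returns half of the
# tripled amount (15), so each side ends the round with a payoff.
print(trust_game_payoffs(5, 0.5))  # -> (12.5, 7.5)
```

In a human-AI variant, the trustee's return rule would be played by an AI system, so the amount participants send to it gives a concrete, incentive-compatible measure of their trust in that system.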
Benefits to the society
Our research advances the scholarly body of work by integrating multiple disciplines across individual and organisational scales to understand public trust in AI. For individuals, the evidence produced through our experiments will give greater assurance that the AI they adopt and use is fair and trusted. AI innovators and organisations can use our findings to develop trusted AI; to build the reputation and communications needed for others to trust them as AI providers; and to adapt our experiments and methods to test, pilot, evaluate, iterate, and scale AI for trustworthiness. Governments and policymakers can use the findings to systematically evaluate the efficacy of existing standards, frameworks, and policies, and to adapt our experiments and methods to test, pilot, evaluate, iterate, and scale future standards, frameworks, and policies.
These combined benefits will enable all of the abovementioned stakeholders to reap the rewards of the digital economy and society sooner rather than later.
Team’s Principal Investigator
POON King Wang
Singapore University of Technology and Design
POON King Wang is the Director of the Lee Kuan Yew Centre for Innovative Cities at SUTD. He is concurrently SUTD’s Senior Director of Strategic Planning.
He is on the World Economic Forum’s Expert Network on Cities and Urbanisation, Konrad Adenauer Stiftung’s Strong Cities 2030 Network, and Global Partnership on AI’s Future of Work working group.
His teams’ research is recognised in Singapore’s National AI Strategy for building a trusted and progressive AI environment. With IMDA/PDPC, they launched Singapore’s first human-centred, industry-agnostic guide to job redesign, A Guide to Job Redesign in the Age of AI.
He recently received the Public Administration Medal (Silver).
Co-Principal Investigators
Dr Dinithi Nilanga JAYASEKARA, Singapore University of Technology and Design
Research Focus: Applied econometrics, Economic growth and long-run development, Behavioural economics
Dr Assel MUSSAGULOVA, Singapore University of Technology and Design
Research Focus: Organisational behaviour, Human resource management
Thijs WILLEMS, Singapore University of Technology and Design
Research Focus: Organisational ethnography
Prof Tony Quek, Singapore University of Technology and Design
Research Focus: Wireless communications and networks, network intelligence, 6G, URLLC, security, and IoT