AISG AI Governance Research Symposium 2023


The inaugural AISG AI Governance Research Symposium was held on Monday, 5 June 2023, at Capella Singapore, as part of Asia Tech x Singapore (ATxSG).

Panel 1, moderated by AISG’s Simon Chesterman, focused on managing the human-machine dynamic. Galit Shmueli from National Tsing Hua University, Araz Taeihagh from the Lee Kuan Yew School of Public Policy, and Yi Zeng from the Chinese Academy of Sciences discussed: (1) what circumstances or factors will encourage the responsible utilisation of AI; and (2) what guardrails or rules (targeting humans as well as machines) will discourage irresponsible uses or applications of AI.

Panel 2, moderated by Jungpil Hahn, focused on AI and healthcare. Ritu Agarwal from Johns Hopkins Carey Business School, Ngiam Kee Yuan from National University Hospital, Julian Savulescu from National University of Singapore, and Joseph Sung from Nanyang Technological University discussed: (1) how we can manage the ethical and social implications of AI in healthcare; and (2) what potential issues arise beyond data protection and consent, and what solutions are available.

1. The human-machine dynamic

1.1      Understanding the dynamic

Galit Shmueli introduced three aspects of research into the human-machine dynamic: (1) behaviour modification techniques (nudges) that are being used on platforms to capture users’ attention and affect their behaviours on and outside of the platform, (2) the advertising model that is pervasive among most of the big platforms, and (3) the interactive and sequential adaptive algorithms (e.g., reinforcement learning) that are now being deployed on many platforms.

Prof Shmueli proposed that these three elements have created, and will continue to create, challenges for individuals and for society as a whole, including social and political tensions such as polarisation exacerbated by algorithms. She suggested the use of “sandboxes” to test for future problems, analogous to clinical trials for pharmaceuticals, prior to wide-scale deployment of new AI systems.

1.2      When and how to regulate AI

Araz Taeihagh proposed that, first, we should move toward anticipatory and adaptive regulation rather than responsive regulation, given issues such as sharply increasing information asymmetry, power dynamics, the opacity of some of the new algorithms being used, and the pacing problem of our slowness in reacting and iterating in the context of AI. We cannot simply wait to see what happens and respond afterwards; it is important to leverage the ability of government to facilitate this process and to use that influence to steer developments in the right direction. Second, we should develop sector-specific standards and problem-specific approaches; for example, we should have a clear idea of who is responsible for regulating AI and how. Third, the field of computer science needs a clear system of accreditation and certification, similar to fields such as engineering, architecture and medicine. Fourth, regulation of AI requires much broader engagement from fields outside computer science.

Yi Zeng had signed the Future of Life Institute’s open letter calling for a pause in the development of advanced AI models. He hoped that, during the six-month pause, a more general safety framework could be developed and people could adopt a more responsible attitude towards future AI. Prof Zeng has been emphasising the importance of such a safety framework as many Chinese companies and universities race to develop large language models to catch up with GPT-4. In his opinion, regulation of generative AI is very important, as some models are not only providing information for people to make decisions, but also making decisions for human beings.

1.3      The need for perspective

AI governance is a complex field at the intersection of various disciplines, including computer science, ethics and philosophy, law and public policy, economics, information systems and data analytics, medicine and healthcare, and more. Therefore, it is essential to draw upon expertise from various disciplines to foster informed conversations, develop comprehensive AI governance policies and frameworks, and encourage responsible and ethical AI usage.

Prof Taeihagh, for example, came to public policy as an engineer. Prof Shmueli shared that her business school houses an institute for law, science and technology, and observed that having technology researchers work closely with legal scholars makes a huge difference; for example, it can change how problems are framed and thought about over the long term.

Similarly, in the healthcare space, Ritu Agarwal shared that she has spent her entire professional career building multidisciplinary communities. Joseph Sung also proposed that more lawyers, general scientists, and psychologists should attend medical conferences; otherwise, such conferences would focus solely on clinical studies and evidence and neglect the implementation side of technology.

Jungpil Hahn acknowledged that we all have a role in ensuring that cross-disciplinary conversations and collaborations happen. The organising committees of international conferences, for instance, should direct attention to these issues through calls for papers and by reaching out to other communities.

2. AI and Healthcare

2.1      Opportunities

Joseph Sung shared that, first, AI can make diagnosis easier, faster, and more accurate; second, AI can be used to predict patients’ responses to drugs and improve management of diseases, particularly chronic illnesses like diabetes and hypertension; third, AI can help patients look after themselves and lead a healthier lifestyle.

Ritu Agarwal shared two use cases where AI is likely to have the most impact: personalised medicine, and telemedicine/remote monitoring. Other areas of opportunity include helping patients navigate the complexity of healthcare systems and services (akin to a concierge service) and enhancing health literacy.

Ngiam Kee Yuan stressed that natural language processing is particularly relevant to healthcare because a lot of data is captured in texts, such as conversations between patients and clinicians. Large language models (LLMs) can be applied to almost every aspect of healthcare. Now, Prof Ngiam and his team are planning to launch a new LLM-based chatbot for patients at NUHS. The chatbot is expected to answer patients’ questions and reduce the information asymmetry between patients and healthcare organizations.

Julian Savulescu shared that AI is particularly advanced in radiology — where visual data, or images, are used to diagnose diseases. In addition, AI can help select human embryos within in vitro fertilisation (IVF) by identifying which embryo is most likely to achieve a successful pregnancy.

2.2      Benefits and risks

Prof Sung commented on the positive impacts of AI on patients. First, patients can be better informed in this era of AI-practised medicine; for example, patients can directly consult ChatGPT, ask for a diagnosis, and even obtain a suggested treatment. However, it would also be dangerous if patients relied entirely on the technology and obtained medication that may not be appropriate. Second, AI can liberate doctors from mundane work like compiling reports, enabling them to spend more time talking with patients and thus improving the doctor-patient relationship. Therefore, we should be wary of the potential dangers while taking advantage of the good aspects of AI.

Prof Savulescu believed that the biggest risk of AI in medicine is value misalignment. When using AI, we have to set a value. Problems arise when we put in one value — say, length of life — but what patients actually want is quality of life.

Prof Agarwal proposed that the greatest risk is bias and discrimination. AI will amplify biases in human decision-making and biases in data, and it will be the patients who will suffer. The algorithms can have a scaling-up effect: one biased doctor probably affects ten patients; a biased algorithm could affect hundreds of thousands of patients.

2.3      Regulating AI in healthcare

Prof Savulescu proposed that technology will always move faster than regulation; therefore, we need a new model of regulation. Regarding data, for example, instead of creating layer upon layer of data privacy legislation that stifles innovation and research, we need an authority that has the responsibility for ensuring the ethical use of data and the ability to detect harm and provide compensation.

3. Areas for future research

3.1      Education

Education is key to fostering responsible AI use. Prof Shmueli proposed that, through education, we could have real guidance about what good and bad uses of AI are, and what consequences different uses could lead to. Prof Taeihagh stressed that significant effort needs to be put into education for people to be able to work well with machines. In the medical field, for example, education can help doctors understand how machines reach a conclusion and reduce their resistance to AI. Prof Agarwal proposed that patients also need education about AI: we need a national, indeed international, conversation that raises everybody’s knowledge and understanding of AI so that they can separate the false from the true.

However, over-reliance on AI can have negative effects. Just as Google Maps erodes our ability to navigate on our own and mobile phones erode our ability to remember phone numbers, doctors’ and medical students’ over-reliance on AI can erode their understanding of anatomy, physiology, and disease.

Medical education, for example, is ripe for an overhaul. Prof Sung proposed that instead of feeding students all the information, we should teach them the essential information: you cannot practise medicine without any knowledge of anatomy or physiology; at the same time, you need to know how to use data and how to differentiate noise from genuine information. In addition, we need to train doctors to use AI and to work with AI as a team, instead of talking about who is going to replace whom. That requires constant engagement as well as continual refinement of the algorithms as they are used.

3.2      Alignment

The topic of value alignment between humans and AI emerged as a theme across both panels. While Prof Hahn remarked that AI, as a technology, is agnostic to values, the discussion also leaned towards the understanding that there is some inherent value system within AI that influences its outputs. This inherent value system can thus conflict with human goals.

Such concerns have widespread implications. Prof Zeng remarked that there is a need to inform the public how AI generates its outputs and to moderate the language sufficiently such that AI does not appear overly confident, underscoring his view that understanding AI’s decision-making process is necessary before AI should play a bigger role in society. This is particularly relevant given the potential “existential risk” of AI misalignment. Even today, since AI does not truly grasp what it means to be human, any misalignment might result in far-reaching consequences.

In healthcare, the question of understanding AI’s values and their alignment with users takes on a more urgent significance. Prof Savulescu highlighted the risk of value misalignment in healthcare, where there is a need to be clear about the values we embed in AI, which values AI optimises, and how these align with our own. Prof Agarwal echoed this sentiment when she mentioned that, with explainable AI, users can make an informed decision on whether the recommendations of the algorithm are consistent with their values. Ultimately, “machine paternalism”, where the patient is unaware that treatments suggested by AI may not align with their values, should be avoided.

The uncertainty surrounding human-AI alignment thus necessitates further research into how we can ensure that AI aligns with our values, along with related research, explored in tandem, into determining the values that AI itself is intrinsically optimising.

3.3      Human-AI interaction

In parallel with discussions about potential misalignment between humans and AI in theory, both panels considered how humans will interact with AI in practice. Prof Zeng noted that AI’s output might need to be moderated so that it does not appear overconfident. His comments underscore the risk that the general public might mistake AI’s outputs as inappropriately authoritative, given the hallucination issues that currently beset AI (what Prof Chesterman suggested might better be described as “confabulations”).

In addition, there is also a need to understand AI more fundamentally, including making philosophical, yet practical, inquiries into the nature of intelligence, the self, and moral intuition as they apply to AI, before AI systems become “quasi-members” of society. This is particularly relevant as AI is making mistakes on sensitive issues, such as race and gender, that humans “cannot get wrong”.

In healthcare, the question of how AI can play the role of an effective team member remains to be fully explored, with research centred on clinical applications. While there are concerns that doctors might view AI as a competitor, or that doctors may mistrust AI due to a lack of transparency in how it arrives at its conclusions, Prof Agarwal highlighted the need to move on from the narrative of AI replacing doctors to a discussion of synergistic behaviours: how AI and doctors can work as a team and augment each other. There were also discussions of how AI can be used beyond direct patient care, in a more organisational role within healthcare systems themselves.

While these concerns do reflect a strong need to regulate AI and ensure that AI is safe for humans, they also reveal a more fundamental curiosity to understand AI’s role within human systems, from AI as it relates to the individual to how AI can play a role in teams, organizations, and even our society.

3.4      Advertising

Advertising models, especially those deployed on social media platforms, employ behaviour modification as a tool to capture and retain people’s attention. This is driven by interactive, iterative, and adaptive algorithms that learn from users’ behaviour as they interact with content and engage in feedback loops with the platform.

According to Prof Shmueli, this complex feedback loop makes these algorithms challenging to study: researchers would need access to the actions of both the users and the algorithms in order to study the effects. On top of the complexity of the data, Prof Shmueli highlighted the lack of access to such proprietary data, since companies do not want to collaborate with academics unless there are existing ties. Finally, even if researchers do manage to work closely with a company, exhaustive data is difficult to come by, since platforms may not be actively logging all the data streams. Thus, subliminal messaging and behaviour modification through the design of psychologically and behaviourally driven “nudges” remain poorly regulated. With the proliferation of ever more powerful algorithms, this situation will have serious ramifications for individuals, communities, and countries, and is thus an urgent field for more research and collaboration between academics, companies, and users.

The flip side of AI-driven advertising and behaviour modification is that the economic models of search engines are based on advertisements. With the advent of large language models and people’s increasing reliance on them for answers instead of searching in browsers, the aforementioned model may need to be updated. That line of research, Prof Chesterman stressed, would be compelling and would bring its own set of regulatory challenges.

3.5      Testing and acceptance

Prof Ngiam observed that, in his years of research developing AI models for healthcare applications, none of the AI tools his team trained exceeded doctors’ performance. The reason, he surmised, is that the training data come from the doctors: if a doctor has an error rate of 10%, the AI’s error rate will also be about 10%. Furthermore, despite advances in model architectures, training techniques, and access to data, AI models typically show degraded performance in deployment compared with their results in a controlled “lab” setting.

This phenomenon may stem from three factors. First, bias can arise from the lapse of time between training and deployment of the model. Second, it is challenging to anticipate the behaviour of the doctors who will use the models, that is, their degree of reliance on the AI’s decisions. Third, the data used to train the models may not be representative of real-world data, i.e., there is a shift in the domain of the data. In healthcare, the equipment used to take measurements is just one of many ways in which the same sort of data may differ enough to render a model useless. This is exacerbated by the fact that each hospital records data in different ways, so a model that works well in one hospital is not guaranteed to work as well in another.
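
By way of illustration (not from the symposium), the following minimal sketch of the third factor uses synthetic data and assumes the scikit-learn library: a classifier trained on one hospital's measurements loses accuracy when applied to a second hospital whose equipment records the same quantities with a different calibration and in different units.

```python
# Minimal, hypothetical sketch of domain shift between two hospitals.
# Synthetic data; assumes scikit-learn is available. Not from the symposium.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_hospital_data(n, offset, scale):
    """Two 'lab measurements' per patient; the outcome depends on the true values."""
    true = rng.normal(size=(n, 2))
    y = (true @ np.array([1.5, -1.0]) + rng.normal(scale=0.5, size=n)) > 0
    # Each hospital records the measurements with its own calibration
    # (offset) and units (scale), which is the source of the domain shift.
    X = true * scale + offset
    return X, y.astype(int)

# Hospital A: the data the model is trained and validated on.
X_a, y_a = make_hospital_data(5000, offset=np.zeros(2), scale=np.ones(2))
# Hospital B: same underlying biology, different equipment calibration.
X_b, y_b = make_hospital_data(5000, offset=np.array([2.0, -1.0]),
                              scale=np.array([3.0, 0.5]))

model = LogisticRegression().fit(X_a[:4000], y_a[:4000])

print("Accuracy on held-out Hospital A data:",
      accuracy_score(y_a[4000:], model.predict(X_a[4000:])))
print("Accuracy on Hospital B data:",
      accuracy_score(y_b, model.predict(X_b)))
```

In this toy setting the relationship between the underlying measurements and the outcome is identical at both hospitals; only the recording of the data differs, yet that alone is enough to degrade performance.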

In order to increase the acceptance of AI models in healthcare and to regulate them effectively, steps must be taken to ensure that models can generalise across domains in the same application and that data-generation systems across hospitals are in alignment. This is a challenge and an area of research for the AI field as well as for regulatory bodies, who must work in tandem.

Prof Agarwal raised another perspective on this issue, stressing the need to test AI algorithms in sandbox environments before deployment. This is especially important for “edge” cases, where the system is required to make predictions or recommendations for patients whose unique characteristics are not captured in the training data. The use of synthetic data may be particularly valuable here. Prof Savulescu echoed this, stating that although AI models used to classify the viability of embryos in IVF show impressive performance, there have been no clinical trials, so embryologists do not know whether such AIs actually perform better in the real world.
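
Under loose assumptions, such a pre-deployment sandbox check might look something like the following sketch: a toy model is probed with synthetic “edge case” patients far outside its training range, and any case where it remains highly confident is flagged for expert review. All data, thresholds, and names here are illustrative, not from the symposium.

```python
# Hypothetical sketch of sandbox-style testing on synthetic edge cases.
# All data, thresholds, and names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-in training data: two lab measurements per patient.
X_train = rng.normal(size=(2000, 2))
y_train = (X_train[:, 0] - X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Synthetic edge cases: measurement combinations far outside the training range.
edge_cases = np.array([[8.0, -6.0],
                       [-7.0, 9.0],
                       [10.0, 10.0]])

confidence = model.predict_proba(edge_cases).max(axis=1)

# Flag cases where the model is highly confident despite never having seen
# anything comparable: candidates for expert review before deployment.
for case, conf in zip(edge_cases, confidence):
    status = "REVIEW" if conf > 0.95 else "ok"
    print(f"inputs={case}, confidence={conf:.2f} -> {status}")
```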

The market introduces structural challenges, with companies needing to bring their AI models to market as soon as possible, sometimes bypassing safety checks. In many jurisdictions, regulatory agencies do not require clinical trials, because an AI system is neither a medical device nor a pharmaceutical; instead, it is a decision dynamic. For greater acceptance and safety, AI models used in healthcare should undergo clinical trials and be certified for production use by regulatory agencies or trusted third parties.