AI Governance Research Grant Call 2023

The AI Governance Research Grant Call 2023 aims to fund multi-disciplinary research that explores novel, underexplored ideas and visions and/or has a clear and significant social impact.

AI Singapore invites proposals for advanced research on the following themes:

Theme One: Trustworthy AI – Should we trust AI?

Transformative technology offers tremendous opportunities but also raises ethical concerns and the potential for harm. We invite research that mitigates the risks of AI by promoting fairness, accountability, transparency, ethics, safety, and privacy. Governance is broadly understood to include laws, markets, networks, standards, and other tools.

Theme Two: Trust in AI – Will we trust AI?

The utilisation of AI will be enhanced by confidence that the end-to-end process is robust and accountable. We invite interdisciplinary research into the factors that shape perceptions of human-machine interaction and thereby influence the adoption of beneficial AI.

What are we looking for?

  • Proposals that bring non-traditional perspectives on questions of AI Governance, e.g. from the humanities and social sciences. This might include perspectives from anthropology, communications, economics (including behavioural economics), education, law, neuroscience, philosophy, political science, psychology, and sociology, among other fields.
  • Technical proposals are also welcome, but those with an interdisciplinary focus are more likely to be supported.
  • Proposals that build on existing work in this broad area but bring a regional focus to bear. This might include examining the impact of different histories/cultures/social institutions on the broad questions of AI governance.
  • All proposals should focus on a specific question and have a clear research focus.
  • Proposals should be specific about how they relate to AI or to specific AI technologies, rather than technology in general.
  • Proposals should include clear descriptions of the industry or policy implications of the proposed research.
  • For this call, AI Singapore is particularly (but not exclusively) interested in proposals that address the following focus areas:

Focus Area 1: Combating Disinformation Supercharged by Generative AI

“A lie can travel halfway around the world, while the truth is still putting on its shoes.” Combating disinformation has become even more challenging as it is supercharged by generative AI. For example, generative AI may reduce the cost of producing personalised and compelling disinformation at scale, in contrast to current approaches that count on a small amount of content going viral. Its ability to generate ever more realistic “deepfakes” and to impersonate human speech and writing styles also makes disinformation harder to detect. To protect our online spaces, we need a deeper understanding of how the nature of disinformation will evolve with generative AI and how it will impact society. We also need innovative, out-of-the-box solutions to counter the spread of supercharged disinformation. Some key questions include:

a) What are the scale and texture of the potential harms from the automated, large-scale generation of (i) misinformation, (ii) deepfakes, and (iii) malware, to which generative AI is ideally suited?

b) What safeguards can mitigate these risks while still allowing innovation?

c) Does labelling AI-generated content make a difference to risk-mitigation outcomes? How should labelling be done to maximise impact?

Focus Area 2: Evaluating the Quality and Fairness of AI

Developing objective evaluation methods for AI is challenging because many subjective factors are involved in assessing the quality of AI-generated content. Abstract ethical concepts like fairness require consideration of the social and cultural context in which the AI is used. Furthermore, ethical values can be contested or evolving (e.g. abortion). Evaluation methods need to adapt to such shifts in societal values and address instances where there is no consensus on what constitutes a universal “good”. We invite researchers to investigate the challenges of evaluating the quality of AI and to develop reliable and effective evaluation methods. Some key questions include:

a) How do we define fairness – algorithmic fairness (one narrow formalisation is sketched after these questions) versus public perception of what is fair?

b) How do we bridge the gap (both ways) so that people will responsibly trust AI and know how to assess it critically?

c) How do we dynamically factor ethical considerations, ranging from universal truths to contested and evolving values, into responsible AI evaluation methods to govern AI models effectively?
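Question (a) contrasts algorithmic fairness with public perception of fairness. As a purely illustrative sketch (not part of the call), the code below computes one narrow formalisation, the demographic parity gap: the difference between groups' rates of favourable predictions. The group labels and decisions are hypothetical.

```python
from collections import defaultdict

def demographic_parity_difference(groups, predictions):
    """One narrow formalisation of fairness: the gap between groups'
    favourable-prediction rates. A gap of 0 means every group receives
    favourable predictions at the same rate."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions for two demographic groups.
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
predictions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

gap, rates = demographic_parity_difference(groups, predictions)
print(f"Approval rates by group: {rates}")    # {'A': 0.75, 'B': 0.25}
print(f"Demographic parity gap:  {gap:.2f}")  # 0.50
```

Even this simple metric illustrates the tension in question (a): the 0.50 gap is arithmetically unambiguous, but whether it signals unfairness depends on social and cultural context that no single metric captures.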

Focus Area 3: Generative AI in Education and Skills Development

Given that ChatGPT can be a supportive tool or a crutch, educators need to be thoughtful about how the technology is used to enhance learning (e.g. by students as an information-gathering tool so that they can focus on higher-order skills like creativity), while putting in place guardrails to minimise misuse by students (e.g. over-reliance on ChatGPT or plagiarism, which compromises the development of critical thinking and problem-solving skills).

At the workplace, employers also need to consider how generative AI will change workflows, the potential risks of deskilling, and the need to upskill their workers to fully realise the productivity gains. For instance, with tools such as Codex handling routine coding, tech companies would need to redefine the job scope and requirements of coders around more creative and innovative tasks, while ensuring that essential coding skills are not eroded by the adoption of such tools. To understand how ChatGPT and other generative AI tools can best be used to enhance education and skills development, some key questions include:

a) How should the education sector use ChatGPT and other generative AI tools so that they support education objectives (e.g. supplementing and enhancing learning) rather than undermining them in the long term (e.g. removing the need for critical thinking, leading to deskilling)?

b) What are the new dynamics and risks to skills development when AI is integrated into work? How do we measure its impact on skills development and proficiency?

c) How should governments/companies approach the adoption of generative AI as a workforce productivity tool without leading to loss of essential skills in the long term?

Other Highlights

  • Principal Investigators (PIs) should hold a full-time appointment at a Singapore-based Institute of Higher Learning (IHL) or Research Institution (RI). They should also have a track record of leadership in coordinating research programmes and mentoring research teams, as well as productive research outcomes.
  • The Programme will support each project for up to 3 years. Funding for each proposal is capped at S$300,000 for single Principal Investigator (PI) projects and S$800,000 for multi-PI/discipline/institution projects, inclusive of 30% indirect research costs (IRC). An illustrative breakdown follows the table below.
Tier     Funding (inclusive of 30% IRC)   Category                                                 Duration
Tier 1   S$300,000                        Single PI                                                Up to 3 years
Tier 2   S$800,000                        Multiple PI / Inter-Disciplinary / Inter-Institutional   Up to 3 years
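As a purely illustrative aid, the sketch below decomposes each tier's cap into direct costs and IRC, assuming the common convention that IRC is levied at 30% of direct costs. That convention is an assumption here; the official call documents govern the actual accounting.

```python
# Hypothetical decomposition of the funding caps, ASSUMING the common
# convention that IRC = 30% of direct costs (cap = 1.3 x direct costs).
# The official call documents govern the actual accounting.
for tier, cap in [("Tier 1", 300_000), ("Tier 2", 800_000)]:
    direct = cap / 1.3   # maximum direct costs under the cap
    irc = 0.3 * direct   # indirect research costs on top
    print(f"{tier}: direct ≈ S${direct:,.0f} + IRC ≈ S${irc:,.0f} = S${cap:,}")
```

Under that assumption, a Tier 1 award of S$300,000 would cover roughly S$230,769 in direct costs plus roughly S$69,231 in IRC.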

The AI Governance Research Grant Call 2023 is now closed.

Schedule

Item                                                         Key Dates
Announcement of the Grant Call                               12 April 2023
Opening date for softcopy submission (via portal)            12 May 2023
Closing date for endorsed proposal submission (via portal)   30 June 2023
Evaluation and selection of proposals                        July – November 2023
Release of outcome and award                                 November 2023
Project commencement                                         1 January 2024


For any enquiries, please contact governance@aisingapore.org.
