In the digital age, the proliferation of harmful online content, particularly memes, poses a significant challenge. Existing hate speech detection systems, being primarily text-based, struggle with the multimodal nature of memes, which combine imagery and text. The urgency of addressing this issue is underscored by initiatives like Meta's Hateful Memes Challenge.

In Singapore's diverse context, the challenge is heightened by localised meme content, including Singlish, various regional languages, and specific cultural references. This calls for sophisticated, multilingual, context-aware detection systems capable of interpreting such nuanced content. These systems must be both globally applicable and finely tuned to local cultural intricacies, a critical step in preventing the escalation of online hate and in preserving the stability and cohesion of diverse communities. Research must therefore prioritise systems adept at handling the multifaceted nature of harmful memes, particularly in low-resource environments.

AI Singapore is excited to launch the Online Safety Prize Challenge, a 10-week competition that aims to advance AI research in developing multimodal, multilingual, and zero-shot models. These models are expected to discern effectively between benign and harmful memes, focusing on the diverse and nuanced Singaporean digital landscape. Our goal is to foster safer online interactions worldwide, particularly in regions with limited data on harmful content.


The Challenge aims to identify harmful online content, specifically memes, that attack individuals or groups based on characteristics like ethnicity, race, religion, gender identity, and more. This content, often culturally contextual, requires participants to develop an end-to-end AI solution to differentiate harmful from benign memes. The lack of existing datasets, particularly outside the Western context, and the real-world scarcity of resources make this a zero-shot challenge.

Specifically, the Challenge targets end-to-end classification techniques for identifying harmful memes containing social bias, a subset of harmful online content. In the absence of ready-made datasets, participants must innovate by sourcing or creating their own, for example through data augmentation or automated meme generation. This initiative offers academics and innovators a unique opportunity to advance AI research in online safety while navigating a nuanced and diverse online communication landscape.
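To make the dataset-creation route concrete, here is a minimal sketch of template-based meme-caption augmentation. Everything in it is illustrative: the template strings, the `synthesise_memes` helper, and the 50/50 harmful/benign split are all assumptions, not part of the Challenge specification; a real pipeline would draw on curated, locally relevant sources (Singlish phrases, regional-language captions) and pair captions with images.

```python
import random

# Hypothetical caption templates; placeholders only, not real training data.
BENIGN_TEMPLATES = [
    "When {subject} finally gets {object}",
    "{subject} be like: {object}",
]
HARMFUL_TEMPLATES = [
    "{subject} always ruins {object}",  # stand-in for biased phrasing
]

def synthesise_memes(subjects, objects, n, seed=0):
    """Generate (caption, label) pairs by filling templates at random.

    label 1 marks a caption built from a 'harmful' template, 0 otherwise.
    A fixed seed keeps the synthetic dataset reproducible.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        harmful = rng.random() < 0.5
        template = rng.choice(HARMFUL_TEMPLATES if harmful else BENIGN_TEMPLATES)
        caption = template.format(subject=rng.choice(subjects),
                                  object=rng.choice(objects))
        samples.append((caption, int(harmful)))
    return samples

if __name__ == "__main__":
    data = synthesise_memes(["the uncle", "my neighbour"], ["kopi", "the MRT"], 4)
    for caption, label in data:
        print(label, caption)
```

The same idea scales to automated meme generation by rendering each synthetic caption onto stock images, giving weakly labelled multimodal training pairs where no annotated corpus exists.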


  • Researchers and Industry Professionals from around the world
  • Individuals interested in Online Trust and Safety, with experience in Natural Language Processing, Computer Vision, Machine Learning, and/or Deep Learning


Teams must submit their model as a containerised Docker image that adheres to the predefined input-output formats set by AI Singapore. Submissions will be evaluated on relevant metrics and ranked on a public leaderboard.
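The exact input-output contract is defined by AI Singapore's submission guidelines; the sketch below merely illustrates the shape of an inference entry point, assuming one JSON object per meme on stdin (with hypothetical `img` and `text` fields) and one JSON result per line on stdout. The `predict` function is a stub heuristic standing in for a real multimodal model.

```python
import json
import sys

def predict(meme):
    """Stub scorer; replace with a real multimodal model.

    `meme` is assumed to be a dict with an image path under "img" and the
    overlaid caption under "text". The actual schema is set by the
    organisers, not by this sketch.
    """
    # Placeholder heuristic so the script runs end to end; a real solution
    # would run vision and language encoders here.
    proba = 0.9 if "hate" in meme.get("text", "").lower() else 0.1
    return {"proba": proba, "label": int(proba >= 0.5)}

def main():
    # One JSON object per line in, one JSON result per line out.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(predict(json.loads(line))))

if __name__ == "__main__":
    main()
```

Wrapping a script like this in a Docker image keeps the evaluation environment reproducible regardless of each team's local setup.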


Start Date: 31 January 2024
Launch of Submission Portal: 14 February 2024
Deadline for Team Formation: 20 March 2024
Deadline for Submission: 11 April 2024
Presentation & Award Ceremony: 13 May 2024


  • Any individual who is at least 18 years of age can form/join a team to participate in the challenge
  • A team can comprise 1 to 6 members
  • An individual may not be on more than one team


The top 5 teams on the public leaderboard will be selected to showcase their solutions at the finals, which will be held during the 2024 ACM Web Conference (Singapore). Each finalist team will receive a complimentary pass to attend the 2024 ACM Web Conference. Three teams will be declared the winners of the challenge and awarded the following cash prizes:

  • First Place: USD 30,000
  • Second Place: USD 15,000
  • Third Place: USD 7,500