
Seeking 2 Postdoc Fellows for New AI Governance Pillar

AISG is offering two Postdoctoral Fellowships to work in its new AI Governance pillar. Candidates building a career in the area of AI Governance (broadly understood) are encouraged to apply. Successful applicants will become Fellows of AISG, with the possibility of affiliation with another Faculty of NUS.

Successful candidates will commence their Fellowships in August 2021 and conclude them in July 2022.

For more information, please visit:


AISG at CTE-STEM Conference 2021

Growing Singapore’s AI talent pool has always been one of AI Singapore’s missions, and supporting students and teachers in AI education is a great way of ensuring that.

From 2 to 4 June, the fifth APSCE International Conference on Computational Thinking and STEM Education (CTE-STEM) 2021 was held virtually by the National Institute of Education, Nanyang Technological University. International speakers were present at this online conference to share various aspects of computational thinking in education. AI Singapore was also honoured to speak about Intergenerational Learning with AI for Kids (AI4K)®.

Held in tandem with the CTE-STEM 2021 Conference was the BuildingBloCS (BBCS) High School Computing Conference 2021, supported by AI Singapore. The BBCS High School Computing Conference is part of the Computing Education Outreach Programme, organised by A-level and Integrated Programme (IP) Computing students for all students interested in Computing and AI. The theme for this year’s conference was Computational Thinking and AI Education, and the students enjoyed a series of workshops and talks, including a speech from AI Singapore.

AIWinpetition participants gathered for a group photo.

One of the highlights of the three-day conference was the AIWinpetition, in which participants from various secondary schools, junior colleges and polytechnics took part. They worked in groups to ideate and create a prototype of an AI application. The teams submitted an ideation write-up for each day of the competition and, on the last day, a video pitch and their prototypes. To encourage students of different skill levels to participate equally in this Winpetition, both code and no-code prototypes were considered during the judging process. A total of three Judges’ Choice Awards and one AISG Honourable Mention Award were given out.

Students using computer vision to create animation.

All in all, even without much coding experience, the AI Winpetition was very fun and manageable. I would highly recommend it to students who have an interest in computing, even for those who have little to no knowledge of programming, as there are a lot of introductory courses that will get you on your feet.

– Testimonial from one of the participants

AI Singapore is proud to support Singaporean educators and students and will continue to innovate in terms of activities and initiatives to increase students’ and educators’ exposure to AI and to increase the AI talent pool in Singapore.

New AI Readiness Index Framework to Increase Awareness & Boost AI Adoption in Singapore

AISG has unveiled its AI Readiness Index (AIRI), a comprehensive and easy-to-implement industry-focused framework which enables organisations to self-assess the status of their artificial intelligence (AI) adoption readiness. Subsequently, they can leverage appropriate programmes to embark on a journey to improve their AI readiness.

AISG has enabled companies to build production-ready AI solutions over the past four years, with 37 solutions in development and 29 in deployment. Having engaged over 500 companies, AISG has distilled the critical success factors from these engagements into the AIRI framework.

AIRI, a first-of-its-kind initiative in Singapore, will help organisations to assess their ability to build or buy AI products and take a recommended approach to their AI strategy, helping make Singapore a stronger and more AI-competent nation.

Professor Ho Teck Hua, Executive Chairman, AISG, said, “AI is going to be pervasive in all aspects of life in the near future. AISG has worked hard to develop this index by translating the learning points and outcomes from our engagements with hundreds of companies. The index is a critical and practical tool that will enable Singaporean businesses to benchmark their AI preparedness. It will also give companies a baseline they can use to further their adoption and use of AI.”

Dr Chng Zhenzhi, Director, National AI Office, Smart Nation and Digital Government Office, said: “AIRI is a useful tool for organisations to assess where they stand in their AI journey. AISG has been supporting Singapore’s National AI Strategy by growing a pipeline of local AI talent, bringing together researchers from Institutes of Higher Learning/Research Institutions, AISG’s AI engineers and companies to jointly develop innovative AI solutions, and helping companies adopt AI. AIRI will enable AISG to better tailor its suite of programmes to each organisation’s needs, and be more effective in helping organisations use and benefit from AI.”

First Step to AI Readiness Framework

With the introduction of this framework, interested organisations can take the AIRI Assessment via the website. The assessment can be completed within fifteen minutes and should be taken by a senior representative with sufficient knowledge of his/her organisation to answer questions on:

  • Organisational Readiness
  • Business Value Readiness
  • Data Readiness
  • Infrastructure Readiness

Within these four pillars, nine key dimensions have been identified to help organisations plan for suitable approaches for successful AI adoption and scaling. 

Upon completion, a system-generated report shows the organisation’s current AI readiness, allowing it to identify the gap between its current and desired states. Broadly, the report categorises organisations into one of four categories: AI Unaware, AI Aware, AI Ready, and AI Competent.

AISG envisages that for organisations that are AI Unaware, the focus will be on increasing their AI literacy. For entities that are AI Aware, the goal would be to prepare them to adopt AI solutions. For organisations that are AI Ready, the aim is to help them to accelerate their AI adoption. Finally, AI Competent organisations will be encouraged to deepen their organisational AI capabilities.

These approaches will enable an organisation to ascertain the next steps, principally how it can move forward with a targeted approach to improve its organisational AI readiness. Organisations are also welcome to continue the discussion with AISG on mapping their next steps forward towards AI adoption.

AI Singapore will be able to provide support towards improving organisations’ AI readiness through these programmes:

– AI for Everyone (AI4E)®

– AI for Industry (AI4I)® and AI Certification

– AI Clinic and AI Discovery

– AI Advisory Projects

– 100 Experiments

– AI Apprenticeship Programme (AIAP)® or

– AI Data Apprenticeship Programme (AIDP)


Feedback from Organisations on the AIRI Assessment

As part of the pilot effort to test the framework ahead of the public launch, AISG worked alongside small and medium businesses as well as multinationals to conduct the AIRI assessment in their organisations. The early participants included IBM, RenalTeam Holdings and uParcel and their insights proved invaluable. 

Julian Tan, Senior Manager, Analytics Solutions, IBM, said: “Adopting AI will be essential for businesses to thrive in this century. The AISG AIRI assessment is a holistic, systematic health check with a set of well-defined pillars and dimensions for businesses to benchmark and guide their AI adoption strategy. IBM takes pride in being a leading global hybrid cloud and AI provider. The results of the assessment have given us additional confidence that our early investment into AI skills and business transformation is on the right track to deliver tangible business value, as well as given us critical insights to steer our future directions.”

Chan Wai Chuen, Managing Director, RenalTeam Holdings, a dialysis treatment healthcare provider said: “Many business executives often share a common misconception that AI is only suitable for large companies with financial means and IT capabilities. The AI Readiness Index (AIRI) directly addresses these misconceptions. AIRI’s structured approach provides a practical step-by-step framework for businesses to assess their readiness for AI.  It shows that AI adoption is possible for all businesses, large or small. For business executives concerned about AI disrupting their industries, I am confident that the short investment of their time in the AIRI assessment will be one of the best investments they will ever make to future-proof their businesses.” 

William Ng, Chief Operations Officer, uParcel, a key on-demand 24/7 courier service in Singapore, said: “AIRI provided uParcel with a quick self-assessment of our organisation’s current AI readiness level. The assessment results allowed us to understand where we currently stand and how to get to where we want to be. As a result of AIRI, uParcel will enrol more software engineers into AISG’s AI for Industry (AI4I) programme to close the gaps identified by AIRI. Our previous engagement with AISG through an AI Advisory project has helped improve our delivery efficiency by 20%; thus, we are looking to deepen our AI capabilities through the 100E programme with AISG.”

Industry Collaboration

As part of AISG’s effort to seed greater awareness of AIRI, it will collaborate with various industry partners and trade associations to get their customers and members to undergo the AIRI assessment. Some of the leading partners on board include Dell Technologies Singapore, Microsoft Singapore, Singapore Polytechnic and SGTech.

Integrating DVC Into AI Singapore’s Data Platform

(By Desiree Chen)

When working on projects, AI Engineers and AI Apprentices conduct multiple experiments involving various hyperparameters, producing several versions of data and models to be tracked. Such tracking allows one team member to reproduce an experiment conducted by another and perhaps subsequently improve on the model. Data Version Control (DVC) provides a convenient way for this tracking to be done.

A typical AI project involves many versions of data and models.
(Image source : DVC homepage)

DVC is an open-source Version Control System for Machine Learning Projects. For more details, please refer to these articles previously published on this blog : Data Versioning for CD4ML – Part 1 and Part 2. In this article, I set out to share how DVC has been integrated into AI Singapore’s Data Platform.

Primer: GitLab

GitLab is the DevOps platform used by AI Engineers and AI Apprentices here at AI Singapore. Each project undertaken by AI Singapore is initialised with its own data repository. This is done via bash scripts, run in a Docker container, that use the GitLab API to create GitLab groups and, under each group, GitLab projects (in our case, the data repository). Relevant users are accorded access to the group as Owner, Maintainer, Developer, Reporter or Guest. The GitLab API is a REST API; among the HTTP methods used in managing the groups and projects are POST and GET.
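As an illustrative sketch, a group-creation call like the ones made by our bash scripts could be prepared in Python. The endpoint and token header follow the public GitLab REST API, but the host name, token value and group name below are hypothetical placeholders:

```python
import json
import urllib.request

def build_create_group_request(base_url, token, name, path):
    """Prepare (but do not send) a POST request to GitLab's
    /api/v4/groups endpoint to create a new group."""
    payload = json.dumps({"name": name, "path": path}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/v4/groups",
        data=payload,
        method="POST",
        headers={
            "PRIVATE-TOKEN": token,  # personal access token (placeholder)
            "Content-Type": "application/json",
        },
    )

# Hypothetical host, token and group name for illustration only.
req = build_create_group_request(
    "https://gitlab.example.com", "glpat-xxxx", "project-x", "project-x"
)
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) would then create the group, after which the data repository can be created under it with a similar POST to the projects endpoint.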

Borrowing the idea of the Cookiecutter approach, all data repositories have a data folder containing raw, interim and processed subfolders. In addition, each data repository has DVC initialised, with the DVC remote configured to be our on-premise S3-based data store (Dell ECS).

This is how the data repository looks initially.

$ tree
.
└── data
    ├── interim
    ├── processed
    └── raw

4 directories, 0 files

Usage of DVC with S3

Referring to the screenshot below, when data (in this case, resale-prices.csv) is uploaded to the Data Platform and eventually stored in our S3, a git push is made in the data repository so that the corresponding ‘.dvc‘ file for the data appears under the raw folder.

As a suggested use case for the different subfolders under the data folder, the user uploads the raw data to the raw folder. The interim folder is then used to store data that has been cleaned, filtered or feature engineered. Finally, the processed folder contains the train and test data.

This is a sample of how the data folders may be structured.

$ tree
.
└── data
    ├── interim
    │   └── resale-prices-removed-duplicates.csv.dvc
    ├── processed
    │   ├── resale-prices-test.csv.dvc
    │   └── resale-prices-train.csv.dvc
    └── raw
        └── resale-prices.csv.dvc

4 directories, 4 files
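The ‘.dvc‘ files in the tree above are small text pointers that record a hash of the actual data, which itself lives in the S3 remote; Git only ever versions the pointer. The sketch below illustrates the core idea with a simplified pointer format and a made-up sample file (the real files written by `dvc add` carry additional fields):

```python
import hashlib
from pathlib import Path

def write_dvc_pointer(data_path: Path) -> str:
    """Mimic the core idea of a .dvc pointer file: record the MD5
    hash and name of the data file, so Git versions only this small
    text stub while the data itself lives in remote storage (S3)."""
    md5 = hashlib.md5(data_path.read_bytes()).hexdigest()
    pointer = f"outs:\n- md5: {md5}\n  path: {data_path.name}\n"
    data_path.with_suffix(data_path.suffix + ".dvc").write_text(pointer)
    return pointer

# A small sample file standing in for resale-prices.csv.
sample = Path("resale-prices.csv")
sample.write_text("month,price\n2021-01,500000\n")
print(write_dvc_pointer(sample))
```

Because the pointer changes whenever the data's hash changes, committing pointers to Git gives each data version a reproducible identity, which is exactly what lets one team member pull the same data another member trained on.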

The diagram below shows the basic workflow involving DVC.

Integrating DVC into the Data Platform is just the beginning. When rolling out this new feature, it was crucial to create a training video and provide demonstrations to showcase suggested ways that AI Engineers and AI Apprentices can interact with the Data Versioning feature in the Data Platform. With that, AI Singapore’s Data Engineering team could ensure greater use of DVC by the various project teams.

Future Work

Versioning of data is often thought of together with versioning of models. For AI practitioners, it is the ability to version both data and models that allows experiments to be reproducible. While Git tracks changes to source code, DVC tracks changes to data and models.

With the integration of DVC into the Data Platform, all projects undertaken by AI Singapore have the option of data and model versioning. The standardised structure of the data repository provides better dataset management – whether raw, interim or processed – for AI Engineers and AI Apprentices. In my interactions with project teams and start-up teams at AI Singapore, the observation is that such teams will have an increasing need not just for data versioning, but for model versioning as well. 

At the time of writing (March 2021), DVC has released Version 2.0, which contains more features, such as those relating to Experiment Management. Ongoing work can further the use of DVC in AI Singapore’s Data Platform, especially with this latest release.

The Data Engineering Series

This article was written by Desiree Chen, graduate of the AI Apprenticeship Programme.
Read more about her experiences here.

Desiree appreciates that there is a place for arts and humanities in the domain of Artificial Intelligence. A creative person, she once took a sabbatical to pursue her love for music. She plays the piano and cello; the latter being an instrument which she took up in her adult years. If there was one language that she would like to improve on, it would be Norwegian; so that she can go beyond buying groceries, ordering food and once answering transport survey questions when travelling on the metro in Oslo. : ) She hails from the sunny island of Singapore. 🏖️

Supporting the TagUI Telegram Community with Hasky

TagUI is a free Robotic Process Automation (RPA) tool offered by AI Singapore. It also has a thriving international user community that builds applications with it across different domains. While the TagUI GitHub page is the go-to place for their work, users have also found value in joining the official Telegram chat group to get their questions answered.

A few weeks after its launch, it was soon realised that some of the answers to questions raised in the Telegram group could already be found among the GitHub Issues pages. With more than a thousand question-and-answer pairs already logged, this is a boon, although going through them is no mean feat. We decided to integrate an information retrieval system to better support users with timely automated suggested answers.

Enter Hasky.

Hasky (a play on the words husky and ask) is an information retrieval system that can be used to build chatbots, FAQs and any question-answering system. It can be used out-of-the-box and also comes with the possibility of improved performance when deployed with an additional data labelling step. In this case, Hasky was integrated with the TagUI Telegram chat under the default configuration.

When a member sends a question in a message, Hasky goes to work. It sniffs out a question when the text is between 5 and 32 words long and ends with a question mark.
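That heuristic is simple enough to sketch directly. The function name and the assumption that both bounds are inclusive are our own illustration, not Hasky's actual source:

```python
def looks_like_question(text: str) -> bool:
    """Heuristic for deciding whether a Telegram message should
    trigger an automated answer search: the message must end with
    a question mark and contain between 5 and 32 words."""
    stripped = text.strip()
    words = stripped.split()
    return stripped.endswith("?") and 5 <= len(words) <= 32

print(looks_like_question("How do I install TagUI on Windows 10?"))  # a likely question
print(looks_like_question("Thanks!"))  # too short, and not a question
```

The word-count bounds filter out throwaway one-liners on the short end and long rambling messages on the long end, keeping the retrieval step focused on well-formed questions.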

Digging into the heap of issues logged in GitHub, Hasky hunts for the parts that best answer the question. By employing self-supervision techniques, the need for data labelling is significantly reduced. Data labelling would make Hasky more accurate, but it is not required at the initial stage.

With that, we have built a layer of automation for the benefit of users seeking to create automation workflows for themselves and others.

If you are or want to be a user of TagUI, you are encouraged to join the Telegram chat group here.

If you are intrigued by Hasky and would like to explore it for your own use, get in touch with us under “AI Bricks”.


Deepfakes & Misinformation: The Ethics Behind AI

(This article was contributed by the SMU AISG Student Chapter)

Deepfakes are fake events, commonly involving the swapping of faces and/or manipulation of facial expressions, which largely resemble the real thing but are in fact artificially created by leveraging deep learning, a form of artificial intelligence. To put it briefly, an artificial neural network is first fed thousands of images in order to train it to identify and reconstruct patterns such as faces. Upon completion of training, the neural network can then be used to match and swap faces and expressions in videos and images. You might assume that such a complex process would only work if programmed by an artificial intelligence expert. However, there is a wide suite of readily available deepfake tools on the internet for anyone to kickstart their own deepfake project with ease.

The Threat

The most worrying aspect of deepfakes is their potential for spreading misinformation. Anyone could simply create a deepfake of an important public figure or celebrity saying or doing something highly inappropriate. A popular example is the video of Barack Obama making highly inappropriate comments in his “public service announcement” (as seen here). Had it not been revealed to be a deepfake, this video alone could have severely damaged Obama’s reputation and credibility. While most deepfakes of politicians and celebrities currently on the internet serve as amusing content, they simultaneously reveal the harmful implications deepfakes can have on society. In the wrong hands, deepfakes could swing elections, create tensions and even incite violence.

Face manipulated using green screen. (Source: BBC)

Deepfakes also pose a serious threat to the future if not mitigated or regulated in the digital world. Deepfake technology is growing more sophisticated by the day, making it ever harder to distinguish between a real and a fake video (Toews, 2019). This can potentially harm businesses, stock markets and personal reputations, as well as cause conflicts between countries. To understand its future impact, consider a scenario in which a malicious person posts a video of the US president being racist against Asians, and it circulates across the globe. Such a situation could lead to riots as well as mass defamation of the president, ultimately hurting an entire nation.

The fight against deepfakes going into the future will require constant reinvention, as the technology is improving at a rate that outpaces even AI experts. In one telling example, when a group of AI experts tried to detect whether certain videos were deepfaked, they failed 40% of the time. Thus, technologies must be developed and people must be educated on this topic, so as to detect and prevent deepfakes from creating havoc.

Over the past few years, misinformation, in the form of fake news, images and videos, has been proliferating across the internet. While most of us are aware of the implications of deepfake technology, we tend to dismiss them as inconsequential more often than not. However, a closer look at the spread of the technology shows that we have good reasons to be worried. The increasing sophistication, ease of use, and “democratization of access” of deepfake-based mobile phone applications and software that enable ordinary individuals to propagate misinformation is a concerning trend indeed (Tung, 2019). Moreover, the code to generate deepfakes and various implementations of the algorithm have been published as open source on the internet, which makes it particularly easy for anyone with a basic knowledge of artificial intelligence, programming and software development to create manipulated media.

As students and, more broadly, as members of society, we need to be concerned about the ethics of deepfakes. Is it really alright to turn a blind eye to the massive amount of misinformation that deepfakes enable? Do we not have a responsibility to speak up about the increasing cybercrime, including non-consensual pornography, that has also been enabled by them? According to Jaiman (2020), “creating a false narrative using deepfakes is dangerous and can cause harm, intentional and unintentional, to individuals and society at large.” Those of us who engage in the creation of deepfakes, including big technology companies like Google and Microsoft which offer the capabilities to generate them, arguably have a strong moral obligation to regulate the use of such media and ensure that they are implemented ethically.

Individuals, Companies and Governments

When it comes to combating the malicious use of deepfakes, there are three main actors involved: individuals, companies, and the government. The fight against the weaponization of this technology can succeed only if these three components cooperate. Individuals have the responsibility to educate themselves on concepts such as media literacy and to improve their critical and analytical thinking skills. Artificial intelligence and associated technologies, however powerful and beneficial, should neither be overhyped nor be seen as a panacea for societal problems. As mentioned previously, companies, including social media platforms like Facebook, Snapchat, Instagram and TikTok, also have ethical and social obligations to frame and document community standards and posting guidelines that discourage malicious practices. Additionally, if content is found to be intentionally or unintentionally harmful to an individual or a group of individuals, then guidelines must be in place to take it down or limit its sharing. The government also bears responsibility to oversee the practices of both individuals and corporations.

In a nutshell, only a mix of technological, socio-political, and regulatory measures can effectively address the magnitude of ethical challenges posed by deepfakes and the AI behind the technology.

Written by:

Nandini Sangeetha Nair, Rohan Manoj Kuruvilla, Lim Zhi Hao
– SMUAI Subcommittee



The views expressed in this article belong to the SMUAI Subcommittee and may not represent those of AI Singapore.
