
Delivering A Personalised Experience For Library Users

The user experience is very important to NUS Libraries. With eight libraries under its wing, a good experience is not just about making it convenient for users to search across multiple databases; it is also about delivering personalised recommendations to engage them on wider and more in-depth use of library resources.

The National University of Singapore (NUS) is consistently ranked as one of the world’s top universities and offers the most extensive range of degree courses in the country. The university’s libraries, collectively known as NUS Libraries, offer a wide range of resources to support teaching and research across the various schools and faculties, their graduate divisions, as well as administrative units and research institutes.

In December 2019, NUS Libraries launched a unified search system called FindMore that enables library users to perform searches across multiple internal and external databases and consolidates the results for them.

The next step now is to customise the user experience by augmenting the search engine and email newsletter with a recommender system. “Personalised experience for users is very important to us,” said university librarian Lee Cheng Ean. “This will encourage users to read more by uncovering more diverse yet relevant items from the university’s rich database of resources.”

To deliver this, NUS Libraries has embarked on its first attempt at developing an AI recommender. This is being done in partnership with AI Singapore (AISG), under the AISG 100 Experiments (100E) programme. The aim is to use AI to engage library users better by analysing their profiles and reading histories, in order to recommend items that may be of high relevance and interest to them.

To meet these requirements, the project team has developed two AI models:

  • Search-based recommender: to be integrated into FindMore, this model learns from the natural language features of a search term, the titles and descriptions of items in the database, and the historical transactions between every user and item. Based on these, the model is able to process a query in real-time and return a list of alternative recommended titles to complement the default search results, thereby increasing the diversity and novelty available to users.
  • Email recommender: to be integrated into subscribers’ email newsletters, this model learns from the individual’s profile and reading history, and merges this with the titles and descriptions of items in the database, and the historical transactions between every user and item. Based on these, the model is able to make title recommendations that are personalised to every subscriber.
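The implementation details of either model are not published here, but the general recipe described above can be sketched in a few lines. The code below is a minimal, illustrative hybrid recommender, assuming TF-IDF text features, cosine similarity for relevance, and loan counts as a popularity prior; the item titles, transaction counts and blending formula are all invented for the example and are not the NUS Libraries implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy catalogue: titles stand in for the titles/descriptions in the database.
items = [
    "Introduction to Machine Learning",
    "Deep Learning for Natural Language Processing",
    "A History of Southeast Asian Art",
]
# Toy transaction counts per item (e.g. past loans), used as a popularity prior.
transactions = np.array([120, 45, 300])

vectorizer = TfidfVectorizer()
item_vecs = vectorizer.fit_transform(items)

def recommend(query, top_k=2, alpha=0.8):
    """Blend text relevance with log-scaled popularity and return top titles."""
    relevance = cosine_similarity(vectorizer.transform([query]), item_vecs).ravel()
    popularity = np.log1p(transactions) / np.log1p(transactions).max()
    score = alpha * relevance + (1 - alpha) * popularity
    return [items[i] for i in np.argsort(score)[::-1][:top_k]]

print(recommend("machine learning"))
```

Lowering `alpha` gives more weight to popular items, which is one simple way to trade relevance against the diversity and novelty goals mentioned above.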

The effectiveness of the model was evaluated via user testing of the AI recommender, where the solution achieved a click-to-open rate that was 4x higher than the global benchmark* for the education and training industry.

The team has also delivered a pipeline to automatically consume incremental data and update the model. This enables the system to make recommendations to NUS Libraries’ email subscribers on a weekly basis, suggesting relevant items based on their profiles, preferences and transactional history.

The AI recommender has been implemented in NUS Libraries and is accessible via the FindMore portal:

“We are excited to deploy the recommender on a larger scale as a beta, and look forward to the system returning results which will encourage users to read a more diverse set of publications. We also hope to gather feedback that will enable us to further improve the system and continue to elevate the user experience.”

Cheng Ean


With the kind permission of NUS Libraries, the source code and a sample (anonymised) dataset for the search-based recommender model have also been licensed under Apache 2.0 and made freely available for the public to access:

If you would like to know more about the 100E programme, please visit

You can also join the AISG Community and 100E Community groups here: 

Growing the TagUI Community

TagUI RPA is a free and leading open-source RPA tool by AI Singapore.
It is perfect for automating ad-hoc workflows done at individual or department levels. Whether your company is new to Robotic Process Automation (RPA) or an experienced enterprise using commercial RPA tools, TagUI offers the lowest barrier to entry and the lowest TCO (total cost of ownership).

This is possible because TagUI is the only RPA tool on the market that is truly free, without any paywalls or subscriptions to access its feature set and technical support. In this article, we’ll introduce the TagUI RPA Community Group and share updates from the last couple of months.

  1. AI Singapore Community: TagUI RPA Group.
  2. TagUI New Features.
  3. TagUI Technical Webinar.
  4. Community Contributions.
  5. Community Articles.

1. AI Singapore Community: TagUI RPA Group

With the launch of the AI Singapore Community social network, TagUI RPA users now have a new space to congregate. As one of over 30 groups (and growing) around AI/ML, this is the place for the discussion and diffusion of knowledge among AI practitioners. All groups and posts are publicly viewable and, with free registration, you can engage with the community, create your own groups/forums, and access numerous free courses on AI. Some good news – there will be new courses on TagUI in the coming months!

The daily TagUI conversations in the Telegram group chat, the weekly Zoom Q&A sessions and the periodic updates through LinkedIn will all continue. This social space adds another support channel for TagUI users on their RPA journeys. The forums make it easy to organise topics and give users fine-grained control over which forums and topics to subscribe to for email updates. For a start, we created the topics below, and we look forward to users creating their own.

Figure 1: The AI Singapore Community

In particular, the content for the user workflow showcase is only available there, because TagUI doesn’t have any other existing channel that can present it in an organised way. This is the best place to share your RPA masterpiece with the world!

2. TagUI New Features

Take note of the following new features, Windows and Telegram users!

  • Microsoft Word Plug-in v3 released: create RPA robots directly in MS Word (Figure 2).
Figure 2: Microsoft Word plug-in
  • Microsoft Excel Plug-in v3 released: set up RPA data and analyse RPA reports (Figure 3).
Figure 3: Microsoft Excel plug-in
  • Added support for automating the Microsoft Edge web browser (visibly or invisibly).
  • Send Telegram notifications from your RPA robot with “telegram id message”.

3. TagUI Technical Webinar

Catch the latest webinar by Ken Soh, creator of TagUI, below.

4. Community Contributions

An acknowledgement of the technical contributors in the community and their work.

  • Allan Zimmermann from Denmark 🇩🇰 created a C# .NET version of TagUI, so that C# users who wish to create RPA in the C# programming language instead of TagUI’s human language can easily do so. He also created a TagUI module for Node-RED, a popular workflow automation tool for creating event-driven automations, as well as a TagUI Docker image that lets any Docker user easily spin up an instance of TagUI in human language or Python.
  • Subhas Malik from India 🇮🇳 created VS Code TagUI language extension. This plug-in lets VS Code IDE users enjoy syntax highlighting, Intellisense auto-complete and more.
  • Suhail Ahmed from United Arab Emirates 🇦🇪 has started working on a Go version of TagUI, for users of the rising programming language from Google.
  • Ken Soh from Singapore 🇸🇬 made numerous enhancements for RPA for Python, the leading Python RPA package he created and maintains in his personal time. The Python package lets Python users enjoy the benefits of TagUI by making it easy to create RPA robots in Python programming language.

5. Community Articles

We also found some interesting content on TagUI coming from different corners of the world. It helps if you understand Korean or Spanish!

As always, check out the TagUI homepage for the latest info!


A Simple 3-Step Framework for Machine Learning Workflows

It’s one thing to build a platform tool that integrates Machine Learning workflows, but it’s another to build it so that everyday, non-technical employees can navigate and use it. Here at AI Singapore (AISG), we want to embrace an inclusive design for Machine Learning platforms, such that any business or startup can not only use AI products, but also create their own AI-operated products (e.g. a chatbot).

Over the course of several industry-led AI projects, our lean design team at AISG has come up with a simple 3-step breakdown of the average ML process/workflow. Note that this is meant to be a flexible, malleable, adaptable framework, with the goal of simplifying what can seem an unnecessarily convoluted series of steps into the following:

  1. Onboarding
  2. Model Training
  3. Results & Evaluation

Be it Natural Language Processing (NLP) or Computer Vision, this 3-step flow can be applied to most branches of Machine Learning. 


1. Onboarding

The first step of every ML workflow is onboarding, which gives the user a first look at the platform, as well as what’s expected of them to kick things off. In most cases, that means introducing the task of uploading a required dataset, file, image, or video to get started.


Formatting the Dataset

Since a set of training data can consist of anything from a collection of text, to images, videos, and the like, it is important to communicate clearly to the user what data the model needs, and what specific format it should be in.

The user needs to understand how to prepare and organize the data for upload. You can use a checklist to simplify the requirements for the user:


Dataset formatting checklist for Natural Language Processing

  • .csv format

  • 2 columns of data only

  • first row contains headers (e.g. “Title”)

  • subsequent rows (second and onwards) contain cells filled with different “groups”* of knowledge data respectively

    *If your knowledge data is not yet grouped/categorized, please refer to this document on how to do so
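A checklist like the one above can also be enforced in code before upload. The helper below is a sketch that mirrors those rules (two columns, a non-empty header row, complete data rows); the function name, messages and sample file are our own illustrative choices, not part of any specific AISG platform.

```python
import csv
import io

def validate_dataset(csv_text):
    """Return a list of problems found; an empty list means the file passes."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    if not rows:
        return ["file is empty"]
    problems = []
    header, body = rows[0], rows[1:]
    if len(header) != 2:
        problems.append(f"expected exactly 2 columns, found {len(header)}")
    if not all(h.strip() for h in header):
        problems.append("first row must contain non-empty headers")
    if not body:
        problems.append("no data rows after the header row")
    for i, row in enumerate(body, start=2):
        if len(row) != len(header):
            problems.append(f"row {i} has {len(row)} cells instead of {len(header)}")
        elif any(not cell.strip() for cell in row):
            problems.append(f"row {i} has an empty cell")
    return problems

sample = "Title,Group\nOpening hours,FAQ\nLoan periods,Policies\n"
print(validate_dataset(sample))  # an empty list: the sample passes
```

Surfacing these messages at upload time turns the checklist into immediate, actionable feedback for the user.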

Other Onboarding tips:

  • Are there any other inputs needed from the user?
  • Are there any other decisions the user needs to make?


2. Model Training

Once the required dataset has been uploaded and the conditions are set for the model to begin running, the user needs a basic understanding of how to fine-tune the model once training starts.

It is extremely important that users know how to evaluate the output to assess whether it is addressing the target user’s needs.


  • Does the chatbot model answer the question with the right data?
  • Does the CV model match the right face to the right person?


3. Results & Evaluation

Once the user has made their own adjustments to try and improve the model, it is important to give them relevant statistics so they can judge the quality of the results and make an informed decision about when to stop training. In most cases, a Confidence Level is used.

Please explain clearly and succinctly to the user what that is and how to interpret it.

If you’re intending to display other forms of data visualizations like pie charts and matrices, ensure there is sufficient explanation and visual guides on how to interpret such different forms of data.
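One concrete way to put the Confidence Level to work, rather than showing a raw probability, is to let it decide which predictions are shown directly and which are routed for human review. The sketch below is purely illustrative; the questions, scores and the 0.70 threshold are invented, not taken from any particular AISG platform.

```python
def triage(predictions, threshold=0.70):
    """Split predictions into those confident enough to show and those to escalate."""
    answered, escalated = [], []
    for question, answer, confidence in predictions:
        if confidence >= threshold:
            answered.append((question, answer, confidence))
        else:
            escalated.append((question, confidence))
    return answered, escalated

# Hypothetical chatbot predictions: (user question, model answer, confidence).
predictions = [
    ("Where is the Science Library?", "Opening hours and locations", 0.91),
    ("Can I renew an inter-library loan?", "Loan policies", 0.55),
]

answered, escalated = triage(predictions)
for question, answer, confidence in answered:
    print(f"{question} -> {answer} ({confidence:.0%} confident)")
for question, confidence in escalated:
    print(f"{question} -> routed to a human (only {confidence:.0%} confident)")
```

The threshold is a product decision: raising it gives fewer but safer automatic answers, which is exactly the trade-off the statistics should help the user understand.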



Once you’ve familiarized yourself with this 3-step process, bear in mind that it is actually a loop, with likely more actions in between each step, that looks something like this:

Using AI With Confidence

Ensuring robustness in AI systems against malicious attacks garners greater attention

Over the past few years, Artificial Intelligence (AI) has exploded in capability and is becoming increasingly common in our everyday lives, from Tesla’s Autopilot[1] to the wider adoption of voice assistants on smartphones, to diagnostic assistance for medical workers in medical imaging and, more recently, the detection of Covid-19[2].

However, not all adoption has been without unintended consequences. For example, a 2018 collision involving a Tesla killed the driver; Amazon’s AI recruitment system discriminated against women in the job application process[3]; and an AI cancer diagnosis system exposed private information about whose data was used.

These accidents are all vivid reminders that, with more AI models deployed in systems impacting human life, it is important to manage the unintended consequences that could become harmful. Otherwise, they will erode society’s trust in AI and its applications, which will in turn hinder the full realization of its benefits.

The benefits that AI brings are tremendous. According to a report by the McKinsey Global Institute published in September 2018[4], AI has the potential to add 16 percent, or around $13 trillion, to global economic output by 2030, amounting to an average contribution to productivity growth of about 1.2 percent annually.

It is therefore critical to enable a growth-driven yet safeguarded adoption, commonly referred to as Trustworthy AI.

Although there is no formal definition of Trustworthy AI, it is well accepted that a few qualities need to be taken into consideration when designing an AI system, including explainability, fairness and robustness. Each of them warrants a separate discussion. This article will focus on robustness, with future articles outlining the other principles. In the following, we will elaborate on what robustness in AI is and how we can manage it, such that accidents are less likely to happen and adoption can grow based on trust in AI.

Figure 1: The mangled Tesla car on the highway after crashing into a highway barricade. The driver was killed in this unfortunate event.

What is AI?

An AI system, to a large extent, is a software system but the key difference lies in the way it is built. With AI (specifically machine learning), we no longer write code to tell the system explicitly what to do. Instead, we “teach” the system by providing it with examples[5].

This process is called training, and the examples that are provided to the AI system constitute the training data.

Through the training process, the system develops a model which is capable of completing a task, for example, the detection of road signs (for autonomous vehicles). The training process needs information on how to utilize the data, what type of model (e.g., decision tree or neural network) to use, etc. This information is usually codified in the training algorithm. 

The model’s effectiveness and accuracy are then evaluated using a separate set of examples called the testing data.

When the model achieves the required level of performance, it is deployed into production together with a set of logic that enables it to interact with other system components or the external world. 

An AI system thus comprises three main components: data, model and code.
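These three components can be seen in even the smallest machine learning script. The sketch below uses a synthetic scikit-learn dataset and marks where the data, the model and the code appear; the dataset and hyperparameters are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Data: examples the system learns from (training) and is judged on (testing).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Model + training algorithm: here, a decision tree fitted to the training data.
model = DecisionTreeClassifier(max_depth=4, random_state=0)
model.fit(X_train, y_train)

# Code: the surrounding logic, including evaluation on held-out testing data.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {accuracy:.2f}")
```

When the measured performance is good enough, the fitted model is what gets deployed, wrapped in further code that connects it to the rest of the system.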

Robustness against potential attacks

Traditional software systems are often secured by measures such as establishing a security perimeter to prevent intruders from gaining access to the system and writing secure code to prevent exploits such as SQL injection. With AI systems, however, there are two additional dimensions that expand the attack surface. 

The first has to do with the data that is used to train the AI model. The volume and quality of the data are key to the effectiveness and accuracy of the AI model. This data is usually collected from sources that lie outside the security perimeter of the system, and could potentially expose the system to a new suite of attack vectors. 

For example, attackers could inject bad data into the training data and cause the AI to extract wrong patterns from the manipulated data. According to a recent study by MIT[6], the 10 most cited AI datasets are loaded with label errors; ImageNet, the canonical image recognition dataset, has an estimated label error rate of 5.8%. The researchers also looked at 34 models whose performance had previously been measured against the ImageNet test set, re-evaluating each model against 1,500 examples where the labels were found to be wrong. Models that had not performed well on the original, incorrectly labelled examples performed much better once the labels were corrected, which suggests that label errors distort measured model performance.

Now imagine that, in the case of road sign detection in autonomous vehicles, an attacker deliberately injects a large percentage of stop signs wrongly labelled as speed limit signs. If this data poisoning goes undetected and the model extracts the wrong pattern, the AI model in the autonomous vehicle could misclassify a stop sign as a speed limit sign, with potentially life-threatening consequences.
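The stop-sign scenario can be simulated at toy scale: flip the labels of a fraction of one class in the training data (the “stop signs”) and compare test accuracy against a model trained on clean labels. Everything below is synthetic and illustrative; real poisoning attacks are far subtler than random label flipping.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def train_and_score(flip_fraction, seed=0):
    """Flip a fraction of class-1 training labels to 0, train, score on clean test data."""
    rng = np.random.default_rng(seed)
    y_poisoned = y_train.copy()
    ones = np.flatnonzero(y_train == 1)  # the "stop signs"
    flipped = rng.choice(ones, size=int(flip_fraction * len(ones)), replace=False)
    y_poisoned[flipped] = 0              # relabelled as "speed limit"
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)   # evaluated against the true labels

clean = train_and_score(0.0)
poisoned = train_and_score(0.6)
print(f"clean labels: {clean:.2f}, 60% of one class flipped: {poisoned:.2f}")
```

Because the flipping is one-directional, the poisoned model is systematically biased towards the other class, mirroring a model that learns to read stop signs as speed limits.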

Another dimension is the AI model itself. When training an AI model, the focus is usually on its generalization capability, i.e., how well it performs with unseen data. Using the road sign example, the model should be able to detect road signs on the road even if they may look different from those in the training data, e.g., discoloration of the sign due to the sun or the growing of moss and vines on the signboard. 

To achieve this, the model is trained on one set of training data and then evaluated on a set of testing data which it had not been exposed to during the training phase. The testing data is usually derived from the same distribution as the training data, so that the evaluation could provide an unbiased estimation of the AI model’s performance. A fundamental assumption here is that all future unseen data will be from a similar distribution as the training data (in-distribution generalization).

However, in the real world, another type of unseen data is more common – unseen data that is statistically different from the training data. This type of unseen data can undermine the robustness of AI models and cause them to be brittle. 

The inability to handle this type of unseen data can have serious implications; the Tesla Autopilot failure mentioned earlier is a real-world example of its impact.

There are attacks that set out to exploit these vulnerabilities, called evasion attacks. An evasion attack happens when the AI model is given a carefully perturbed input that is indistinguishable from the original to a human but completely confuses the model (see Figure 2). For example, an attacker could attempt an evasion attack by erecting a road sign designed to mislead the AI model, e.g., a sign that is interpreted as a higher speed limit than allowed on that road[7]. It has also been reported that a similar evasion attack can be launched by shining a laser beam on an object[8], causing AI models to misclassify a trolleybus as an amphibian and a parking sign as a soap dispenser.

Figure 2: An example of adversarial images with vastly different predictions. Something as simple as a row of modified pixels can cause AI models to fail while a human being would still be able to discern the original object in the picture.
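The idea behind such perturbations can be demonstrated without deep learning at all. For a linear classifier, stepping an input along (or against) the weight vector will eventually cross the decision boundary and flip the prediction, often with a change much smaller than the input itself. The sketch below does exactly that on synthetic data; gradient-based attacks on neural networks, such as FGSM, generalise the same idea.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
original = model.predict(x.reshape(1, -1))[0]

# Step against the currently predicted class: this lowers its decision score.
w = model.coef_.ravel()
direction = -w if original == 1 else w
step = 0.05 * direction / np.linalg.norm(direction)

x_adv = x.copy()
while model.predict(x_adv.reshape(1, -1))[0] == original:
    x_adv = x_adv + step

flipped = model.predict(x_adv.reshape(1, -1))[0]
print(f"prediction flipped: {original} -> {flipped}")
print(f"perturbation norm: {np.linalg.norm(x_adv - x):.2f}, "
      f"input norm: {np.linalg.norm(x):.2f}")
```

Each step moves the decision function by a fixed amount, so the loop is guaranteed to cross the boundary; a real attacker would additionally constrain the perturbation to stay imperceptible.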

To enhance the robustness of AI models and ensure that they can be deployed safely, the models will need to be trained to go beyond in-distribution generalization and work better with unseen data that is statistically different from the training data (out-of-distribution generalization) as well.

The key to Robust AI

Three key ingredients will play a big role in making AI robust: people, process, and technology.

First, we will need to train people with good knowledge of how AI works and where it can go wrong. 

AI Singapore has been playing a significant role in this aspect with its flagship talent programme, the AI Apprenticeship Programme (AIAP)®. AIAP sets out to train AI professionals for the industry using a combination of deep-skilling training and hands-on experience with a real-world project, all with an emphasis on self-directed learning[9].

After completing the AIAP® programme, the apprentices will be well equipped with the necessary skill set to tackle the challenges of developing and deploying AI systems in the industry. 

AI Singapore also has other programmes for less technical audiences like AI for Everyone (AI4E)®, AI for Kids (AI4K)®, AI for Students (AI4S)®, and AI for Industry (AI4I)® to prepare the future generations of AI talents[10].

In addition, AI Singapore is working together with the national cybersecurity community to equip cybersecurity professionals with new AI skillsets. To that end, AI Singapore has signed a collaboration MOU with the International Information System Security Certification Consortium (ISC)2 Singapore Chapter[11] to augment cybersecurity professionals with AI proficiency.

Second, as many organizations are starting to move AI use cases from pilot to production, there are growing concerns about the unintended consequences of AI systems. There are emerging attempts to regulate AI systems, e.g., the EU’s proposed AI regulation[12].

Nevertheless, an operationalizable process guideline is still missing to guide organizations in developing, verifying, deploying, and monitoring AI systems. To this end, AI Singapore is working with a number of partners from both the public and private sector to define a technical reference which focuses on aspects like robustness and resilience, transparency, explainability, data protection in AI systems, and how to evaluate these qualities. 

We are also putting great effort into making those process guidelines operationalizable by incorporating them into our MLOps pipeline. Last but not least, on the technology front, we will need to build new tools that make the best practices and standards we are setting up more operational. For example, AI Singapore is currently working with its collaborators from the NTU Cybersecurity Lab to develop a tool that evaluates a trained AI model’s out-of-distribution generalization. Every time a new model is trained, a report will be generated prior to deployment, outlining the model’s ability to perform on new and unseen data in order to estimate its readiness for the real world.


Ensuring the robustness of AI systems is trickier than securing traditional software because of the additional aspects of data and model training. These vulnerabilities can lead to new attacks such as data poisoning and model evasion[13]. With more AI models being deployed in systems with significant impact on human life, these attacks could have wider security and safety implications that have to be addressed with the right combination of people, process and technology. In this article, we have presented a quick overview of what AI robustness is and what AI Singapore is working on. In upcoming articles of this series, we will delve deeper into our endeavours regarding the related principles of trustworthy AI.

Stay tuned! 


  1. Tesla Autopilot:
  5. Software 2.0:
  6. Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks: Accessed on 28 April 2021
  7. DARTS: Deceiving Autonomous Cars with Toxic Signs: Accessed on 28 May 2021.
  8. Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink: Accessed on 17 Jun 2021.
  13. These are just two of the many possible attacks to AI systems. NIST has published a taxonomy and terminology of possible attacks in

AISG Launches “Prize Challenge” to Curate Ideas and AI Models to Combat Fake Media

The “Trusted Media Challenge” is a five-month-long competition that aims to attract the AI community to design and test AI models and solutions that can detect audiovisual fake media, where either or both the video and audio modalities may have been modified. The initiative, targeted at AI enthusiasts and researchers from around the world, also aims to strengthen Singapore’s position as a global AI hub by incentivising the involvement of international contributors and sourcing innovative ideas globally.

Participants in this Challenge will have access to datasets of original and fake media videos with audio. The Challenge is conducted in partnership with Mediacorp’s CNA and Singapore Press Holdings’ The Straits Times, who have provided about 800 real video clips, including news and interviews. In addition, custom videos were collected from consenting actors. In total, there are approximately 4,000 real clips and 8,000 fake video clips for the participants to train and test their models on.

The Challenge is open to researchers and industry professionals from around the globe, and to anyone interested or experienced in machine learning, deep learning and computer vision, especially media forensics. Participants need to build AI models that estimate the probability that any given video is fake.

From today until 15 December 2021, participating teams can submit their solutions – codes and models – via the Challenge Platform provided by AI Singapore. This platform will automatically rank the submissions on the leaderboard.

The winner that emerges from this Challenge stands to earn prize money of S$100,000 and a start-up grant of S$300,000 to develop their solutions further, using Singapore as the development base. Prizes and start-up grants will be awarded to the top three winners. The total prize money amounts to S$700,000 (about US$500,000).

Fake media technology, or deepfake tech, is becoming mainstream, delivering benefits yet posing a variety of threats. The technology has allowed movie producers to modify videos and dialogue without expensive reshoots, facilitated professional training, and been used to protect the identities of those being persecuted, among other applications. At the other end of the spectrum, deepfakes are used to sow mistrust and seed scams, making them an existential threat to societies today. If left unchecked, fake media risks becoming a serious national security concern.


Trusted Media Challenge Timelines
The Trusted Media Challenge opens on 15 July 2021. Interested participants can obtain full details and training data via the Challenge Platform

The Challenge is divided into two phases: Phase 1 will last four months, and the top teams from Phase 1 will enter Phase 2. The best submissions will be counted and shown on the leaderboard; prize money will be awarded based on the ranking in Phase 2.

The announcement of the top three winners is expected to take place in January 2022.

The AI Singapore Community is Launched!

We are pleased to make the above announcement!

Bringing you social networking 👍

Since AI Singapore was established in 2017, we have engaged with various organisations, communities and individuals in Singapore’s artificial intelligence space. From the early days, we have had a community forum, especially popular with our LearnAI members for discussing course material. It has now been superseded by the AI Singapore Community social network, which has more powerful features.

Join the Community! 🏃🏽‍♀️🏃🏃🏿‍♂️

As you read this article, look at the header of the page. If you do not already have one, register for a LearnAI account (1️⃣). Choose the free LearnAI and Community Membership plan.

Once you have registered, proceed to login (2️⃣). After that, you are ready to enter the AI Singapore Community (3️⃣) !

If you are familiar with Facebook, you will be perfectly at home here.

You can see the activities in the Community in the form of a continuous newsfeed.

Interest Groups 👥

The Community is made up of spaces called Groups. Each group caters to a specific interest in the world of AI/Machine Learning/Data Science. Whether you are interested in the technical deployment of an AI model or the business transformative aspects of AI, join a group that interests you. If you feel really passionate about a topic that no one else is talking about yet, you can start your own and attract like-minded folks! Be sure to follow the guidelines when doing so.

Of course, we have not forgotten our LearnAI members who have found the original forum so useful. We have set up dedicated groups you can join to make your AI journey more fruitful.

Discussion Forums 💬

Forums are found within groups to provide a structured environment for fruitful discussions to take place. We hope that as you discuss, learnings take place and everyone wins. Do observe the Code of Conduct put in place.

Social Networking 🙋

What is social networking without a personal page? Here, you can tell the Community more about yourself.

Found someone you think will be interested in a collaboration? Drop him/her a private message! Or make a call for collaborators in the Seeking Collaborators group.

Got a business problem or solution? 🤝

We provide spaces for people to come together in the AI Solution Seekers and AI Solution Providers groups. Spell out your business problem and look for offers. Or go out and show what cool stuff you can deliver.

Where’s the party? 📣

If you are organising an event for the AI/ML/Data Science community, you will want to broadcast the details in the Events group. We look forward to seeing a vibrant calendar of happenings in Singapore’s AI scene.

And much more …

As the Singapore AI ecosystem strengthens and matures, we hope the AI Singapore Community social network serves the needs of those interested in AI, regardless of their specialty or the stage in their journey. All are welcome under one roof. Come right in and see!

Egypt Adopts AI Singapore’s AIAP and AI4E to Upskill Nation

AISG and Egypt’s Ministry of Communications and Information Technology (MCIT) have signed a cooperation agreement to implement in Egypt two of AISG’s premier AI talent development programmes, the AI Apprenticeship Programme (AIAP)® and AI for Everyone (AI4E)®. This collaboration will help Egypt in its strategy to launch capacity-building programmes to propel the country’s AI capabilities.

This move comes shortly after Egypt announced last week the launch of its national AI platform to embrace and embed AI technologies and information globally.

The virtual ceremony which took place on 12 July 2021 was witnessed by Singapore’s Minister for Communications and Information and Minister in Charge of Smart Nation and Cybersecurity, Mrs Josephine Teo and Egypt’s Minister of Communications and Information Technology, H.E. Dr Amr S. Talaat. The signatories to the agreement were AI Singapore’s Director of AI Innovation, Mr Laurence Liew and Advisor to MCIT’s Minister for AI Dr Golestan Radwan. In attendance were also the Singapore Ambassador to Egypt, Mr Dominic Goh and Egyptian Ambassador to Singapore, H.E. Mahmoud El Maghraby.

The first thrust of the collaboration will have AISG conduct a one-time training for a newly formed 10-member core Egyptian team on how to implement and run AISG’s award-winning AIAP. This training is expected to run for up to four months, after which the trained Egyptian experts will cascade their knowledge by training their counterparts. The AIAP is a full-time, nine-month deep-skilling programme that trains and grooms AI talent to work on real-world AI projects and enhances career opportunities in AI-related roles.

AIAP was launched by AISG in early 2018 out of a need to groom local AI talent to research, develop and deploy AI solutions for Singapore’s industry, under AISG’s flagship 100 Experiments Programme (100E). 100E helps local companies solve their business problems through the co-development of AI solutions and by helping companies build their own AI teams. Since then, AIAP has been recognised by the industry as a successful innovation and talent accelerator programme, with over 150 AI engineers trained and hired in the industry to date.

In recognition of AIAP’s innovation in talent development, IDC named AISG a country winner in the 2019 “Talent Accelerator” category. The successful AIAP model can be adopted by any organisation through the AIAP eXternal collaboration model (AIAP-X): partners build their own AI talent by leveraging AIAP best practices, content and know-how, and engage AISG as advisors, similar to the collaboration with Egypt.

A second thrust of the collaboration will involve Egypt translating AI4E into standard Arabic for nationwide promotion. Started in 2018, AI4E is an effort to demystify AI for the public and companies. The 3-hour online course has also been adopted by Singapore’s secondary schools, junior colleges, polytechnics and institutes of technical education to build AI literacy among students. To date, more than 41,000 people in Singapore have completed AI4E, and the nation is en route to its target of 100,000.

“AI Singapore is honoured and excited to be part of Egypt's strategy to launch capacity-building programmes for its country. We are pleased that Egypt has selected AISG’s AI Apprenticeship Programme and AI for Everyone to accelerate its own AI talents. This is a testament that AISG’s talent development models are effective, and can be scaled and adopted globally. We hope that with the introduction of these two programmes, Egypt will realise its ambition to be an AI-aware nation,” said Laurence Liew, Director of AI Innovation, AI Singapore.

Laurence Liew

"We believe the Singaporean experience in AI is an impressive one that deserves to be shared with the world. The AIAP is quite unique and we believe it is vital as a next step after students graduate from university or other training programs. The gap between what students learn in courses and what is expected of them in real-life AI projects is quite large, and a person can only be considered ready for the job market if they have at least had one experience of working on an end-to-end project within a professional team, solving a real problem."

Dr Golestan Radwan

Like Singapore, Egypt has embarked on a more digitalised future with an ambitious AI strategy over the past few years. This collaboration between AISG and Egypt will continue to promote international cooperation in areas related to AI and capacity building in digital skills. The collaboration will also see new opportunities for AI companies in both countries. AISG will partner with Singapore’s agencies and trade associations to bring 100E AI companies and others into Egypt and the region.

UX Design for Machine Learning & Artificial Intelligence

Try to recall the first time you came across the phrase, “UI/UX Design” or User Interface/ Experience Design.

You might’ve asked:
What is User Interface?
What is User Experience?
And what exactly is the difference between the two?

On top of these questions, a third element is thrown into the mix: UI/UX design for Artificial Intelligence and Machine Learning technology. Designers now face the task of designing user interfaces and experiences for a growing array of products and workflows that incorporate Machine Learning techniques.

Essentially, the problem at hand becomes:

How should we design user interfaces and experiences for people to understand and use AI products?


Here at AI Singapore (AISG), one of our aims is to create useful machine learning software and products for industries, companies, and everyday users. We want to harness the power of AI to make people’s lives easier.

Correspondingly, our UX team at AISG has one major goal: to create and design useful, easy-to-understand ML experiences that are seamless and joyful for users.

In our upcoming series of AI4UX articles, we’ll explore the aforementioned questions, introduce design frameworks, and present real-life case studies, all in an attempt to share and compile the design work we do here at AI Singapore (AISG).

Whether you’re a fellow UX Designer, AI Engineer, or just a curious reader, there’s bound to be something interesting for you to learn from these resources.

Stay tuned!


Carving Out a New Career Path with AIAP

Resigning from a stable job in a reputable MNC and jumping into the unknown is always a risky proposition. But it was a chance that Mohamed Hanifa was willing to take, for the opportunity to immerse himself in the field of machine learning (ML).

A computer engineering graduate from Nanyang Technological University, Hanifa had been working as a marketing data scientist with Palo Alto Networks for a year when the AI Apprenticeship Programme (AIAP) beckoned.


At Palo Alto Networks, he was responsible for campaign targeting, generating business insights, and gathering requirements from fellow field marketeers to build various data products.

Before that, while working as a data engineer/analyst at LynxAnalytics, he was responsible for deploying various ML systems as well as data pipelines.

However, the models he worked with were mostly developed by his colleagues. “I was not able to understand how they worked under the hood,” said Hanifa.

This nagged at him. Besides having a natural curiosity about data models, he also wanted to learn more about them so that he could troubleshoot whenever there were issues with the inferencing engine or model deployment.

He read what he could on the topic, diving into books like “An Introduction to Statistical Learning” and completing a Coursera programme on data science. But, he felt, “theoretical knowledge won’t amount to much without practical real-world experience. I felt a full-time programme would be the best way to completely immerse myself in the ML field.”

When Hanifa heard about AIAP and the opportunity it offered for apprentices to design and deploy ML models in a commercial setting, he decided to go for it, quitting his job at Palo Alto Networks to join the full-time programme.

Over the next nine months, he sharpened his technical skills, especially in coding deep-learning frameworks using Python. His engineering background and earlier self-learning efforts helped in this respect.

“Having some technical knowledge like understanding basic programming constructs and AI in general are important skillsets to have even before you start on AIAP. It provides the foundation for building up your AI/ML skills.”


During the apprenticeship programme, he also had the opportunity to work on a natural language processing project, building a model to answer common questions raised by an IT ticketing system. The aim was to free up the IT team to focus their attention on more complex or unusual problems.
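Question-answering systems of this kind are often built around retrieval: an incoming query is matched against historical question-answer pairs, and low-confidence queries are escalated to a human. The sketch below is a minimal, hypothetical illustration of that idea using bag-of-words cosine similarity; the FAQ data, the `answer` helper, and the confidence threshold are all invented for this sketch and do not reflect the actual model built during the apprenticeship.

```python
import math
import re
from collections import Counter

# Hypothetical FAQ pairs standing in for historical ticket data.
FAQ = [
    ("How do I reset my password?",
     "Use the self-service portal and click 'Forgot password'."),
    ("How do I request VPN access?",
     "Submit a VPN request form; approval takes one working day."),
    ("My printer is not working, what should I do?",
     "Restart the print spooler, then re-add the printer."),
]

def tokens(text):
    # Lowercase word tokens; a real system would use richer NLP features.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    # Cosine similarity between two bag-of-words count vectors.
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def answer(query, threshold=0.2):
    q = Counter(tokens(query))
    scored = [(cosine(q, Counter(tokens(question))), ans)
              for question, ans in FAQ]
    score, best = max(scored)
    # Queries below the confidence threshold go to the IT team,
    # freeing them to focus on complex or unusual problems.
    return best if score >= threshold else "Escalated to IT support."

print(answer("how can I reset my password"))
```

The escalation branch mirrors the project's stated aim: the model handles the common questions, and anything it cannot match confidently is routed to a person.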

His work on the project enabled him to develop soft skills as well, such as the importance of managing clients’ expectations and guiding them wherever necessary to achieve the best outcome.

Before he graduated from AIAP in August 2019, Hanifa secured a job as a data science consultant with ST Engineering’s newly created Analytics Business Unit, thanks to an opportunity that came up during a career workshop organised by AI Singapore.

In this new role, Hanifa was involved in building models for various solutions. The Python coding skills which he honed during AIAP helped him substantially in this aspect of the job, he said.

With the AI knowledge and skills developed through his apprenticeship, he was also able to contribute to various other initiatives within the organisation where AI expertise was required.

“Even though our projects have an AI component, our solutions are developed from end-to-end – from infrastructure to model deployment,” explained Hanifa. “Therefore, we not only have to choose the best performing model but also the one which can be deployed with ease.”

“These experiences have given me a wider exposure which would not otherwise be possible,” said Hanifa, who was recently promoted to assistant data science manager and is looking forward to doing more AI solutioning or consultancy work in the near future. “AIAP definitely played a major part in helping to carve out this new career path,” he said.

AISG Partners with Google Cloud to Advance Singapore’s AI ecosystem

AISG has partnered with Google Cloud to advance the adoption of AI and Machine Learning (ML) across industries in Singapore. This collaboration will have AI Singapore’s award-winning AI Apprenticeship Programme (AIAP)®, 100 Experiments (100E) and Makerspace programmes leverage Google Cloud AI and Machine Learning to solve real-world business problems with AI.

“Our partnership with Google Cloud will see both teams working on interesting industry projects, talent development and AI/ML best practices,” said Laurence Liew, Director of AI Innovation for AISG. “This collaboration will allow our teams to tap on Google Cloud’s engineering team and resources to improve our talent programmes and gain access to best-in-class AI tools and AI/ML best practices.”

Companies can approach AI Singapore to have engineers develop minimum viable products using Google Cloud’s technology under the 100E and Makerspace programmes. To date, AISG has helped to deploy more than 30 AI projects for the industry and built AI tools and solutions such as the world’s most popular open-source RPA tool, TagUI, which has more than 70,000 downloads and 4,000 stars on GitHub. Genesis Gym and an industrial asset inspection provider in Singapore are two organisations working with AISG to implement AI-based solutions with Google Cloud tools. The latter looked to 100E to develop an AI-based asset inspection solution for conducting first-level defect detection in environments such as offshore oil and gas platforms. Google Cloud tools including Firebase, Cloud Run, and Cloud Storage accelerated the development of this AI engine, which is expected to be implemented by end-2021.

“AI Singapore and Google Cloud have been valuable partners in our development of an AI-powered asset inspection solution. The team’s innovative thinking, willingness to try new methods and quick deployment have been a great help in advancing our goal to build an autonomous AI engine that can streamline our detection processes, reduce manual manpower and improve the accuracy and efficiency of inspection and reporting. With full deployment targeted for the end of this year, we are looking forward to seeing where this solution will take us, and are hopeful that together with AISG and Google Cloud, we are on the right track to be the gamechangers in the Energy and Renewables industry.”
Vinod Govindan
Founder and Managing Director,

Facing rising demand for customised online fitness services, Genesis Gym turned to AISG and Google Cloud to scale up its AI capabilities and computing resources to cater to online gym-goers. Genesis Gym is working to utilise Google Cloud’s AI tools for computer vision and pose estimation, to help users correct their movement errors and provide coaching tips catered to their specific needs.
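Downstream of a pose estimator, this kind of coaching feedback typically reduces to geometry on detected body keypoints, such as checking a joint angle against a target range. The sketch below, with invented coordinates, function names and an illustrative threshold, shows one way such feedback logic might look; it is not Genesis Gym’s actual model, which builds on Google Cloud’s AI tools.

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b (in degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to avoid domain errors from floating-point rounding.
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

def squat_feedback(hip, knee, ankle):
    # A deep squat closes the knee angle well below 100 degrees;
    # the threshold is illustrative, not a coaching standard.
    angle = joint_angle(hip, knee, ankle)
    if angle < 100:
        return "Good depth"
    return "Try to squat lower"

# Hypothetical 2D keypoints from one video frame (x, y in image units).
print(squat_feedback(hip=(1, 1), knee=(0, 1), ankle=(0, 2)))
```

In practice the keypoints would come from a pose-estimation model frame by frame, with the same angle checks applied over time to generate tips tailored to each user.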

“Providing quick and effective feedback to our clients is at the core of what we do at Genesis Gym. Following the rise of online classes, this process became increasingly challenging, which encouraged us to connect with AI Singapore’s 100E projects to find a solution. Together with AI Singapore and Google Cloud, we are working to create an easily scalable AI model that would allow us to continue in our mission in providing personalised services to our clients online, and are hopeful that we will be able to develop an automated process that adds value to their customised fitness experiences.”
Jonathan Wong, Company Director
Genesis Gym

Paul Wilson, Managing Director for Public Sector, Google Cloud Asia Pacific, said, “AI and machine learning are the next frontier of the digital age, and cloud technologies hold the keys to data-driven solutions that are in demand today. The partnership with AI Singapore sees us working jointly to spearhead new applications of Cloud AI to fundamentally change business models and advance innovation in Singapore. In doing so, we hope to play a role in sustaining the nation’s competitiveness, and transforming Singapore into a global hub for AI solutions.”


The next intake for the AI Apprenticeship Programme starts in September 2021. 
