Persevering through the disruptions caused by the COVID-19 crisis throughout their apprenticeship, eighteen apprentices completed their journey with the AI Apprenticeship Programme on 19 June 2020 and will begin their new careers as full-fledged AI Engineers this week.
Graduating in the middle of a pandemic has been an interesting experience for the batch. The last four months of the programme, half of their AIAP journey, were conducted virtually, and last Friday we held their final pitch workshop and graduation ceremony online as well. They also had to undergo multiple job interviews virtually, and we are glad that many landed an AI role even before the programme ended. Close to 80% of the apprentices found full-time roles before their graduation, with the rest at various stages of the recruitment process with potential employers – this bears testament to the strength, rigour and recognition of the programme.
100E and AIAP Success!
Two of our apprentices have opted to join their 100E project sponsor, Q&M Dental Group, to continue building out the solution they were developing during their AIAP. This is one of the outcomes we would like to see more of, and we will focus on making more of such “marriages” happen, where well-trained Singaporean AI Engineers join their Singapore-based project sponsors to build out their AI capabilities.
Lending support at the graduation ceremony was Mr. Ryan San, CEO of EM2.Ai (part of Q&M Dental Group), who was very appreciative of and pleased with the 100E/AIAP partnership.
“With this collaboration with AISG, we now have our first AI product that is viable and can be a foundation for our future development through AI.”
Ryan San, CEO of EM2.Ai
AI Singapore helps Singapore to re-start safely
To support the nation’s fight against COVID-19, one of AI Singapore’s Makerspace Bricks was re-purposed to support social distancing. An AI engineering team consisting of AI engineers and Batch 4’s apprentices was quick to pivot its attention and resources to address the needs of the global health situation, developing a novel social distancing application, FinePose, to encourage individuals to maintain a safe physical distance from each other. HP Inc., Singapore is currently deploying the solution to safely restart its factory operations in the region.
Preparing our apprentices for the workforce
In an effort to ensure that our graduating apprentices are well positioned with the necessary career skills to re-enter the workforce, a dedicated Talent and Career Management function was set up to provide job leads, career advisory and career skills workshops. Since its inception in February this year, the apprentices have had the opportunity to attend (i) a webinar on preparing for technical interviews, (ii) a workshop on writing an effective resume and marketing themselves on LinkedIn and (iii) one-on-one career advisory sessions.
Perspectives from our AIAP graduates
In her graduation speech, Michelle Loke shared: “This COVID crisis was unprecedented and some could say cast a shadow on our apprenticeship as it disrupted our learning and reduced our interactions. However, this pandemic is also accelerating the digital revolution. It is illustrating the importance of embracing technological disruptions, of being resilient and the need to constantly adapt to change. The creative thinking and AI skills that we have learnt on this journey have more than prepared us for this new normal.”
A former R&D Director, Sun Yikang, had this to say: “AIAP provided me more than a stipend…. All of these combined together created a huge advantage to help me with the transition. It saved me at least 2 years to transit myself into the AI industry. Now I can tell you exactly what I feel about AIAP — I feel very fortunate and really appreciate the programme.”
Click HERE to read the full transcript of their speeches.
Once again, congratulations to Batch 4 for being resilient throughout the 9-month AIAP and bearing with the disruptions brought about by COVID-19. We would like to thank them for being patient as we jointly discover the best ways to deliver AIAP fully online.
We wish our graduates all the best in their journeys ahead.
Abstract
Traditional cloud storage has relied almost exclusively on large storage providers, who act as trusted third parties to transfer and store data. This model poses a number of issues, including data availability, high operational cost, and data security. In this paper, we introduce a system that leverages blockchain technology to provide a secure distributed data storage with keyword search service. The system allows the client to upload their data in encrypted form, distributes the data content to cloud nodes and ensures data availability using cryptographic techniques. It also provides the data owner the capability to grant permission for others to search on her data. Finally, the system supports private keyword search over the encrypted dataset.
Authors
Hoang Giang Do, School of Computer Science and Engineering, Nanyang Technological University
Wee Keong Ng, School of Computer Science and Engineering, Nanyang Technological University
Below are 4 interesting papers from Lee Hwee Kuan’s team over at A*STAR BII on cancer detection using advanced analytics/AI techniques.
1) Nuclear Pleomorphism in Renal Clear Cell Cancer
The characteristics of the nuclei are often observed by pathologists when they assess the progression and presence of cancer cells in tissue biopsies. Cancerous tissue typically contains cells with enlarged, irregularly-shaped (pleomorphic) and darkly-stained (hyperchromatic) nuclei with prominent nucleoli. However, at different stages of the disease, the nuclear structure and prominence of nucleoli can change. The Fuhrman grading system for clear cell Renal Cell Carcinoma (ccRCC) was developed around these observed changes in the nuclei. It provides rules to classify the different stages of disease progression. Early stage ccRCC tumors typically have small, round nuclei with inconspicuous nucleoli, while late stage tumors have enlarged and irregularly-shaped nuclei with prominent nucleoli. Following on from our work on nucleoli detection, we have developed new machine learning methodologies to perform automatic grading of ccRCC histopathological images. From the histopathological images, we extract features describing the properties of multiple nuclei concurrently. This enables us to train classifiers that can distinguish the level of pleomorphism of the nuclei in the tissue sample, resulting in a higher accuracy in the automated grading.
Accepted for publication as: Daniel Aitor Holdbrook, Malay Singh, Yukti Choudhury, Emarene Mationg Kalaw, Valerie Koh, Hui Shan Tan, Ravindran Kanesvaran, Puay Hoon Tan, John Yuen Shyi Peng, Min-Han Tan, and Hwee Kuan Lee. Automated renal cancer grading using nuclear pleomorphic patterns. JCO Clinical Cancer Informatics. 2018.

2) Automated Image Based Tumor Risk Assessment System for Hepatocellular Carcinoma
The evaluation of both asymptomatic patients and those with symptoms of liver disease involves blood testing and imaging evaluation. We developed an automated image based tumor risk assessment system as part of a micro-array gene expression based prognostic stratification system for resectable hepatocellular carcinoma. Whole slide images of liver cancer tissue were divided into two groups, “Low Risk” and “High Risk”, by the micro-array gene expression based prognostic stratification system. These slides were then immunohistochemically (IHC) stained for different biomarker proteins. We developed an automated image based system to analyse the biomarker protein content. Our system predicted a Support Vector Regression (SVR) based score for each IHC image after quantification and analysis of the stain, and was able to predict a higher SVR score for “High Risk” patients than for “Low Risk” patients.
Published as: Oleg V. Grinchuk, Surya Pavan Yenamandra, Ramakrishnan Iyer, Malay Singh, Hwee Kuan Lee, Igor V. Kurochkin, Kiat Hon Lim, Pierce K. H. Chow, and Vladimir A. Kuznetsov. Tumor-adjacent tissue co-expression profile analysis reveals pro-oncogenic gene signature for prognosis of resectable hepatocellular carcinoma. Molecular Oncology. 2017.
Link: http://onlinelibrary.wiley.com/doi/10.1002/1878-0261.12153/pdf

3) Gland Segmentation in Prostate Histopathological Images
Glandular structural features are important for the tumor pathologist in the assessment of cancer malignancy of prostate tissue slides. The varying shape and size of glands, combined with the tedious manual task, can result in inaccurate assessment. There are also discrepancies and low levels of agreement among pathologists, especially in cases of Gleason pattern 3 and pattern 4 prostate adenocarcinoma. We have developed intelligent software to improve accuracy and reduce the labor of gland structure assessment on Haematoxylin and Eosin (H&E) stained prostate tissue slides. Our method can easily fit into the existing workflow of the pathologist. Prostate cancer glands, with their varying shapes, structures and sizes, pose an extreme challenge for automated gland segmentation systems. Our method achieved an average Jaccard Index score of 0.54 (the range is [0,1]; higher is better), outperforming various existing methods in the literature.
Published as: Malay Singh, Emarene Mationg Kalaw, Danilo Medina Giron, Kian-Tai Chong, Chew Lim Tan, and Hwee Kuan Lee. Gland segmentation in prostate histopathological images. Journal of Medical Imaging. 4(2), 027501, 2017.
Link: dx.doi.org/10.1117/1.JMI.4.2.027501

4) Automated Image Based Prominent Nucleoli Detection
The diagnosis and prognosis of cancers are a major issue for a trained pathologist. Inter-observer variability and the tediousness of tissue reading hamper the accuracy of the pathologist’s assessment. The analysis of prominent nucleoli is one of the main methods of cancer assessment. We have developed intelligent software to improve accuracy and reduce the labor of tissue reading for prominent nucleoli assessment on H&E stained slides. Our method can easily fit into the pathologist’s existing workflow.
Published as:
a) Choon Kong Yap, Emarene Mationg Kalaw, Malay Singh, Kian-Tai Chong, Danilo Medina Giron, Chao-Hui Huang, Li Cheng, Yan Nei Law, and Hwee Kuan Lee. Automated Image Based Prominent Nucleoli Detection. Journal of Pathology Informatics 6.1 (2015): 39.
Link: http://www.jpathinformatics.org/article.asp?issn=2153-3539;year=2015;volume=6;issue=1;spage=39;epage=39;aulast=Yap
b) Malay Singh, Zeng Zeng, Emarene Mationg Kalaw, Danilo Medina Giron, Kian-Tai Chong, and Hwee Kuan Lee. A study of nuclei classification methods in histopathological images. International Conference on Innovation in Medicine and Healthcare.
Link: https://link.springer.com/chapter/10.1007/978-3-319-59397-5_9
Big data, data science, machine learning, and now deep learning are all the rage and have tons of hype, for better—and in some ways, for worse. Advancements in AI such as language understanding, self-driving cars, automated claims, legal text processing, and even automated medical diagnostics are already here or will be here soon.
In Asia, several countries have made significant advancements and investments into AI, leveraging their historical work in HPC.
China now owns the top three positions in the Top500 with Sunway TaihuLight, Tianhe-2, and Tianhe, and while Tianhe-2 and Tianhe were designed for HPC style workloads, TaihuLight is expected to run deep learning frameworks very efficiently. In addition, Baidu of China probably has one of the largest AI teams in this part of the world, and it would not be surprising to learn that these large Internet companies are working closely with the likes of TaihuLight and the Tianhe team to develop their own AI supercomputers.
Japan is no stranger to AI and robotics, and has been leading the way in consumer-style AI systems for a long time. Remember that Fuzzy Logic washing machine? Japan’s car industry is probably one of the largest investors into AI technology in Japan today, with multiple self-driving projects within Japan and globally.
RIKEN is deploying the country’s largest “Deep learning system” based on 24 NVIDIA DGX-1 and 32 Fujitsu servers this year. Tokyo Tech and the National Institute of Advanced Industrial Science and Technology (AIST) have also announced their joint “Open Innovation Laboratory” (OIL), which will have the innovative TSUBAME3.0 AI supercomputer this year and an upcoming massive AI supercomputer named “ABCI” in 2018.
South Korea announced a whopping US $863M investment into AI in 2016 after AlphaGo’s defeat of grandmaster Lee Sedol, and this is an additional investment on top of existing investments made since early 2013 (Exobrain and Deep view projects). It will establish a new high profile, public/private research center with participation from several Korean conglomerates, including Samsung, LG, telecom giant KT, SK Telecom, Hyundai Motor, and Internet portal Naver.
Closer to home, Singapore has recently announced a modest US $110M (SGD $150M) national effort over five years to build its capabilities in Artificial Intelligence called AI@SG. Funded by the National Research Foundation of Singapore and hosted by the National University of Singapore, this is a multi-agency effort comprising government ministries, institutes of higher learning, and industry to tackle specific industry problems in Singapore. Besides a grand challenge problem (to be identified by end of the year), a major focus is on partnering with local industry to drive the adoption of AI technology to significantly improve productivity and competitiveness.
In particular, an effort called SG100 — for 100 industry “experiments” over five years — will work closely with industry partners to help solve their problems using AI and HPC with the combined efforts of all the government agencies and institutes of higher learning and research centers. As is typical of Singapore style, three big bets for AI have been identified in Finance, Healthcare, and Smart City projects. The compute backbone of AI@SG is expected to ride on new AI HPC systems and also leverage various HPC systems existing in Singapore, including the newly established National Supercomputing Centre.
AI being powered by HPC-style clusters is no accident. AI has always been a workload that HPC folks have been running — it’s just that it was not sexy to be associated with AI back then. Now, we can all come out of the closet.
About the Author
Laurence Liew is currently the Director for AI Industry Innovation in AI Singapore.
Prior to AI Singapore, Laurence led Revolution Analytics’ Asia business and R&D until its acquisition by Microsoft in 2015. Laurence and his team were core contributors to the open source San Diego Supercomputer Center’s Rocks Cluster distribution for HPC systems from 2002-2006 and contributed the integration of SGE, PVFS, Lustre, Intel, and PGI compilers into Rocks.
Using Kubernetes and Redis to host and scale a 24/7 cloud AI application leads to optimal resource use
What is Speech Lab?
Speech Lab is a speech recognition engine developed in collaboration with National University of Singapore (NUS) and Nanyang Technological University (NTU). Its specialized code-switching functionality is able to transcribe multiple languages and dialects in a single conversation. This makes it well-suited for the local context and other communities in Asia where multiple languages are spoken. It can also be customized to suit a variety of industries which require speech to text for their specialized domains.
From Prototype to Production
Once the Speech Lab development team had successfully prototyped an offline service running in a Docker container, the next step was for us, the Data Engineering team at AISG, to package the service and deploy it in a production environment. This meant scaling up the service as well as providing a demo platform for the public to access and try.
The original architecture of the prototype service is shown below.
As you can see, the pipeline is rudimentary and does not support multiple users.
The team set out to develop a new architecture with the following capabilities.
A service that runs 24/7 with minimum supervision.
Supports multiple speech-to-text models.
Able to scale up workers and process jobs in parallel as the workload increases.
Notifies the front-end system upon completion of processing.
Saves the results to cloud storage.
Uses CPUs only when a new request is initiated from the front-end.
To realize this, we made use of Kubernetes and Redis.
Kubernetes is a popular open-source platform used to automate the management of applications running on a Linux cluster. Applications deployed on Kubernetes can leverage its built-in features, such as application health checks, restart policies and more.
Redis is an in-memory database. The application leverages the lightning speed of Redis and uses it as a cache and queuing system.
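The queuing side of this pattern can be sketched in a few lines of Python. This is a minimal illustration, not the actual Speech Lab code: the queue key and payload fields are hypothetical, and the client is assumed to expose redis-py style `lpush`/`brpop` calls.

```python
import json

QUEUE_KEY = "speechlab:jobs"  # hypothetical queue name


def submit_job(client, payload: dict) -> None:
    """Push a job's metadata payload onto the Redis list acting as a queue."""
    client.lpush(QUEUE_KEY, json.dumps(payload))


def fetch_job(client, timeout: int = 5):
    """Block until a job is available, then return its decoded payload.

    Returns None if no job arrives within `timeout` seconds.
    """
    item = client.brpop(QUEUE_KEY, timeout=timeout)
    if item is None:  # timed out, no pending jobs
        return None
    _key, raw = item
    return json.loads(raw)
```

With redis-py, `client = redis.Redis(host=..., port=...)` provides exactly these methods. Because `lpush` adds at the head and `brpop` removes from the tail, jobs are processed first-in, first-out, and `brpop` blocks instead of busy-polling.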
After numerous discussions with the team members, we finally came up with a new architecture as shown below.
The new architecture runs on a cloud Kubernetes cluster configured with auto-scaling: when overall CPU usage rises above a threshold, a new node (VM) is automatically added to the cluster pool, and the number of nodes is automatically scaled down when the workload is reduced.
Redis is used as the ingress service for the front-end application. The front-end application pushes each audio file into cloud storage and then triggers the job to run in the cluster by submitting a metadata payload through Redis. Periodically, a listener reads from Redis and uses the Kubernetes API to create a job to run in the cluster.
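The listener's job-creation step can be sketched as follows. Everything here (the job name prefix, container image, and resource requests) is illustrative rather than the actual Speech Lab configuration; the point is that each metadata payload from Redis is turned into a one-off Kubernetes Job.

```python
def build_job_manifest(job_id: str, model_uri: str, audio_uri: str) -> dict:
    """Build a Kubernetes Job manifest for one transcription request.

    All names, images and URIs here are hypothetical examples.
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"stt-{job_id}"},
        "spec": {
            "backoffLimit": 2,               # retry a failed pod up to twice
            "ttlSecondsAfterFinished": 300,  # clean up finished jobs
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "worker",
                        "image": "registry.example.com/stt-worker:latest",
                        "args": ["--model", model_uri, "--audio", audio_uri],
                        # requesting CPU up front lets the cluster autoscaler
                        # add nodes when pending jobs cannot be scheduled
                        "resources": {"requests": {"cpu": "2", "memory": "4Gi"}},
                    }],
                }
            },
        },
    }
```

In a real listener this dict would be handed to the Kubernetes Python client, e.g. `kubernetes.client.BatchV1Api().create_namespaced_job("default", manifest)`.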
Each job that runs on the cluster is created based on the speech-to-text model requested by the user. The model is selected and loaded from cloud storage, which allows one model to be easily swapped for another through the cloud storage user interface. Another reason for storing the models in cloud storage is that new versions are introduced frequently; with this implementation, there is no need to bring down the application to upgrade to a new speech-to-text model.
A job will only start executing when there are sufficient resources in the cluster. Otherwise, it is put into a pending state and waits until the resources are available. After completing the processing, a transcript file is created and saved to the cloud storage. A callback mechanism is used to notify the front-end applications that the job has been completed.
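The callback step at the end of a job can be sketched as below. The payload fields and URL are hypothetical, and the HTTP poster is injected as an argument so the sketch stays testable; in production it would simply be `requests.post`.

```python
import json


def notify_frontend(post, callback_url: str, job_id: str,
                    transcript_uri: str) -> bool:
    """Tell the front-end that a job has finished.

    `post` is an injected HTTP function (e.g. requests.post).
    Returns True if the front-end acknowledged with a 2xx response.
    """
    payload = {
        "job_id": job_id,
        "status": "completed",
        "transcript": transcript_uri,  # where the result file was saved
    }
    resp = post(callback_url, data=json.dumps(payload),
                headers={"Content-Type": "application/json"})
    return 200 <= resp.status_code < 300
```

On a non-2xx response the worker could re-queue the notification, so a temporarily unavailable front-end does not lose the completion event.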
Each time a job completes a run, it frees up its resources. Deploying this architecture on the same production cluster which co-hosts other applications enables us to optimize the use of hardware resources.
By following industry standards in deploying containerized applications, Speech Lab Offline is able to minimize the cost of running workers 24/7.
A few weeks ago, AI Singapore hosted Andreas Deppeler, Adjunct Associate Professor at NUS Business School and Director of Data and Analytics at PwC Singapore, in a two-part webinar series for staff and apprentices. In four hours of lectures and Q&A, Prof Andreas walked the audience through the vast landscape of AI ethics and governance. In this article, I penned down the highlights of the sessions. If you prefer to go straight to the lectures, you can view the recordings at the end of the article.
What could go wrong with AI?
AI is a powerful technology that is finding increasing application in our lives. Drawing upon two primary sources – the work done by computer scientist Stuart Russell and the privately funded organisation Partnership on AI – Prof Andreas began with a comprehensive high-level look at where AI might cause harm, intended or unintended. This was followed by a series of documented cases where problems in explainability, bias and security have manifested themselves in applications involving AI. From the examples quoted, it is worth noting that even major technology players like Amazon and Apple have not been immune to such errors in their initial deployments.
Another area to pay attention to is the displacement of jobs due to AI automation. While experts generally agree that there will be disruption in the market, there is no consensus on the expected scale of it.
In the development of automated vehicles, the moral decisions that a machine has to make in collision avoidance and life preservation come under scrutiny. The Moral Machine experiment was an attempt to collect large-scale data on how citizens of different countries would want autonomous vehicles to resolve moral dilemmas in the context of unavoidable accidents. The results have been illuminating, as they revealed distinct regional cultural differences when it came to deciding who should be sacrificed and who should be saved.
Ethics: Drawing up the principles
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.
Norbert Wiener, 1960
While the concern that machines may not do what is “right” is not new, Prof Andreas traced the first serious conversation on safe and beneficial AI to the AI Safety Conference (2015) in Puerto Rico, a gathering of academics and industry players organised by the Future of Life Institute. The conference led to the publication of an open letter exhorting the development of AI that is not only capable but also maximises societal benefit. Since then, several non-profit organisations for safe and beneficial AI have been founded.
A second conference in 2017 in Asilomar, California, produced 23 principles covering wide-ranging themes in AI. Two years after that, the European Commission also presented its Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019, and the OECD followed suit just a month later with its OECD Principles on AI. At almost the same time, the Beijing Academy of Artificial Intelligence (BAAI) also published its Beijing AI Principles.
From these publications, researchers have identified five common themes or overarching principles of ethical AI: beneficence, non-maleficence, autonomy, justice and explicability. Interestingly, subsequent work found that the first four principles correspond with the four traditional bioethics principles, and they are joined by a new enabling principle of explicability for AI.
Governance: Operationalising the principles
Typically, the principles and guidelines published are non-legally binding but persuasive in nature. To date, the German non-profit organisation AlgorithmWatch has compiled more than 160 frameworks and guidelines for AI use worldwide. It found that only ten have practical enforcement mechanisms. There is a need to go beyond the PR nature of the guidelines and operationalise them. On a related note, five types of risks that are already encountered or foreseeable have been identified: (1) ethics shopping, (2) ethics bluewashing, (3) ethics lobbying, (4) ethics dumping, and (5) ethics shirking. These risks undermine the best efforts in translating principles into practices.
Ethically Aligned Design
In March 2019, the IEEE launched Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition (EAD1e). It is a global treatise crowd-sourced from experts in business, academia and policy makers over three years. At almost 300 pages, it is organised into three pillars (reflecting anthropological, political and technical aspects) and eight general principles (human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse and competence).
Prof Andreas spent some time diving deeper into the sixth general principle – accountability. This is especially relevant to developers as AI applications have been known to deviate from their intended use and will likely continue to do so on occasions, despite the best of intentions. The question of the legal status of accountability inevitably comes up. For example, government and industry stakeholders should identify the types of decisions and operations that should never be delegated to AI systems, among other discussion points.
In February 2020, the IEEE announced the completion of the first phase of its work on the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS). It aims to offer a process and define a series of marks by which organisations can seek certification for the processes around the AI products, systems and services they provide. This is a positive development, in Prof Andreas’ view, as he sees the possibility of Singapore contributing in this space.
Model AI Governance Framework
The Model AI Governance Framework published by the Personal Data Protection Commission (PDPC) is the framework that most developers in Singapore are familiar with. The second edition was released in January 2020 at the World Economic Forum Annual Meeting in Davos, Switzerland. It is voluntary in nature and provides guidance on issues to be considered and measures which can be implemented to build stakeholder confidence in AI and to demonstrate reasonable efforts to align internal policies, structures and processes with relevant accountability-based practices in data management and protection. It consists of two guiding principles ((1) AI that is explainable, transparent and fair; (2) AI that is human-centric) and four guidance areas ((1) internal governance structures and measures; (2) appropriate level of human involvement; (3) operations management; (4) stakeholder interaction and communication).
Open Source Tools
Beyond discussing principles, developers are most interested in the available tools that can help them in their work. IBM AI Fairness 360, IBM Explainability 360 and IBM Adversarial Robustness 360 are open source Python libraries from Big Blue. Similarly, Microsoft has Microsoft Fairlearn and Microsoft InterpretML. Developers can check them out and evaluate them for their own needs before developing their own Python packages.
Finally, here are the recordings of the two parts of the webinar series. Do catch the lively Q&A sessions at the end of each lecture when Prof Andreas fielded questions from our managers, engineers and apprentices.
Kidney failure is on the rise in Singapore with five new cases reported every day, and one new dialysis patient every five hours. Even more sobering than these statistics from the National Kidney Foundation is the fact that patients undergoing dialysis run a higher risk of hospitalisation. And by the time they are hospitalised, their mortality risk would have increased.
The ability to predict hospitalisation risk is therefore key to early medical intervention. But the current process is fuzzy and largely dependent on the experience of medical staff.
To tackle this problem, RenalTeam, a healthcare provider for dialysis treatment, teamed up with AI Singapore under the 100Experiments (100E) programme in January 2019 to find a way to predict the hospitalisation of patients.
With the vast amount of data collected from each patient before, during and after their dialysis, there was potential in using this data to train an AI model to carry out this task. The prediction from the model could then be used as decision support for the medical team.
To develop the model, the 100E team input established medical knowledge into the model and trained it to mimic the thought process of medical professionals.
Capturing established medical knowledge
Most medical readings are raw numbers which have to be translated into meaningful medical information. For example, there are guidelines on the healthy range for a particular parameter. An individual with blood pressure above 140/90 mmHg, for instance, would be classified as having hypertension. To give meaning to the raw blood pressure data, the numbers were converted into categories 0, 1, 2 or 3 for low, healthy, pre-hypertensive and hypertensive blood pressure, respectively.
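As a concrete sketch, the conversion might look like the following. The cut-offs are illustrative, taken from common clinical guidelines; the project's exact thresholds and encoding may differ.

```python
def bp_category(systolic: int, diastolic: int) -> int:
    """Map a raw blood pressure reading (mmHg) to an ordinal category.

    0 = low, 1 = healthy, 2 = pre-hypertensive, 3 = hypertensive.
    The thresholds below are illustrative, not the project's actual values.
    """
    if systolic >= 140 or diastolic >= 90:
        return 3  # hypertensive
    if systolic >= 120 or diastolic >= 80:
        return 2  # pre-hypertensive
    if systolic >= 90 and diastolic >= 60:
        return 1  # healthy
    return 0      # low
```

Because the categories are ordered, the model can learn that category 3 is "worse" than category 2, which a raw pair of numbers does not convey directly.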
Thinking like a doctor
The team consulted medical professionals to understand their thought processes when assessing the hospitalisation risk of patients. They found out that medical professionals typically looked at trends in the patients’ medical parameters to assess their health conditions.
The team then created moving averages out of continuous raw medical data to capture these trends and enable the AI model to learn from them, thereby mimicking the thought process of medical professionals.
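A trailing moving average is one simple way to turn a stream of raw readings into a trend feature. The sketch below is a generic illustration, not the team's actual feature pipeline; the window size of three is an arbitrary example.

```python
def moving_average(values, window=3):
    """Trailing moving average with the same length as the input.

    For the first few readings (fewer than `window` points seen so far),
    average over whatever has been observed, so no data point is dropped.
    """
    out = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

Feeding the model both the raw reading and its moving average lets it compare a patient's latest value against their recent trend, which is roughly what the nurses described doing by eye.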
During a month-long trial, the AI model achieved 36 percent better precision in predicting the hospitalisation of patients compared with the medical teams.
Making better data-driven decisions
The AI model has since been integrated into the systems at RenalTeam’s dialysis centres. It serves as a decision support tool for medical staff. Patients who are flagged out by the AI model are prioritised and re-assessed by senior nurses who will then decide if any medical intervention is needed.
Although a perfect AI model with 100 percent accuracy may never be achievable, the current model can be continuously improved with feedback from medical staff.
“We look forward to using and enhancing the AI model to help our care teams make better data-driven decisions to improve our care and reduce the hospitalisation of our patients,” said RenalTeam’s group managing director Chan Wai Chuen.
Click here for more details on the 100E Programme.
Product quality engineers at IBM spend an inordinate amount of time manually classifying the risk categories of products based on their return rates. Huge data volume, high data velocity, data veracity issues and poor quality data, together with the combination of structured and unstructured data, make this a formidable task.
An AI solution that had been implemented enabled IBM to achieve 80 percent accuracy in risk classification, but the company wanted to improve this by another 5-10 percent. And it managed to do this through a collaboration under AI Singapore’s flagship 100Experiments (100E) programme.
The 100E team comprised AI apprentices, AI engineers and project managers. Their aim was to develop an AI model that could classify the quality of hardware products more accurately as well as predict future product return rates.
Achieving over 90 percent accuracy
Using image processing, deep learning and time-series analytics, the team created an AI model that was able to achieve over 90 percent accuracy in product risk classification, and enabled them to predict the return rates of different hardware products. The engineers could thus make decisions on measures they could take to better manage the products.
With the deployment of the AI product risk classification solution in October 2019, IBM was able to reduce the training time for the AI model from 4 hours to just 15 minutes with IBM PowerAI. The time and cost savings were amplified by the fact that product engineers no longer had to sort through large volumes of data manually.
Julian Tan, a senior manager at IBM Analytics Solutions, was impressed by the significant improvements in the model’s performance and the minimum viable product that was delivered. “This collaboration has helped IBM solve our business problem as well as upskill our employees in AI,” he said. “Kudos to the entire AI Singapore team for their professionalism and insights into the world of AI.”
Click here for more details on the 100E Programme.
I caught up with Yvonne Soh, the co-founder and CEO of Noodle Factory, a homegrown startup with a mission to transform the way we learn and engage using conversational AI. Find out how she got started, the collaboration with AI Singapore and her plans for the company.
Below is a transcript of the conversation [*].
Basil : Thanks for coming, Yvonne.
Yvonne : Thanks for having me.
Basil : So, Yvonne, you are a self-described dreamer, dog lover, tech junkie and lifelong learner, among other things. Could you tell us even more about yourself? 🙂
Yvonne : Sure! Well, I think the first thing that comes to mind is that I’m not from a traditional IT or even entrepreneurship background. It all started back in school, where in university I actually studied philosophy. So that was my major.
Basil : Oh, so you came from a humanities background?
Yvonne : Yes, correct. So I liked to read, that’s one thing, but in any case, when I started my career, somehow I ended up in an IT career from the start. And I found it to be very interesting. It had a steep learning curve, and I started off doing more general functions like sales and marketing, but I actually went into a more technical function when I joined NCS (National Computer Systems) many, many years ago – I’m not very young now (laughs). And that’s my other point as well. I think a lot of the entrepreneurs today, especially in the tech scene, are pretty young, sometimes quite early in their careers, whereas I only started on this journey with Noodle Factory a couple of years ago.
Basil : So, what is Noodle Factory? I know that it is engaged in doing conversational AI or popularly known as chatbots. How did it all get started and what is its mission?
Yvonne : Good question. Firstly, I think a lot of people always ask me why the name Noodle Factory, and it links to why and how this got started, because actually, slightly over ten years ago, my co-founder and I started another company called Jam Factory which does training.
Basil : Jam as in the jam you spread on your bread?
Yvonne : Yes! (laughs) Jam Factory does corporate learning and development. We deliver customised, bespoke workshops and training to companies, essentially. And because both of us, my co-founder and I, are from a tech background, we’ve always been really into technology – we like technology and playing around with it. We’ve all attended so many trainings where we go in really energised, and then a few days later we kind of forget everything that we’ve learned. So that’s also how the name Jam Factory came about, because we wanted to produce sticky learning solutions.
One of the things that we always did was that with every training workshop, we always coupled it with technology. So, for example, in the earlier days we did things like e-learning, we even had mobile apps to complement the training. And in recent years, we actually started trying to use chatbots, because a better way to complement a training experience is to really extend the learner engagement with the facilitator or the trainer and the chatbot could ideally simulate that virtual mentoring role.
We started doing that about two years ago, before we started Noodle Factory, and then we found a problem. Chatbots are great, they can be conversational, but most of them are really bad because they’re very difficult to set up. Most of the chatbots today actually require you to think of the questions and come up with the answers, so you have to think of the entire user flow: what are the questions people could ask, and what are the answers you want to give? That’s actually where we came up with the idea for Noodle Factory, because we were thinking,
Why can’t we just use an NLP (natural language processing) algorithm to read content?
Most of the time, training has content written in some form or other, and that material can be used to train the chatbot to answer questions that users might ask. We started looking around and we found some off-the-shelf algorithms trained on a Stanford dataset. We played around with them, and they could understand basic English text reasonably well. That’s how we started Noodle Factory: coming out with a chatbot that is not just able to converse, but actually makes things so easy for the administrator to set up that your chatbot can be used not just for FAQs but for anything, including a learning environment.
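The ingest-and-answer idea Yvonne describes can be sketched in a few lines. The sketch below is a deliberately naive illustration (passages ranked by keyword overlap with the question), not Noodle Factory’s actual algorithm, which uses trained NLP models; all names and the sample document are made up for the example:

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-letters; a crude stand-in for real NLP tokenisation
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

def ingest(document):
    # Split the document into passages (here: sentences) and index their token counts
    passages = [p.strip() for p in re.split(r"(?<=[.!?])\s+", document) if p.strip()]
    return [(p, Counter(tokenize(p))) for p in passages]

def answer(question, index):
    # Return the passage sharing the most terms with the question
    q = Counter(tokenize(question))
    return max(index, key=lambda item: sum((q & item[1]).values()))[0]

# Hypothetical "internal policy" document an administrator might drag and drop
doc = ("Annual leave must be applied for at least two weeks in advance. "
       "Expense claims are reimbursed within ten working days. "
       "The office opens at nine in the morning.")
index = ingest(doc)
print(answer("How long do expense claims take to reimburse?", index))
# prints: Expense claims are reimbursed within ten working days.
```

A production system would replace the overlap scorer with a trained reading-comprehension model, but the ingest/index/answer pipeline shape stays the same.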
Basil : So what would the organisation or the user have to do to enable this chatbot? What are the easy steps they would have to take?
Yvonne : Our vision for it is that essentially, as an administrator, if you think about it, any company or even anybody will have documents – it could be your website, it could be your product manuals, it could be training slides – so what if you could just drag and drop any of this content that you want your chatbot to know? Essentially, in the background your chatbot is reading it, understanding it and becoming able to answer questions that your users ask.
Basil : This is very in line with your personal values as a lifelong learner. I think in today’s age, it’s not about getting the information – the information is there, it’s just overwhelming – it’s about getting the relevant information. I think this conversational AI really helps to narrow down the search space so that you get to the information that is relevant to you very quickly. So this is what you guys set out to do, right?
Yvonne : Yes, that’s exactly it. I mean, over time, I think everyone is changing, especially with technology, and if you look at it now, people’s attention spans are very short. If I go for a class that lasts two days, that’s actually a very long time, and that’s why you see a lot of times people are on their email, on their phones… They are not really paying attention, but yet they want this information. They only want it when they need it, anytime, anywhere. And the chat interface is actually a great way to engage users because, you know, sometimes it is good to read up and learn things, but sometimes the fastest way to learn something is to ask someone who has the knowledge, and that’s what we’re going for.
Basil : Yeah, I suppose, like in the last twenty years with the Internet becoming a part of everyday life, information is at the click of a button. I think we’re moving onto the next phase where we want an intelligent way of getting the information that we really need.
Yvonne : Yes.
Basil : Coming to AI Singapore. So, AI Singapore runs this 100Experiments (100E) programme. This is the flagship programme we run to help organisations in Singapore accelerate their AI journey. For example, we help them deploy AI to improve their productivity or even to introduce new services. I think these two points are covered by the services Noodle Factory provides in enabling other organisations to deploy conversational AI. I understand that last year (2019) you got onto the 100E programme of AI Singapore. How did this collaboration come about?
Yvonne : I guess it ties back to my lifelong learner personality, in a sense that, you know, when we started this company we knew there were a lot of things that we didn’t know about, but that didn’t really put us off because it actually excited us, it gave us an opportunity to learn in this space. And we knew that although this was possible, we weren’t really sure how we were going to do it. So, we were trying to find a way to be able to develop what we wanted to develop. We attended some of AI Singapore’s events, one of them being AI for Everyone, and it so happened that Laurence (Director of AI Innovation at AI Singapore) was the speaker at the event. I actually met Laurence more than twenty years ago when I was at another company. We worked together. At that time we were in a data centre working on physical servers together. And I think we were bidding for a deal together, and that was how I got to know him. So, when I saw him, I was like, “Hey!” We started chatting, and I told him a little bit about Noodle Factory and some of the challenges we were facing. He thought that it would be a good fit for the 100E programme, so we started to explore it a little more.
Basil : So as a result of that conversation, Noodle Factory and AI Singapore embarked on a seven-month collaboration which involved our AIAP, the AI Apprenticeship Programme, where apprentices – people who are relatively new to the AI space – get a chance to work on a real-world problem with real-world data. How was the experience working with the team at AI Singapore?
Yvonne : I would say that it’s been a really, really good experience. We started initial discussions in March. From then on, it was already very structured. We were told what kind of information we needed to provide, because obviously we were providing the data set from our years of experience with the training company that we’ve had, with Jam Factory. Also, what expectations we had, and then what we could expect of the programme, like who the apprentices were going to be and essentially how we would be engaging. Once it started, and I think we really kick-started the whole thing in May of last year, we had very good and frequent engagements. There were regular sprint reviews every three weeks, held at either location (Noodle Factory or AI Singapore), but beyond that there was also constant communication. And I think one of the good things about the programme is that it is not just research-based or meant to fulfil certain tasks that we set out at the start. Even over time we kept explaining what our business vision was and how it tied into our overall product and platform. So that was something that the team understood, including the apprentices, and there were always adjustments being made to make sure that we actually met the business objectives as well.
Basil : Seems like it was a very enjoyable collaboration. In fact, I caught up with two of the apprentices, and this was what they had to say about you.
Yvonne was very kind and encouraging throughout the engagement. I enjoyed working with her team and was able to learn a lot about modern NLP concepts thanks to the project.
This was from Meraldo. He is currently a data scientist at Aliz Technologies, a solutions provider for data warehousing. And Cheng Heng, another of the apprentices, had this to say:
Working with Noodle Factory was an eye opener, as I got to work on the project from start to deployment.
So this is like what you mentioned. It’s not just research – we are actually deploying real solutions out there. By the way, Cheng Heng is now employed at Wärtsilä, a Finnish firm providing solutions for the marine and energy sectors.
Yvonne : That’s good. I think the apprentices were very good. They were very enthusiastic. Obviously, they were all new to the field, but it didn’t feel that way. They were able to articulate and really pick up a lot of new things and contribute a lot to what we were trying to develop. And I think I can safely say that it was a two-way learning experience, which was great, because on the Noodle Factory side we really learned a lot about AI and how to go about the development process and the research that was done by AI Singapore. On the other hand, I hope that the apprentices from AI Singapore also learned not just the AI concepts, but how they apply to real-life scenarios as well.
Basil : So now that this has taken off, I must congratulate you, because I heard recently that Noodle Factory has won a proof-of-concept with HDB (Housing & Development Board). Could you tell us a bit more about this win?
Yvonne : Definitely. It’s a small win, but we are very happy with it, because one of the things we always try to steer away from is being seen as just a chatbot (company). Although sometimes you use the term conversational AI and sometimes the word chatbot, essentially a chatbot generally has quite a bad perception in the market. When you say chatbot, a lot of people are like “uh, no, I don’t need it”, and that’s because they’ve had very bad experiences with chatbots. A lot of times we go onto a website, we ask the chatbot something, it says, “sorry, I don’t understand” or it just throws back a bunch of links, and that’s actually what we wanted to steer away from. For us, it is not just an FAQ engine that can only respond to fixed things; what we actually see it being used for is almost like an information officer that knows everything, because it’s so easy to ingest information and automate that whole content curation process.
The reason we were so happy about the win with HDB is that they are actually using it in the way we really envisaged the platform being used. One thing is that HDB is already familiar with chatbots – they have chatbots on their website, so it is nothing new to them. And yet, they were looking for a solution where they could also use chat to engage their employees, which is what we always see our platform becoming. So, when they heard about the algorithm that we developed together with AI Singapore, which can automatically ingest documents and understand them, they were very keen to pilot it for internal use, to train their employees on internal policies.
Basil : So, this is about HR policies?
Yvonne : I think what they’re starting out with are finance policies.
Basil : Ah, okay. So there are a lot of very dry documents out there and you have to fish out the relevant information, so it really helps to have a conversational AI assist you with this.
Yvonne : Yes, and I think with the government there is a lot of information normally written in documents, but a lot of times you don’t really read them, or you read but don’t really register it. When you need the information, you have to look for someone to ask. So instead of asking your finance person or your HR person – they may get a lot of these repetitive questions – if they could automate that whole process by having an intelligent chat agent do it, that would be great.
Basil : So what are the other organisations that would find great benefit in implementing a conversational AI within their organisation?
Yvonne : I think most knowledge-based companies, for example any company that deals with technology, are actually a great place to start, because a lot of them have a lot of information – it could be product information, training information, just a lot of information. And the best thing about tech companies is that a lot of that information is already digitised, so we don’t have the problem of them only having paper documents or something like that. They could essentially just drag and drop any of their documents and have that chat agent trained with all the information; it becomes almost like an information officer within the company. Another case where we feel there’s a lot of use is education. We won a tender recently, late last year, with SIM (Singapore Institute of Management). They have two deployments of the chatbot. One is for use as more of a customer service kind of chatbot, which is a typical use case. The other one is very exciting for us: they’re going to use it to function almost as a virtual teaching assistant, and it’s going to help the lecturer teach economics to the students.
Basil : Oh, that’s interesting. I’ve not seen this particular use case before. That means that the lecturer will lead the discussion and the conversational AI will come in to assist in answering the students’ questions. Is that how it works?
Yvonne : No, I think the classroom engagement will not change. It will still be the lecturer giving a lecture. But a lot of times, I think students need engagement beyond the lecture itself.
Basil : Because the lecturer is only one person, right?
Yvonne : Yes.
Basil : You could have a hundred students out there and there’s no way the lecturer can have a one-to-one with all the students within a certain time frame.
Yvonne : Yes, correct. A lot of lecturers are very good – they actually do WhatsApp with their students – but it’s one lecturer to many, many students, which is not sustainable. So one of the things SIM has done is an internal study with their students, a focus group session on where they think a virtual teaching assistant could help. One of the main things the students said was common questions, for example explanations of concepts – what is the definition of this concept, maybe giving examples related to the concept, even doing little quizzes with them. So almost like a tutor kind of role. That’s how we are going to deploy the chatbot in their organisation.
Basil : Ultimately, the lecturer will still steer the education, right?
Yvonne : Yes.
Basil : I think this is a very good example of AI (use). When you mention AI, a lot of people fear that it will take their jobs, but actually it really is an assistant. As Laurence, our director, always says, “AI will not take away jobs, it will take away tasks”. If you can see it in this way, it will not replace your job, but it will replace people who are not able to incorporate AI within their job scope, because they will be less competitive than those who can. I think this is the direction that we’re going, and we want to educate the general public that AI is here to stay and here to help you do your job better.
Yvonne : Yes, exactly. In fact, it is here to help you do your job better, faster, and also to offload things that you actually don’t like to do. For example, if you are a lecturer, you probably don’t want to reply to the same question over and over again. (laughs) Same for an HR person or finance officer… or maybe marking quizzes, you know, that could be something the AI could help the lecturer with as well, yeah.
Basil : So, what are your future plans for Noodle Factory, now that you’ve come so far already?
Yvonne : I think one thing is that we’re always trying to improve the user experience. A lot of times we did start from the administrator standpoint: firstly, we wanted to make it easy, and that’s why we had the drag-and-drop capability. But the end user experience matters too, because the administrator will only be happy if his users are happy. He spends time setting up this chatbot, so he wants the users to engage with it and really like using it as well. So for us, a lot of the focus now is on the user experience, making it very easy. We’re actually in the process of designing a very good wizard that can enable you to set up a chat agent in minutes. One of the things that I didn’t talk about earlier is the name Noodle Factory. A lot of people ask why the company is called Noodle Factory. Apart from the fact that I like to eat and I like noodles, noodle is sometimes used as slang for brains. I don’t think it’s very commonly known, but essentially, that’s one of the aims, to really automate that whole process of developing your brains properly as well. Also, one of the slogans you might see on our website is that it is as instant as a cup of noodles. (laughs)
Basil : That was my impression, so you have a second explanation – like the brain? 🙂
Yvonne : Yes. (laughs) That’s really what we’re aiming for as well. With the wizard that we’re developing, as well as all the UX improvements, the time it takes you to make a cup of noodles (three minutes, essentially) will be the time it takes to set up the chatbot.
Basil : Oh, that’s really fast. So, for organisations out there, how should they contact you, so that you can assist them, look at their use case and see whether you can help them deploy a conversational AI in their organisations?
Yvonne : Yes. We have a freemium model that we recently launched. If they go to our website (https://www.noodlefactory.ai/home/), they can click on the Free Forever option and sign up for that. It gives them some limited functionality, but it is basically enough to set up a full-fledged chatbot on their website. We focus a lot on the end user experience, so we have a dedicated customer success manager who will follow up with clients on a regular basis to see how they’re doing with the set-up, how the user engagement is and how to improve on it. That’s available even for the free sign-ups as well.
Basil : Thanks for coming today, Yvonne! It has been really fun talking to you.
Yvonne : Thanks! I enjoyed myself too.
[*] This conversation was transcribed using Speech Lab. The transcript has been edited for length and clarity.