
AI Summer School 2020 Thrives in Virtual Setting with Innovative Tweaks

AI Summer School has grown, COVID-19 notwithstanding. More than 250 attendees from 30 countries took part in this year’s event, with leading experts in the field of artificial intelligence (AI) coming together to share their work and foster collaboration amongst the next generation of researchers. The latest edition, which took place from 3 to 7 August, follows the success of the inaugural event last year which attracted 142 attendees from 15 countries.

Held in a virtual setting because of the pandemic, AI Summer School 2020 featured innovative tweaks to the traditional format to enable students, academic researchers and industrial practitioners to explore exciting possibilities surrounding the use of AI in real-world application domains and raise awareness of data innovation challenges and issues.

Unconference Sessions

“The main difficulty this year was in providing opportunities for participants to interact,” said Dr Stefan Winkler, Chair of the Organising Committee and Deputy Director of AI Technology, AI Singapore. To address this, “Unconference Sessions” were held to give participants an opportunity to break into small discussion groups, with the flexibility to explore different groups based on their interests.

“The main aim was to facilitate the exchange and cross-pollination of ideas from a ground-up rather than a top-down approach,” he explained.

A general “Hangout” table was created to help participants navigate to topics they might be interested in, while individual topic tables gave participants the option to start or join a table with its own Zoom session and shared Google Doc.

Poster videos

Poster video sessions were also held, where participants shared videos of their AI-related work on YouTube and facilitated discussions on their projects by replying to comments posted on the platform.

Awards were presented for the three top poster videos. Christian Alvin H. Buhat from the University of the Philippines Los Banos received the nod for his animated agent-based model of COVID-19 infection inside a train wagon. The second award went to Jiafei Duan from the Artificial Intelligence Initiative at the Institute for Infocomm Research, A*STAR, for his video on ActioNet. This is a platform for task-based data collection and augmentation in 3D environments, which has the potential to catalyse research in the field of embodied AI.

The third video poster that caught the judges’ eye was M Ganesh Kumar’s presentation on schemas for few-shot learning, which involves feeding a learning model a very small amount of training data. Ganesh is from the Graduate School of Integrative Sciences and Engineering at the National University of Singapore (NUS).

DinerDash challenge

Another unique component introduced in AI Summer School 2020 was the DinerDash challenge, which was organised as part of the Reinforcement Learning workshop on Day 2. This is a game in which a single waiter makes complex decisions on customer seating arrangements, taking orders, serving food and more. Participants worked in small groups to test reinforcement learning baselines and competed with one another for the highest score in the DinerDash simulator.

For Ong Chi Wei, a Post-doctoral Research Fellow from the Department of Biomedical Engineering, NUS, this was the best experience of the summer school. “The key takeaway for me was the Reinforcement Learning (RL) Diner Dash challenge. It was well organised and interesting. We were required to submit our proposal using reinforcement learning on the same day the problem was posted. I learnt from my teammates and we managed to solve the problem by testing different algorithms. Overall, the challenge made us think creatively about how to work as a team to solve AI problems.”

Distinguished alumni

Adding gravitas to Summer School were presentations by experts in the field of AI.

In a keynote on “AI @ Scale – Trends and Lessons Learnt from Large-scale Machine Learning Projects”, Dr Tok Wee Hyong, Principal Data Science Manager at Microsoft Corporation, shared his insights into key trends in machine learning and deep learning, grounded in practical experience evolving AI ideas from proof of concept to production at some of the world’s largest Fortune 500 companies.

Dr Tok, who is with the AzureCAT team in Redmond, was one of three overseas-based speakers, all alumni of NUS and Nanyang Technological University (NTU) graduate schools who have gone on to carve out distinguished careers in the field of AI. The others are fellow Singaporean Dr Yi Tay, a research scientist at Google AI, Mountain View, and Dr Trọng-Nghĩa Hoang, a research staff member at the MIT-IBM Watson AI Lab, IBM Research Cambridge.

Personalised learning at scale

The second keynote at the event was delivered by Prof Zhai Chengxiang, Donald Biggar Willett Professor, University of Illinois at Urbana-Champaign. His presentation on “AI for Education: Towards Personalised Learning at Scale” highlighted the exciting opportunities for applying AI techniques to transform education to make it both more affordable and more effective.

In other sessions, speakers shared their work in areas such as federated learning, self-supervised deep learning, multi-agent interaction, Gaussian processes and low-resource machine learning, and also covered AI applications in sectors such as healthcare. Additional important aspects of AI such as ethics and governance were discussed too, as were career-related topics such as job hunting and entrepreneurship.

The plus side of a virtual camp

Looking back on AI Summer School 2020, Dr Winkler felt the virtual format had its advantages. “We could not hold any social events, such as the buffet dinner and Night Safari outing that we had last year, but on the plus side, we could offer much lower registration fees, and open up the school to a larger number of people with no auditorium size constraints.”

New AI Makerspace at Singapore Polytechnic

To meet the rising local demand for Artificial Intelligence (AI) skills and to assist local industries in their digital transformation, Singapore Polytechnic (SP) through its Data Science and Analytics Centre (DSAC) has collaborated with AISG to set up an AI Makerspace on the Dover Road campus.

The new AI Makerspace, a satellite node of AISG’s existing Makerspace, will give SP students, including those from the Diploma in Applied AI & Analytics, the opportunity to intern and be mentored by AISG engineers and DSAC staff, and to leverage AI Bricks to build AI solutions.

As part of the collaboration, DSAC will work with AISG to offer relevant training courses as well as AI Clinics for Small and Medium Enterprises (SMEs). Through the courses and clinics, employees and owners of SMEs will gain a better understanding of AI and the Makerspace’s AI Bricks, and learn to harness AI solutions to increase productivity and business opportunities.

The latest AI Makerspace is timely as the Services and Digital Economy Technology Roadmap cites AI as one of the key technology areas that will change the world and take Singapore’s economy forward in the coming years. As Singapore embarks on its Smart Nation journey, SP is glad to partner AISG to equip our youths and companies with much needed technological skills that will help our industries transform.

Ask JARVIS – The Personalised AI Agent for DHL Care

Prototype developed for the AI in Health Grand Challenge helps pave the way for predictive care, personalised care and patient empowerment

What is the likelihood of a Diabetes, Hypertension and hyperLipidemia (DHL) patient developing complications over the next five years, and what are the factors that contribute to this risk? A JARVIS-DHL prototype developed for the AI in Health Grand Challenge has the answers.

Launched in June 2018, the AI in Health Grand Challenge seeks to explore how AI technologies and innovations can help solve important problems faced by Singapore and the world. The focus was on healthcare, and the challenge was on how AI can be used to help primary care teams stop or slow disease progression and complication development in 3H (hyperlipidemia, hyperglycemia, hypertension) patients by 20 percent in five years.

It is estimated that 3H is present in up to 20 percent of the adult population in Singapore, a figure that will rise with an ageing population, leading to an increase in healthcare spending and impacting the quality of life of those affected.

About JARVIS-DHL

JARVIS-DHL is one of three proposals that were awarded funding. It is led by researchers from the NUS Institute of Data Science (IDS), in collaboration with the SingHealth Health Services Research Centre (HSRC), Singapore National Eye Centre (SNEC), National Heart Centre Singapore (NHCS) and Duke-NUS. JARVIS-DHL aims to build a consolidated AI platform that can improve the 3H care delivery process by facilitating the practice of evidence-based personalised care and shared decision-making.

The researchers’ focus was on transforming local DHL primary care through the following three-pronged approach:

  • From reactive to predictive care by enabling accurate predictive stratification of DHL patients
  • From “one-size-fits-all” to personalised care by enabling customised care based on local and individual contexts
  • From passive to active patients by enabling patient education, self-care and monitoring

Benefits to Primary Care Teams

For primary care teams, early screening and risk stratification enables them to right-site care for 3H patients instead of relying on the reactive event-driven sequential referral model. This allows patients to spend less time in healthcare institutions, and also enables healthcare resources to be put to optimal use.

By facilitating evidence-based personalised care and shared decision-making, JARVIS-DHL also enables primary care physicians to work with patients to increase treatment adherence. For example, the system can recommend evidence-based treatment options, quantify personalised treatment benefits and the risk of complications, and adapt the treatment regimen to the patient’s lifestyle. This helps alleviate the patient’s anxiety over perceived side effects and supports holistic clinical decision-making.

Benefits to Users

Through the use of technologies for patient education, self-care and monitoring, patients are empowered to take ownership of their healthcare journey beyond their visits to the clinic, supporting shared decision-making with primary care physicians.

12-month report card

The team obtained access to local clinical datasets pertinent to their research and went on to develop the prototype for JARVIS-DHL, a consolidated AI platform which can be used to improve the care delivery process by facilitating evidence-based personalised care and shared decision-making.

The prototype incorporates a diabetes risk calculator that computes the risk profiles of DHL patients who are likely to develop complications over a five-year period. The system gathers local primary care data as well as healthcare and lifestyle tracking data to create AI algorithms and models that can help identify at-risk patients. It identifies the specific factors that contribute to their risk and stratifies patients into various risk groups for the delivery of predictive care.
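The stratification step can be pictured with a toy sketch: train a classifier on patient features, then bucket its predicted five-year complication probabilities into risk tiers. This is a generic illustration, not the JARVIS-DHL model; the features, synthetic data and tier thresholds below are all hypothetical.

```python
# Toy risk-stratification sketch: fit a classifier on patient features,
# then bucket predicted complication risk into tiers.
# All features, coefficients and thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
# Hypothetical features: age, HbA1c (%), systolic BP (mmHg), LDL (mmol/L)
X = np.column_stack([
    rng.normal(55, 10, n),
    rng.normal(7.0, 1.2, n),
    rng.normal(135, 15, n),
    rng.normal(3.2, 0.8, n),
])
# Synthetic outcome: complication within five years, generated from a
# made-up logistic relationship so the classifier has something to learn.
logit = 0.04 * (X[:, 0] - 55) + 0.8 * (X[:, 1] - 7.0) + 0.03 * (X[:, 2] - 135) - 2.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]          # predicted 5-year risk per patient
tier = np.digitize(risk, [0.1, 0.3])         # 0 = low, 1 = medium, 2 = high
print(np.bincount(tier, minlength=3))        # patients per risk tier
```

In a real deployment the model, features and cut-offs would of course come from validated clinical data rather than synthetic draws.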

Next Steps

Whilst advancing AI research is a key goal of the AI Grand Challenge, one of the important takeaways for the team was the need to balance the aspirations for cutting-edge AI research against its practical impact in clinical applications.

With this in mind and as they approach Stage 2 of the development, the team has adopted a balanced approach that will deliver practical real-world impact as it validates and refines its AI model for deployment in clinics.

For more details, please visit https://aisingapore.org/grand-challenges/awardees-jarvis-dhl/

 

About the Team

Lead Principal Investigator: Prof Wynne Hsu (NUS)

Co-Principal Investigators:

  • Professor Ng See-Kiong (NUS)
  • Professor Lee Mong Li (NUS)
  • Associate Professor Chee Yong Chan (NUS)
  • Professor Wong Tien-Yin (SingHealth)
  • Professor Marcus Ong Eng Hock (SingHealth)
  • Associate Professor Tan Ngiap Chuan (SingHealth)
  • Dr Teh Ming Ming (SingHealth)
  • Adjunct Associate Professor Yeo Khung Keong (SingHealth)

Host Institution: National University of Singapore (NUS)

Partner Institution(s): SingHealth Group (SingHealth)

In 2019, the team published papers at top international AI conferences such as the Conference on Computer Vision and Pattern Recognition (CVPR), the IEEE International Conference on Image Processing (ICIP), and the IEEE International Conference on Tools with Artificial Intelligence (ICTAI). It has also received a request from the American Diabetes Association (ADA) to feature JARVIS-DHL in the association’s Thought Leadership Film Series.

The AI in Health Grand Challenge

The AI in Health Grand Challenge is a five-year, two-stage programme with a total funding quantum of $35 million. AI Singapore, together with an International Review Panel, selected three projects to be awarded Stage 1 funding of $5 million per project for the first two years. The projects focused on applying AI technologies in innovative ways across the continuum of 3H (hyperlipidemia, hyperglycemia, hypertension) care.

Taking A Leap of Faith From Cancer Biology to AI

As a PhD student in cancer biology, Simon Chu took a huge leap of faith when he dived into artificial intelligence (AI) without any formal background in computer science or mathematics.

The odds seemed stacked against him. “I knew that it would be difficult to get into an AI role with my background,” he said.

To get a foothold in the field, he signed up for the AI for Industry (AI4I) programme – a self-paced, self-directed learning programme which gave him a year of access to DataCamp, an online learning resource for data science and analytics.

“I studied religiously on DataCamp, completing one to two courses per day. I completed the coursework requirement for AI4I fairly quickly, and I went beyond that to further enhance my knowledge with other courses on DataCamp,” he recalled. As part of the old requirement for AI4I, he had to attend two face-to-face workshops. The first was actually a talk on AI for Everyone (AI4E). The session was not too technical, and he learnt about the product development cycle from it.

And although he did not make it through his first attempt at the AI Apprenticeship Programme (AIAP), the knowledge he accumulated gave him the confidence to apply once again.

A different lens

Today, Simon is well on the way to completing the AIAP, and the experience has been an eye-opener for him. He finds that AI presents a different lens for understanding data and the ways in which the world works. Instead of the hypothesis-driven approach central to biology experiments, the AIAP challenges him to let the data tell the story, rather than find data to support a story.

His passion for AI has also grown as he developed a firmer grasp of ways to develop his own AI models. He is currently working on a Singlish language model in the field of Natural Language Processing.

Despite the progress he has made, family and friends still ask him why he chose to go into AI after spending almost a decade in the field of biology. His answer is that biology and AI are not mutually exclusive options, and he firmly believes that his years in biology have not gone to waste.

Bilingual in Biology and AI

As he picks up skills in AI, he understands that AI is a way of dealing with data, and that it needs to be applied within the context of domain knowledge. In this regard, a biology background enables him to “speak both languages” and there will be opportunities for him to return to it and apply his AI skills, he said.

He also emphasises the importance of staying “teachable”. In an industry that is evolving rapidly, where research papers written three to five years ago could already be outdated, passion for AI has to be accompanied by a willingness to keep learning, he said.

Sharing his experience with others who are planning to re-apply for the AIAP, he said, “When you have a sense of what the technical assessment/interview is like, you know exactly what you are lacking in terms of skills and knowledge. So work on improving those areas.”

Besides working on technical skills such as coding and machine learning concepts, it is also important to understand the product development cycle. “Attend talks and workshops organised by AI Singapore and other parties; they might be helpful,” he advised. “Don’t give up! If a biologist like me can do it (eventually), so can you.”

If you are keen to prepare for the AIAP, click here (Becoming an AI Apprentice – Field Guide)


Making Inroads into A Male-Dominated World

Traditionally, the field of artificial intelligence (AI) has been dominated by men. When the AI Apprenticeship Programme (AIAP) was first launched in May 2018, only two women took up the gauntlet. Since then, however, a growing number of women have been proving their mettle in the field. Among them is Fiona Lim, one of 19 women who have joined the programme to date, and who has since graduated from it.

First-hand experience

For Fiona, it all began in March 2019 when she was working as a data analyst at a consulting firm. Fiona had graduated from the National University of Singapore with a degree in Statistics but felt the need to build a stronger technical foundation for her role. Looking around for suitable online courses that could help her, she came across AIAP, which is run by AI Singapore.

The nine-month programme presented her with an opportunity that she could not pass up – a chance to do a deep dive into AI concepts through self-directed learning, learn alongside passionate mentors and peers, and apply the knowledge to a real industry project.

The end-to-end project would provide her with first-hand experience not only in developing AI models, but also in building the data pipeline and deploying it as an application programming interface (API).

In the beginning, Fiona found the going tough, especially given her lack of experience in programming. However, her knowledge of statistics came in handy. “I was able to grasp the learning content quickly, and the challenges and obstacles along the way actually motivated me because ultimately, what I wanted to take away was the learning experience,” she said.

In the process of trying to understand how an industry expert thinks and finding the best model that can automate part of human work, Fiona was also introduced to machine learning and deep learning models. This stoked her interest in research.

Reaching out to more women

After completing the AIAP in December 2019, Fiona started work as a research assistant in the field of Natural Language Processing at Nanyang Technological University. She hopes to use her new-found knowledge to one day build a machine-learning product that can help people communicate better, especially the elderly who may not be fluent in English.

She would also like to see more women joining the field of AI. At AI Singapore, she was given the opportunity to present to visiting guests and to share her experiences with female students through community involvement projects. She has also spoken with women who reached out to her on LinkedIn to find out more about AIAP, and is a member of Women Who Code and Coding Girls, online communities where women share their experiences and give each other tips on conducting presentations, carrying out AI conversations and surviving in the AI world.

For women who are keen to explore the field of AI, Fiona encourages them to give it a try, and not to be afraid to seek help when they come across difficulties. “There are plenty of people out there who are very willing to share their experiences and help you out. As long as you have the right attitude, never give up learning and always give it your best, everything else will follow.”

Link up with Fiona.

If you are keen to prepare for the AIAP, click here (Becoming an AI Apprentice – Field Guide)

Discovering the Science behind Hyperparameter Tuning

Companies hire large teams of data scientists to manually tune the hyperparameter configurations of deep learning models. These hyperparameters control the learning process, and tuning them is extremely tedious and time-consuming, since assessing each configuration requires training the model to know its resulting performance.

For Bryan Low, an associate professor at the National University of Singapore’s Department of Computer Science, the burning question is: “Can we transform this process of optimising the hyperparameters of a machine learning (ML) model into a rigorous ‘science’?”

Prof Low is intrigued by this possibility, which will free up data scientists to work on results analysis and other more meaningful tasks. It also dovetails with his wider research vision, which is to enable “learning with less data”.

The quest for answers led him to delve deeper into the area of automated machine learning (AutoML), specifically Bayesian optimisation algorithms, which help simplify and quicken the search for optimal settings by identifying which parameters are dependent on one another.

Tackling the fundamental questions

“Traditionally, it is considered an ‘art’ to tune the hyperparameter configurations of deep learning and ML models such as learning rate, number of layers and number of hidden units, so as to optimise their predictive performance,” explained Prof Low.

To transform this into a science, several fundamental questions had to be tackled. For example: How can Bayesian optimisation be scaled to handle a large number of hyperparameters and large batches of hyperparameter queries? How can auxiliary information be exploited to boost its performance? How can Bayesian optimisation be performed under privacy settings?

In seeking answers to these questions, one of the interesting things that Prof Low uncovered was that AutoML/Bayesian optimisation tools can have many applications beyond the hyperparameter optimisation of ML models.

“There are many complex ‘black-box’ problems to which Bayesian optimisation can be applied, to reduce the number of costly trials/experiments needed to find an optimal solution,” he noted. Examples include optimising properties in material or battery design, optimising the environmental conditions for maximising crop yield, the performance of adversarial ML, and single- and multi-agent reinforcement learning.
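At its core, Bayesian optimisation alternates between fitting a probabilistic surrogate to the configurations evaluated so far and querying the point that maximises an acquisition function such as expected improvement. The sketch below is a generic single-hyperparameter illustration, not Prof Low’s method; the toy objective and search range are invented.

```python
# Minimal Bayesian optimisation loop over one hyperparameter (learning
# rate, searched on a log scale). Fit a Gaussian process to observed
# (configuration, score) pairs, then query the candidate with the
# highest expected improvement. Illustrative only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def black_box(lr):
    """Stand-in for an expensive training run; returns a validation score."""
    return -(np.log10(lr) + 2.0) ** 2  # toy objective, peaks at lr = 1e-2

def expected_improvement(X_cand, gp, best_y):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

# Search log10(learning rate) in [-5, 0], starting from two initial queries.
X = np.array([[-5.0], [-0.5]])
y = np.array([black_box(10 ** x[0]) for x in X])

candidates = np.linspace(-5, 0, 200).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  alpha=1e-6, normalize_y=True)
    gp.fit(X, y)
    nxt = candidates[np.argmax(expected_improvement(candidates, gp, y.max()))]
    X = np.vstack([X, nxt])
    y = np.append(y, black_box(10 ** nxt[0]))

best_lr = 10 ** X[np.argmax(y), 0]
print(f"best learning rate found: {best_lr:.4g}")
```

Each loop iteration stands in for one full (and in practice costly) training run, which is why the acquisition function is used to spend those runs as informatively as possible.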

Multi-party machine learning

More recently, Prof Low has embarked on another line of research to achieve his vision of “learning with less data”. He is working on multi-party machine learning, where a party with some data tries to improve its ML model by collaborating with other parties that also hold data.

There are two key challenges involved in this. The first lies in having to combine heterogeneous black-box models without any knowledge of their internal architecture and local data, in order to create a single predictive model that is more accurate than its composite models.

One way to address this is to find a common language to unite the disparate models. This paves the way for the creation of a surrogate model from the different machine learning models, and has the potential to elevate machine learning to another level by combining multiple models to harness their collective intelligence.

The second challenge lies in trusted data sharing and data valuation, where Prof Low and his research team ask questions such as: “How can multiple parties be incentivised to share their data? How do we value their data?”

In this pioneering work, Prof Low has introduced a novel and intuitive perspective: a party that contributes more valuable data will receive a more valuable model in return (instead of a monetary reward). To achieve this, formally defined incentives such as fairness and stability have been adapted from cooperative game theory to encourage collaboration in machine learning.
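A classical fairness notion from cooperative game theory is the Shapley value, which pays each party its average marginal contribution across all coalitions. The sketch below illustrates this style of data valuation; it is not the team’s actual scheme, and the coalition accuracies are made up.

```python
# Exact Shapley values for a tiny three-party data-sharing game.
# The "value" of a coalition is the (hypothetical) accuracy of a model
# trained on the coalition's pooled data.
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Exact Shapley value of each player under coalition function `value`."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for coalition in combinations(others, r):
                # Weight of this coalition in the Shapley average.
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[p] += w * (value(frozenset(coalition) | {p})
                               - value(frozenset(coalition)))
    return phi

# Hypothetical accuracies for every subset of parties A, B, C.
acc = {frozenset(): 0.50, frozenset("A"): 0.70, frozenset("B"): 0.65,
       frozenset("C"): 0.55, frozenset("AB"): 0.80, frozenset("AC"): 0.72,
       frozenset("BC"): 0.68, frozenset("ABC"): 0.82}

phi = shapley(list("ABC"), lambda s: acc[frozenset(s)])
print(phi)  # party A, whose data helps most, gets the largest share
```

The Shapley values sum to the total gain of the grand coalition over the empty set, so a party whose data lifts accuracy the most is credited accordingly; in Prof Low’s setting that credit would translate into a better model rather than money.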

His research journey

For Prof Low, research can be described as a hobby – one that he has been pursuing for nearly two decades. During his final year as an undergraduate, it even replaced gaming as something that he would “naturally indulge in”, and he has not looked back since.

The field of AI/ML has likewise powered on. Prof Low remembers that when he first presented at the AAAI (Association for the Advancement of Artificial Intelligence) conference back in 2004, there were only 453 papers submitted for review. This year, there were 7,737.

Indeed, as his passion for research continues to burn, his chosen field of AI/ML has gone “from cold to scorching hot”.

Imbuing ML with Human-like Intelligence

How can a financial fraud detection model trained in one country be applied in another? How does mastery of C++ lead to rapid mastery of Java and C#?

Associate Professor Sinno Jialin Pan from Nanyang Technological University cited the first as an application of transfer learning, and the second as an analogy to explain how he would like to bring it forward.

Prof Pan believes that a machine can be said to be intelligent only if it has the ability to transfer learning. This is because the ability to learn and transfer skills or knowledge to new situations or contexts is a particularly strong aspect of human intelligence.

He first heard the term “transfer learning” in 2006 as a PhD student working on a Wi-Fi-based indoor localisation system using machine learning (ML) techniques. It referred to an ML paradigm motivated by human beings’ ability to transfer learning.

Guided by intuition

Intuition told Prof Pan that transfer learning could hold the answer to the Wi-Fi localisation problem he was working on. When doing experiments, he found that the distributions of Wi-Fi signals changed over time due to the dynamic environment and the use of different mobile devices. To ensure that a localisation system performs accurately, he had to figure out how to adapt a machine-learning-based model to the changing environment and different types of mobile devices.

Prof Pan set out to develop general transfer learning methodologies that would give machines the ability to learn by transferring knowledge across different tasks automatically. Unlike heuristic transfer learning methods which are designed for specific applications (such as image classification, sentiment classification, etc.), general transfer learning methodologies require two fundamental research issues to be addressed. They are: How to automatically measure the “distance” between any pair of domains or tasks, and how to design learning objectives based on domain/task-invariant information derived from the measurable distance.

Through his research, he found that kernel embedding of distributions was ideal for measuring the distance between domains or tasks. Based on this non-parametric technique, he developed several transfer learning methods to train a model on domain/task-invariant information and build a bridge between different domains/tasks for knowledge transfer.
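A widely used instance of kernel embedding of distributions is the maximum mean discrepancy (MMD), which measures the distance between the mean embeddings of two sample sets in a kernel feature space. The toy sketch below, which is not Prof Pan’s exact formulation, shows the MMD assigning a larger distance to a more strongly shifted domain.

```python
# Toy illustration of measuring the "distance" between domains with the
# maximum mean discrepancy (MMD) under an RBF kernel. The three sample
# sets stand in for, say, Wi-Fi signal readings from different devices.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Pairwise RBF kernel matrix between rows of a and rows of b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Squared MMD between samples x and y (biased estimator)."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
source  = rng.normal(0.0, 1.0, size=(500, 2))  # original domain
similar = rng.normal(0.1, 1.0, size=(500, 2))  # slightly shifted domain
shifted = rng.normal(2.0, 1.0, size=(500, 2))  # strongly shifted domain

print(mmd2(source, similar), mmd2(source, shifted))
```

Because the estimator is computed purely from samples, no parametric model of either distribution is needed, which is what makes kernel embeddings attractive for comparing arbitrary domains or tasks.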

Transfer learning in fraud detection

One of the many potential applications of this was in fraud detection. Prof Pan noted that ML techniques have been widely used to capture patterns in customers’ behaviours and build fraud detection models based on historical data. However, as behaviours are region-dependent, a fraud detection model trained with historical data from one region or country may fail to make accurate detections in another region or country.

At the same time, it requires a lot of historical data to train an accurate fraud detection model, and this may not be available in, for example, a new market. In this case, transfer learning is a promising technique to help adapt a well-trained fraud detection model to new regions or countries with only limited historical data.

But Prof Pan is still not satisfied. “Though many promising transfer learning methods have been developed, most existing methods fail to accumulate knowledge when performing transfer learning,” he said. In other words, for each specific transfer learning task involving a specific pair of domains or tasks, the transfer learning procedure has to be run from scratch.

Reuse of knowledge

What Prof Pan is now embarking on is an attempt to develop a continual transfer learning framework, where the machine gets “smarter” and “smarter” after solving more and more transfer learning tasks. He likens this to a computer science student spending six months to master the C++ programming language. After that when he/she wants to learn the Java programming language, he/she may only need to spend less than three months to master it.

If the student further wants to learn the C# programming language, he/she may only need to spend days to master it. “The reason behind this is that with the transfer of learning, the student’s understanding or knowledge of object-oriented languages becomes deeper after he/she learns Java, which also helps him/her to learn C# faster,” he explained.  

To translate this learning behaviour into transfer learning algorithms, the knowledge needs to be distilled and accumulated after performing each transfer learning task. A key research issue is how to represent knowledge in a more compact form after the “learning”, so that it can be refined and reused in the next transfer learning task. “In this way, knowledge can be accumulated, which makes machines’ transfer learning ability more powerful,” he said.

Understanding the Behaviour of Learning Algorithms in Zero-sum Games

In economic and game theory, zero-sum games are settings of perfect competition where the gain of one player is exactly equal to the loss of the other. But what happens in environments where many intelligent agents – human or artificial – interact with one another? Do these systems attain a state of equilibrium or do they become chaotic? And what are the conditions that influence these outcomes?

These are some of the questions that Georgios Piliouras, assistant professor of Engineering Systems and Design, Singapore University of Technology and Design (SUTD), is trying to answer through his work on multi-agent reinforcement learning in games.

For Prof Piliouras, the focus on this research area stems from his fascination with how complex phenomena emerge from simple components, such as neurons coming together to form the brain, an ant colony self-organising and building complex structures, or how the global economy works.

“In every one of these cases we can create pretty reasonable models of the behaviour of the individual constituents of these networks,” he noted. “But when we scale them up, the global emergent behaviour can, in many cases, be unexpected.” 

Unexpected chaos

Prof Piliouras’s objective is to create a robust and scalable theory of how learning algorithms behave in general decentralised environments. One of the standard classes of these environments is the zero-sum game which lies at the core of many recent artificial intelligence (AI) architectures.

An example is Generative Adversarial Networks, where two neural networks compete against each other. One of them, the Generator, tries to create realistic-looking images, whereas the other, the Discriminator, tries to predict whether the images presented are real-world images or synthetic ones. “By having the networks compete against each other, we can create AI that produces very realistic looking images,” explained Prof Piliouras.

The same mathematical concept lies at the core of AlphaGo and AlphaZero, the AI systems produced by DeepMind, which learned to master the game of Go through self-play.

However, Prof Piliouras's research found that many standard learning dynamics, such as gradient descent (an optimisation algorithm used to update the parameters of a machine learning model), are unstable and in fact chaotic in zero-sum games. This suggests that zero-sum games and other similar multi-agent settings can be far more complex than standard economic theory predicts.
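This instability appears even in the simplest possible zero-sum game. The sketch below (an illustrative toy, not drawn from Prof Piliouras's papers) runs simultaneous gradient descent-ascent on the bilinear game min_x max_y f(x, y) = x*y, whose equilibrium is (0, 0); rather than converging, the iterates spiral outward.

```python
# Simultaneous gradient descent-ascent on f(x, y) = x * y, where the
# x-player descends and the y-player ascends. The unique equilibrium is
# (0, 0), but the dynamics move away from it.

def gda(x, y, lr=0.1, steps=100):
    """Run simultaneous gradient descent (in x) / ascent (in y)."""
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy    # simultaneous update
    return x, y

x0, y0 = 1.0, 0.0
xT, yT = gda(x0, y0)
start_sq = x0 ** 2 + y0 ** 2
end_sq = xT ** 2 + yT ** 2
print(end_sq > start_sq)  # True: the iterates spiral away from (0, 0)
```

Each update multiplies the squared distance from the equilibrium by exactly (1 + lr^2), so the outward spiral happens for any positive learning rate; shrinking the step size slows the divergence but never removes it.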

New multi-agent AI architectures

To improve the performance of self-learning systems, Prof Piliouras is working to create learning algorithms that behave predictably and converge to equilibrium instead of behaving chaotically.
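One well-known family of such predictable dynamics adds an "optimistic" correction to each gradient step. The sketch below applies optimistic gradient descent-ascent (OGDA) to the bilinear game min_x max_y f(x, y) = x*y; this is a standard textbook example of a convergent learning dynamic, not necessarily the specific algorithm developed in Prof Piliouras's papers.

```python
# Optimistic gradient descent-ascent (OGDA) on f(x, y) = x * y. Each
# player extrapolates with the previous gradient (the "2 * g - g_prev"
# term), which damps the rotation that makes plain gradient
# descent-ascent spiral away, so the iterates converge to (0, 0).

def ogda(x, y, lr=0.1, steps=2000):
    gx_prev, gy_prev = y, x                # initialise with the first gradients
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x = x - lr * (2 * gx - gx_prev)    # optimistic descent step
        y = y + lr * (2 * gy - gy_prev)    # optimistic ascent step
        gx_prev, gy_prev = gx, gy
    return x, y

xT, yT = ogda(1.0, 0.0)
dist_sq = xT ** 2 + yT ** 2
print(dist_sq < 1e-4)  # True: the iterates approach the equilibrium (0, 0)
```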

To date, he has co-authored several joint papers with researchers from DeepMind to leverage these ideas and create new multi-agent AI architectures.

His research group also published five papers at the Conference on Neural Information Processing Systems (NeurIPS) in 2019, two of which were selected for spotlight presentations. The same year, the team received a best paper award nomination at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), the premier conference on multi-agent systems.

“Publishing in these top ML conferences provides a great opportunity for communicating our ideas to a wide audience and getting some valuable feedback,” said Prof Piliouras, who plans to keep probing deeper into the structure of multi-agent reinforcement learning in games.

“There are a lot of challenges and questions that we still do not quite understand especially when we have a large number of users and complex action spaces,” he said. “There is definitely a lot of exciting work to be done both on the theoretical as well as the experimental front.”  

Prof Piliouras counts himself lucky to have had the opportunity to collaborate with many brilliant researchers around the globe in the course of his work. “My research journey so far has been very rewarding,” he said. “I am happy with the progress we have made already on some of the fundamental questions in the area, and at the same time I am excited about where we are going next.”

Detecting Fraud in Travel and Personal Accident Insurance Claims

SOMPO’s AI Fraudulent Claims Detection system reduces the manual effort involved in the claims review process and helps expedite payments

Artificial intelligence (AI) is transforming the claims review process for travel and personal accident insurance. For insurers, it has the potential to deliver “100 percent fraud detection”. For customers, it means receiving claims payments more quickly, potentially within minutes.

Fraud has long been the bane of the insurance industry, and delayed claims payments a source of customer dissatisfaction.

The two problems are related. With some customers exaggerating or misrepresenting the severity of mishaps to increase their claims, insurance companies have their work cut out combing through new claims daily to detect fraud. The manual review process is tedious and time-consuming and affects the customer experience because it holds up simple claim requests that can otherwise be processed promptly.

To unplug this bottleneck, Sompo Holdings Asia (SOMPO), a market leader in Asia's non-life insurance industry, decided to turn to AI.

SOMPO collaborated with AI Singapore (AISG) under the 100Experiments (100E) programme to develop a fraud detection application that could automatically flag suspicious cases for further investigation and identify valid claims as candidates for straight-through payment.

The machine learning solution was trained to process, identify and rank suspicious travel and personal accident claims. Executed daily, the AI model gives each claim request a fraud score. Suspicious claims are then handed over to internal specialists for further investigation.
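The daily triage step can be pictured roughly as follows. This is a minimal sketch only: the scoring rule, feature names and threshold below are illustrative assumptions, not SOMPO's actual model.

```python
# Daily claim triage: score every new claim, send high-scoring claims
# to specialists (ranked by risk) and route the rest for
# straight-through payment. All names and numbers are illustrative.

FRAUD_THRESHOLD = 0.7  # assumed cut-off for "suspicious"

def triage(claims, score_fn, threshold=FRAUD_THRESHOLD):
    """Split claims into (suspicious, straight_through) queues."""
    suspicious, straight_through = [], []
    for claim in claims:
        score = score_fn(claim)
        target = suspicious if score >= threshold else straight_through
        target.append((claim["id"], score))
    suspicious.sort(key=lambda item: item[1], reverse=True)  # riskiest first
    return suspicious, straight_through

def toy_score(claim):
    """Hand-written stand-in for the trained machine learning model."""
    score = 0.0
    if claim["amount"] > 5000:        # unusually large claim
        score += 0.5
    if claim["days_to_report"] > 30:  # reported long after the incident
        score += 0.3
    return min(score, 1.0)

claims = [
    {"id": "C1", "amount": 800, "days_to_report": 2},
    {"id": "C2", "amount": 9000, "days_to_report": 45},
]
suspicious, straight_through = triage(claims, toy_score)
# C2 is flagged for investigation; C1 goes to straight-through payment.
```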

90% improvement in fraud detection

It took just seven months for the AI Fraudulent Claims Detection system to go from conceptualisation and development to pilot testing. In June 2020, the system went live at Sompo Insurance Singapore, one of SOMPO’s subsidiaries.

It has proved its mettle, taking away much of the grind involved in claims reviews whilst delivering a 90 percent improvement in fraud detection. At least 20 percent of customers also had their claims processed and payment disbursed much more quickly. In the near future, it will be possible for this to be done within minutes.

The improved workflow also enables the company to focus more on talent development. Instead of getting employees to manually review claims, they can now use their time more effectively to hone their skills in analysing and investigating suspicious cases.

These breakthroughs have not gone unnoticed by the industry. At the Singapore Business Review’s Technology Excellence Awards in June 2020, the AI Fraudulent Claims Detection system received the AI Award for General Insurance.

Improving Product Quality Engineering with AI

IBM is a major player in the mainframe market. I had a remote chat with IBM data scientist Liu Lu about how AI Singapore and IBM recently collaborated to build a solution that helps IBM's product quality engineers improve product quality classification in the mainframe product line by making better use of their data.

Below is a transcript of the conversation [*].

Basil : Hi, Liu Lu. Thanks for being here today.

Liu Lu : Hello, thank you.

Basil : So, Liu Lu. You work as a data scientist in IBM. This sounds like a really cool job. How did you arrive at this role and what is a typical day like for you?

Liu Lu : I joined IBM as an intern in 2014 and became a regular employee in 2015, so technically this is my sixth year at IBM as a data scientist. In my current role, I work with domain experts to solve business problems by developing AI solutions for IBM’s supply chain.

Basil : I know that IBM is a pretty big organisation, so let’s just focus on the part of it involved in the 100E in which, of course, you played an important role. I understand that it involved the manufacture and delivery of products in IBM’s mainframe product line. So what was the problem that you guys wanted to solve?

Liu Lu : Well, IBM is a very old company and has been making mainframes since the 1960s. We provide our customers with one of the most reliable platforms for mission-critical hybrid multi-cloud environments, offering a cloud-native experience. Our warranty on maintenance and storage systems can extend from five years up to ten years, depending on the customer's warranty terms and agreements, so we need to guarantee the reliability of our storage products. In this case, our client was the IBM engineering team. We are responsible for supply quality management: we drive quality improvements, establish quality metrics to review supply performance, perform root cause analysis and provide effective corrective actions for quality issues.

Basil : So, the applications are very, very critical applications, right?

Liu Lu : Yes.

Basil : And the whole process, I suppose, must be very data rich and just waiting for a data analytics solution where applicable, right?

Liu Lu : Yes, we had two major challenges. Quality engineers have to deal with large data volumes and high data velocity, as well as data cleanliness issues, and the data now comes in different formats, both structured and unstructured. Information is booming: we are getting more and more of it, but at the same time we risk being overwhelmed by it. So how can we enhance our engineers' capability to deal with all this information? Quality problem detection is really critical for our engineers, and we wanted to improve detection accuracy and efficiency so that proactive actions can be taken to prevent client loss.

Basil : How about the collaboration with AI Singapore? It was a 100E project, so we had a team of engineers and apprentices who worked with you. How did that come about?

Liu Lu : So, in this case we wanted to design an AI system to augment human capabilities, and the business problem was to identify product quality issues and reduce the investigation time from one week to one hour. To achieve this goal, we needed to know what took engineers so long to do the investigation. Our engineers' job requires them to track product performance, and at IBM we had already done a lot of work to automate the data processing, data visualisation and so on. But we found that there were too many charts for our engineers to view to find out which parts were having problems. Then we came up with this idea: why not train an engine to help engineers read the charts and classify the products into different categories? This would definitely augment our engineers' capabilities. At the same time, AISG runs the 100E programme, and we understood that AISG has many experts specialising in deep learning, which really aligned with our modeling objective. We talked to AISG and, in the subsequent collaboration, we were assigned apprentices to study the problem and help design the model.

So we transformed the business problem into two technical problems. One was a classification problem: we were trying to build a model to categorise products into three categories, high-risk, medium-risk and low-risk, with high-risk products pushed to our engineers as alerts. The other was a regression problem: in order to foresee quality problems, we also designed a predictive analysis to predict a product's future failure rate. A predicted failure rate exceeding a threshold would also alert our engineers so that proactive actions could be taken in time. So that's basically how the collaboration came about.

Basil : So, now you have defined the problems and you have assembled the team. The project kicked off and you have entered execution mode. What were the challenges that you guys had to overcome along the way?

Liu Lu : Well, one of the challenges was how to manage concerns and priorities. In this project there were multiple stakeholders, and they had different focal points, so we assigned a champion who was responsible for collecting requirements and prioritising the items. A second challenge, which is very common, was managing user expectations. To solve this, we organised design thinking workshops to get domain experts involved in the AI development process, ensuring the pain points were fully understood and the solution was closely aligned with the requirements.

Basil : Yes, from all the projects that we have seen, it is not just about technical problems. It is also about managing the stakeholders. This is a very important part of the whole process.

Liu Lu : Yes.

Basil : Then, of course, technically there were also challenges, right?

Liu Lu : Yes, we actually learned a lot on the technical side. Take the predictive analysis, for example. The Weibull distribution has been one of the standard modeling distributions in industry for many years, so when we started to build the model, it was the first thing that came to mind and we spent a lot of time on it. But the model didn't really work very well for our products, especially for products that didn't have a lot of data to train with (the so-called cold start problem). In the end, we had a meeting with AISG and decided to try out other methodologies. Surprisingly, a time-series model worked best. Through this, we learned that understanding the domain is very important for data scientists: don't lock your mind in, stay hungry to discover more, and look at new approaches with fresh eyes. Before we reached out to AISG, we had already trained a classification model. We wanted to improve its performance by another five to ten percent, which was actually very challenging. In this collaboration we found a better way to generate the image colours, which contributed a lot to the improvement. That was what we learned from AISG on the technical side.

Basil : That’s interesting. Could you share a little bit more of the technical details on the data and modeling part of the journey?

Liu Lu : Yes, sure. We can start with the data. Data is always the foundation before we move on to any analysis. In this case, the first challenge was the data size. Every month, millions of records come in, and we used data from the past ten years, reaching billions of records. Laptop CPUs were definitely not enough; we needed at least four GPUs to run the model.

When we look at time-series data, we usually think of line charts. In this case, however, we transformed the data into an image to display the product quality. An image is always more intuitive than numbers, and it became more efficient to identify the product quality. This model, as I mentioned before, actually mimics the engineer's view to identify product risk levels. Previously, engineers looked at tables of data to identify problems, followed by line charts, but they got overwhelmed by the information: there were too many charts for them to view. So we changed to images and trained the engine to classify them. Based on that, products were classified into three risk levels. We trained a deep learning model to understand the images, and it was able to classify them the way our quality engineers would.

As for the evaluation trade-off, there are many ways to define the colours in an image, and some require more complex algorithms, so we traded off training time against model accuracy. This is a very common trade-off in any model building process. In the end, we achieved our goal: the model accuracy improved by about ten percent, with the training time still remaining at fifty minutes, which was good enough for us.
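The transformation Liu Lu describes can be sketched in miniature: render a failure-rate time series as a small binary image, like a chart a vision model could read, then classify the image. The grid size, thresholds and the rule-based stand-in for the deep learning model below are illustrative assumptions, not IBM's actual pipeline.

```python
# Rasterise a failure-rate series into a tiny binary "chart" image,
# then classify the image by risk level. The rule-based classifier is
# a toy stand-in for the deep learning model described above.

def series_to_image(series, height=8):
    """Return a height x len(series) grid with one pixel per time step.
    Row 0 is the top of the chart, i.e. the highest failure rates."""
    image = [[0] * len(series) for _ in range(height)]
    for t, rate in enumerate(series):        # rate assumed in [0, 1]
        row = min(height - 1, int((1.0 - rate) * height))
        image[row][t] = 1
    return image

def classify(image):
    """Toy rule: high-risk if any point reaches the top quarter of the chart."""
    top_rows = image[: max(1, len(image) // 4)]
    return "high-risk" if any(any(row) for row in top_rows) else "low-risk"

healthy = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02]
failing = [0.02, 0.05, 0.20, 0.45, 0.70, 0.90]
print(classify(series_to_image(healthy)))  # low-risk
print(classify(series_to_image(failing)))  # high-risk
```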

Basil : So I see that it is a clever transformation of the original numeric raw data into an image form. And in developing the solution, the human remains in control but with enhanced abilities.

Liu Lu : Yes, because from my own perspective, all these AI technologies in the end can absolutely help to augment human capabilities. AI plus human, I think that works best for now.

Basil : So there is still a human in the loop in this solution – also an example of how humans and machine can work together to achieve better results.

Liu Lu : Yes.

Basil : Thanks, Liu Lu, so much for sharing with us IBM’s intelligent use of AI technology in the business process. I hope that we have a chance for future collaboration between AI Singapore and IBM.

Liu Lu : Thank you very much. It is my pleasure and it has been a good experience collaborating with AISG and I think we learned a lot as well.

Basil : So, thanks for being here today.

Liu Lu : Thank you.

[*] This conversation was transcribed using Speech Lab. The transcript has been edited for length and clarity.
