How do you predict the travel time distribution of a user while factoring in the effect of a traffic disruption on a nearby road? How do you keep track of users’ locations at scale and identify those with similar travel patterns? These are just two of the many research questions that Professor Cong Gao of Nanyang Technological University is addressing through his work on enriched geospatial data management and mining.
It has been more than 12 years since Prof Cong first started focusing on this area of research, and he continues to be fascinated by the challenges and potential surrounding the huge and ever-growing volumes of such data.
“With the proliferation of technologies such as smartphones, stationary sensors and satellites, a flood of geospatial data is becoming available. Such data is enriched by multiple additional sources or contexts such as social information, text, multimedia data and scientific measurements,” he said. “This huge amount of enriched geospatial data holds the key to new and possibly useful knowledge.”
Today, Prof Cong continues to push the envelope in spatial data mining, developing new techniques that can be applied to different types of enriched spatial data. Some examples of this data include enriched point spatial data such as points of interest, trajectory data and region data.
In the traffic example, deep generative models have been developed to make use of enriched geospatial data to predict travel time distribution, future travel speed as well as the impact of a traffic accident on a nearby road.
For the user tracking scenario, Prof Cong and his research team used a novel probabilistic approach to model the spatial, temporal and activity aspects of human behaviour from the user’s historical mobility data. The model has been successfully applied to accurately predict users’ locations, identify potential persons of interest and predict the next location of a user.
Prof Cong has also proposed a similarity computation measure for trajectory data, which can be used to identify users with similar travel patterns. In the current COVID-19 pandemic, this ability to analyse spatial or spatio-temporal data is crucial for applications such as contact tracing and spread prediction.
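A classical instance of such a trajectory similarity measure — offered here purely as an illustration, not as the specific measure Prof Cong proposed — is dynamic time warping (DTW), which aligns two point sequences so that users tracing similar routes at different speeds still score a small distance:

```python
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two trajectories,
    each a list of (x, y) points. Illustrative only."""
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    # dp[i][j] = DTW distance between traj_a[:i] and traj_b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(traj_a[i - 1], traj_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # skip a point in traj_a
                                  dp[i][j - 1],      # skip a point in traj_b
                                  dp[i - 1][j - 1])  # match both points
    return dp[n][m]

# Two users on nearly the same route score a smaller distance
# than two users on different routes.
a = [(0, 0), (1, 0), (2, 1)]
b = [(0, 0), (1, 0.1), (2, 1.1)]
c = [(5, 5), (6, 6), (7, 7)]
assert dtw_distance(a, b) < dtw_distance(a, c)
```

Measures used in practice also handle noise, sampling-rate differences and time semantics, but the alignment idea is the common core.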
In the area of enriched geospatial data management, Prof Cong’s work on spatial-textual indexing, published at the Very Large Data Bases (VLDB) conference in 2009, has been used as a benchmark for subsequent work on spatial-keyword query processing. The paper, which has attracted more than 500 citations, opened up a new sub-area of spatial database research that top researchers and international research groups have since pursued. He has also built systems for managing both static and streaming enriched geospatial data to support various types of queries on such data.
Going forward, Prof Cong is looking to build a machine learning-driven database for enriched spatial data to support both data querying and more advanced data analytics. The new database system and accompanying techniques can be used to create value-added services based on large enriched spatial data, for various application domains such as the telecom industry, intelligent transportation and smart cities. “The ultimate goal,” he said, “is to invent enabling techniques to power the next generation of intelligent systems and unleash the enormous value of large enriched geospatial data.”
What can real-world graphs tell us? This is an area that Assistant Professor Bryan Hooi from the National University of Singapore is delving into as he researches methods for graph-based learning.
Graphs are a way of representing objects and the relationships between them, and they are ubiquitous in many areas – from representing relationships and interactions in social and information networks like Facebook and Twitter, to representing molecules for biomolecular applications.
Prof Hooi’s interest in graph-based learning was sparked when, as an undergraduate, he worked on a thesis project to infer the contact network of a disease (i.e. who passed the disease to whom) using virus genetic sequence information obtained from patients who had caught the disease.
However, he soon found that methods for handling graph data were quite limited. “Most were focused on modelling graphs as mathematical objects,” he said. “There was much less work done on learning from graphs involving real-world data, for example, the detection time for each patient and the genetic sequence data obtained from him/her.”
A major obstacle to applying graph-based learning in more diverse applications is the problem of “label scarcity”, which means that the learning algorithms require large amounts of data with suitable annotations (or “labels”). For example, to learn an algorithm for predicting the toxicity level of a molecule, we need a large dataset of molecules, along with their toxicity levels. However, these toxicity levels are costly to obtain because they have to be measured through laboratory experiments.
This posed a problem for graph-based learning. Prof Hooi noted that while learning methods are often successful when data is well labelled, much more needs to be done to develop effective methods for graph-based learning in label-scarce settings.
“My current focus is on making graph learning methods perform better when labels are scarce or absent,” he said. “This will greatly increase their practical applications in the real world, for example in recommendation engines, and anomaly or fraud detection.”
For example, Prof Hooi is working with a ride-hailing app company on methods for recommending to a user the next place that he or she may be interested in visiting. In this case, graphs are used to represent the relationships between users or locations, to allow for more accurate recommendations.
“Let’s say we know the last few places a user visited, and want to recommend where they should go next. Rather than thinking about the locations separately, we found that it is more effective to model the locations using a graph. For example, two shopping malls in close proximity may be related, as users tend to visit them together,” he explained.
“We found that treating the locations as a graph allows for more accurate recommendations. This can be beneficial for the user in terms of convenience and for the merchants by recommending them to users who have a high chance of visiting them.”
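As a toy illustration of the underlying idea — far simpler than the graph-learning models actually used, and with hypothetical location names — one can build a transition graph from users’ visit histories and recommend the most frequent successors of the current location:

```python
from collections import defaultdict

def build_transition_graph(visit_sequences):
    """Count how often users move from one location to the next.
    Returns {location: {next_location: count}}."""
    graph = defaultdict(lambda: defaultdict(int))
    for seq in visit_sequences:
        for src, dst in zip(seq, seq[1:]):
            graph[src][dst] += 1
    return graph

def recommend_next(graph, current, k=2):
    """Recommend the k most frequently visited successors of `current`."""
    successors = graph.get(current, {})
    return [loc for loc, _ in sorted(successors.items(),
                                     key=lambda kv: -kv[1])[:k]]

# Hypothetical visit histories of three users.
histories = [
    ["mall_a", "mall_b", "cafe"],
    ["mall_a", "mall_b", "park"],
    ["gym", "mall_a", "mall_b"],
]
g = build_transition_graph(histories)
print(recommend_next(g, "mall_a"))  # prints ['mall_b']
```

Graph neural approaches go further by learning embeddings over such a graph, so that related locations (e.g. nearby malls) reinforce each other’s recommendations even without direct co-visits.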
Another application of graph-based learning is for detecting fake ratings in online review platforms such as Amazon. On such sites, having reliable ratings is of key importance. For this setting, Prof Hooi has developed methods to detect sellers who falsely inflate the scores for a product by adding fake ratings or reviews.
In these use cases, an additional challenge is that the graph is not fixed but changes over time, so the graph-based learning method has to adapt as well. Other examples of such temporal graphs include graphs of user interactions, such as messages and posts in social networks, and of student activities, such as the viewing of videos during a massive open online course.
More generally, Prof Hooi has also found that making small, random changes to a graph can effectively increase the amount of training data. Such graph-based “data augmentation” can help a lot when labels are scarce, he said. Following up on this, he is researching how to design better graph-based data augmentation methods that preserve the important structures in a graph.
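One simple and widely used form of such augmentation — given here as an illustrative sketch rather than as Prof Hooi’s specific method — is random edge dropping, where each training view of a graph removes a random subset of edges:

```python
import random

def drop_edges(edges, p=0.1, seed=None):
    """Graph data augmentation by random edge dropping: return a copy
    of the edge list with each edge independently removed with
    probability p. Other schemes perturb nodes or node features."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]

# One labelled graph yields many slightly different training views.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
views = [drop_edges(edges, p=0.2, seed=s) for s in range(5)]
```

The design question Prof Hooi raises is which perturbations are safe: too aggressive a drop rate can destroy exactly the structures the model needs to learn from.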
Through these research efforts, Prof Hooi hopes graph-based learning methods can be applied more widely to deliver a practical impact across more areas in the real world.
There is no widely agreed upon definition of a supposed “Smart City.” Yet, when you see city employees — in this case city-state employees — working in what are obviously smarter ways, “you know it when you see it.” One such example of a smarter way to work in a smart city setting is the way that employees of the Land Transport Authority (LTA) in Singapore are using a new generation of data-driven, AI-enabled support systems to manage the city’s urban rail network. We spoke to LTA officers Kong Wai, Ho (Director of Integrated Operations and Planning) and Chris Hooi (Deputy Director for Communications & Sensors, a unit within LTA’s IT, Cybersecurity & Digital Services Group) to find out more about their roles in implementing the FASTER system, and to Meng Fai, Leow (Deputy Group Director in the Rail Asset, Operations & Maintenance Group) and Christine Wee (Principal Manager for Asset Management Data and Analytics under LTA’s Rail Asset, Operations and Maintenance Group) about the usefulness of the Rail Enterprise Asset Management System.
Kong Wai, Ho, Chris Hooi and the FASTER system
In mid-2014, LTA announced they were partnering with IBM on an applied research project to create a design blueprint for the Fusion AnalyticS for public Transport Event Response (FASTER) system. The announcement also mentioned that two other private sector companies, a local telco provider and one of the rail network operators, would participate by providing data access. In 2016, after nearly two years of applied research trials and solution blueprinting and design, the FASTER project moved into the full-scale development phase jointly with ST Engineering and IBM. In mid-2018, the Land Transport Operations Center (LTOC) was re-launched with digital capabilities for comprehensive situational awareness and integrated response management, based on the new FASTER system, which entered service at that time.
Kong Wai and Chris, who worked on FASTER, explained, “Our rail network operational challenge is to prevent interruptions to the regular flow of the trains as well as to the regular flow of commuters who are waiting on the platform to board trains, and those who are riding on trains. We need to be alerted as early as possible that an incident has happened or that it will soon happen, so that we can respond quickly and in a well-informed way. The key is to be alerted as early as possible so we can take actions that prevent the incident from turning into a major problem or a crisis.”
The duo elaborated on how FASTER aids them. “Our LTOC serves as the bridge between the rail operators, LTA, and all relevant government authorities who would be involved in responding to any type of incident in the rail network system. FASTER gives us comprehensive situational awareness of the entire rail network and its linkages with other parts of our public transportation network. We get real-time alerts of service degradation that warn us that an incident is evolving. Once the incident occurs, we can see the situation and how the impacts are propagating to other rail stations on the network and assess what is happening. This gives us the ability to make informed decisions on how to deal with the problem. For example, FASTER helps us to quickly assess the impacts of responding to an incident by injecting additional trains onto the track to handle backlogs of passengers, or in more severe incidents, by advising passengers to board special bus bridging services that we quickly deploy to take them to rail stations not impacted by the incident.”
LTA shared, “For more than a decade, we have supported our long-term strategy of achieving smart urban mobility in a fast-changing digital world. This is the work of a joint ops-tech team focused on building and sustaining a land transport digital ecosystem through an extensive network of sensor feeds, analytical processes and commuter outreach channels, synergistically put together to deliver a people-centred land transport system.”
The team highlighted the distinguishing feature of FASTER: an emphasis on a commuter-centric approach to IoT sensing, situation assessment and incident response planning. “In the past, we only used measurements of engineering parameters to determine the efficiency of our rail network system (e.g., use of rail signals, fault detectors, and alignment to schedules). However, these don’t provide a direct measurement of the commuting experience on the ground, such as the number of times a passenger on the platform could not board an arriving train because it was too packed, the crowd size on the train platform or in the trains, and the duration of delays faced by commuters. This gave rise to the need to explore more commuter-centric sensors with FASTER.”
They provided a more technically-oriented description of the FASTER system. “It is a big data AI platform which mines IoT sensor sources such as WiFi, cellular, farecard, train, and taxi, for early warning of potential rail anomalies in real-time. Using fusion analytics, FASTER provides round-the-clock visibility of the rail network. It detects unusual network events and provides automatic alerts in advance when service levels fall below the acceptable range. The system supports real-time monitoring and situation assessment which enables the LTOC team to evaluate the effectiveness of incident response measures such as putting additional trains into service or dispersing passengers stranded at a rail station with bus bridging. It does not stop with FASTER. FASTER’s outcomes are fed into a broader Command, Control and Communication (C3) system that draws insights across other land transport domains such as roads and buses for a comprehensive view of the land transport network to co-ordinate effective response to anomalies and incidents.”
LTA noted that when they first introduced FASTER in mid-2018, alerts on service degradations predicted only about 40% of incidents. With accumulated operating experience, continuous tuning and enhancements, the ability to provide a forewarning of an impending incident rose to 70% by early-2019, and further to 80% by end-2019. As the team, together with LTA data scientists and LTOC operations specialists, continues to work on improving sensing, data fusion, and analytics for situation assessment and response planning, they expect this percentage to increase further.
They highlighted what has changed in terms of how the LTOC team does its work. “In prior years, we had fewer rail lines and our LTOC staff did not have FASTER. We relied on our rail network operations veteran staff with many years of experience to qualitatively anticipate how incidents would impact the rail network and to assess different options for response. That approach does not work anymore because our rail network is now much more complex due to increases in both the number of rail lines and number of stations, and we will further expand the rail network over the next 20 years making it even more complex. We would not be able to manage either our current situation or our future situation without the aid of the FASTER platform and without the help of Data Science and AI.”
LTA added that with FASTER, “Our younger LTOC staff with just a few years of network operations experience can more accurately and quickly assess the situation, understand the network-wide impacts of the incident and develop response plans.” They note that another benefit of using FASTER is that, “We are now more inclusive as there is sharing of information and collaboration with our rail line operators. The fact that our rail line operators want more access to the information in FASTER is a strong endorsement of its usefulness and value. FASTER provides us a smart and integrated use of data and technology that facilitates LTA in building an agile, lean and inclusive land transport ecosystem that can serve the needs of our commuters and stakeholders better and faster.”
LTA commented on the way FASTER has increased the productive capacity of the LTOC team for rail network operations. “We have substantially increased our rail lines and stations, and we can still do the system-wide monitoring with this same size team of four monitoring officers. In fact, since 2012, the number of stations we are monitoring has nearly doubled yet we have the same lean monitoring team. FASTER has made us much more agile, productive and efficient in our ability to monitor and to manage the response to incidents.”
LTA summed up the benefits of FASTER. “As FASTER’s capabilities and performance have improved over the past several years, we are now more in control because we always know what is happening across the rail network and the other parts of our public transit system, and we know how to respond efficiently and effectively.”
Christine Wee, Meng Fai, Leow and the REAMS system
Even with a system like FASTER, good asset management, including maintenance planning and support, is essential so that LTOC monitoring staff would not be overwhelmed with operational issues resulting from equipment breakdowns due to poor asset management and support.
About a decade ago, in the earlier (and pre-FASTER) days of the LTOC, that was indeed the situation LTA was dealing with. One of the rail operators was faced with challenges in the maintenance of ageing assets, breakdowns were increasing, and the Singapore public, accustomed to a norm of superior service levels for their city services, were quite vocal about their displeasure with increased rail system breakdowns. The government had already announced a plan in 2008 for phasing in a New Rail Financing Framework where all pre-existing rail operating assets like trains, power, signalling systems and station equipment, owned by the two private sector operators would eventually be purchased back by LTA and all ongoing and future capital purchases would be done centrally by LTA as well. By 2018, the private operator rail asset transfers to LTA were completed and this new arrangement greatly accelerated the national priority for a better and smarter approach to rail asset management.
LTA officially announced the award of the contract for the Rail Enterprise Asset Management System (REAMS) to Siemens Mobility and ST Engineering in late-2018, and the first phase of a multi-phase, multi-year production deployment effort started in mid-2020.
Meng Fai and Christine explain that “asset management is much more than maintenance. It’s about a whole interconnected set of concerns: the performance of our total rail system over the total lifecycle of each asset type; resourcing for maintenance where we plan for the levels of labour and capital that will be needed for maintenance work in the next month, quarter, year, and multi-year period; assessing whether we are overusing the assets and driving up system performance at the expense of increasing maintenance costs and shortening asset lifetimes; planning for the capital investment for new train lines and for refurbishing existing train lines in ways that balance overall capital costs with overall maintenance costs and levels of operating performance; determining how we keep assets in good working condition and how we extend the working life of our rail assets, and also supporting real-time troubleshooting and fault-fixing of existing assets.”
LTA notes that with the new REAMS system, “We have started on the journey of using machine learning with all of our asset data to plan for maintenance needs, maintenance costs, and total lifecycle costs, and also to support real-time troubleshooting and repair. Now we have the data, and we have REAMS as the supporting platform, and it is very exciting to see the results.” LTA is convinced that “we need the AI and ML to be there to help us, as our asset management-related needs are complex and massive. We are on a transformation journey to use the data-driven, AI-enabled REAMS approach to asset total lifecycle management.”
LTA added that the railway data science team focused on supporting asset management and REAMS development has expanded in recent years from four people to 20, a combined effect of the asset transfer and the REAMS project. Within this expanded railway asset management data science team, one-third are internal LTA staff and two-thirds are contractors from vendors. This mix enables the team to build up technical expertise in the relevant analytics and AI skills more quickly than they could on their own. Over time, the LTA team will take on more applications on the local urban rail systems.
LTA explains that this data science team has given the REAMS system the ability to support maintenance and asset-related work over a range of time horizons. “We do reliability predictions for our train fleet over horizons from months to several years. We do asset health predictions to predict faults over the next seven-day horizon. Once rail equipment or system faults occur, the maintenance technicians on the ground who work for the rail operators key the various train alarms and on-board event status codes received at the operating centre into REAMS. REAMS then mines past failures and prior causes found in the maintenance records and performs fault diagnostics based on associations with the real-time alarms and event status information to derive the possible fault causes. This type of support can decrease our fault troubleshooting and repair time. All of this maintenance and repair data feeds into the long-term asset planning modules of REAMS, and gives us the data we need to make better predictions for longer-term asset lifecycle management over a horizon of a decade and more.”
The team is not at all worried about whether their jobs will ever be fully automated. They said, “We will never finish the exploration and exploitation of our data, no matter how much effort we put into it, as there are so many new opportunities and use cases that can be done. The combination of these new sources of data and the AI/ML based analysis is creating value for us and also creating an enormous amount of new exciting work for us.”
Initially, REAMS was deployed on only one of the six existing rail lines in Singapore. Over subsequent phases, the new AI/ML enabled asset management systems will be used for systems on all of the other existing lines as well as for the new lines that will be built over the next decade. The team emphasizes how time-intensive it is to do the work that they are doing with REAMS. They said, “The process encompasses design, delivery and improvements to make steady progress. It takes a lot of time to build up the data sources and develop these asset management analysis and prediction models for each of the key asset systems within a rail line and then to integrate all of this together for that line. And then we have to do this across all of our current and new rail lines.”
The people on the LTA teams working on expanding and using the FASTER system for rail network operations and the REAMS system for rail asset management will be busy for years and decades to come. These are good examples of how AI support systems are enabling the people who manage city infrastructure and services — in this case, urban rail systems — to work in smarter ways and move towards the vision of being a smart city.
Artificial Intelligence (AI) has been a buzzword for the last 5–6 years. However, the phrase “Artificial Intelligence” was coined back at the famous Dartmouth College conference in 1956. It is only recently that AI has reached a level of usefulness that has allowed it to gain wide industry adoption.
Many people are using AI-enabled systems today, whether they know it or not. When you use Google search, watch a movie on Netflix or purchase an item on Amazon, some form of AI-enabled algorithm is at play (we use “AI” broadly here to encompass the practices of data science and machine learning (ML)).
Universities and private learning institutions are rushing out data science, machine learning, and AI courses, encouraged by various government initiatives to embrace AI. These courses train students to learn the different state-of-the-art AI/ML algorithms and graduate them as AI Engineers and Data Scientists at first degree, Masters or PhD levels.
Such programmes are suitable for companies that require these engineers and scientists to build sophisticated algorithms. However, the supply of these algorithms must be matched by their consumption.
The person-in-the-street (whom we call “Everyone”) needs to be data-savvy and AI-aware. They need to become intelligent users of such systems, whether using them as tools or integrating them into their personal and professional lives.
About AI Singapore
AI Singapore was announced back in June 2017 with a budget of S$150M to focus on:
● AI Research to drive state-of-the-art AI research in Singapore
● AI Technology to use AI to solve national problems faced by Singapore
● AI Innovation to accelerate the adoption of AI by organisations in Singapore
Fast forward to today, various reports and indices put Singapore in the top 10 for AI:
In terms of research, while Singapore is a small country with a correspondingly small population, our AI researchers are punching above their weight, with Singapore ranked #1 on the Field-Weighted Citation Impact (FWCI).
We launched the AI in Health Grand Challenge in June 2018:
“How can AI help primary care teams stop or slow disease progression and complication development in 3 “Highs” (or 3H) — Hyperglycaemia (diabetes), Hyperlipidaemia (high cholesterol), and Hypertension (high blood pressure) patients by 20% in 5 years.”
The challenge is midway through its 5-year programme.
More than 500 organisations have contacted AI Singapore to help them with their business problems where AI could be a potential solution. In the 100-Experiments (100E) programme, we have approved 66 projects worth up to S$500K each. Twenty-nine of these projects have been deployed into production today.
Companies we have worked with include start-ups, SMEs, MNCs and government agencies. The problem statements range from computer vision (CV) to natural language processing (NLP) to tabular-data related ones like fraud detection.
More importantly, AI Singapore has grown the three planned programmes targeting researchers and companies to encompass a broader range of citizens and residents of Singapore that AI would impact.
Besides PhDs in AI, we need Everyone to be aware of AI for the various programmes in AI Singapore to succeed. Everyone refers to business owners, working professionals, students, kids, and even retirees.
Those not developing AI solutions or products are consuming AI applications and hence need to be data-savvy and AI-aware to make intelligent and responsible decisions when interacting with such systems.
AI Apprenticeship Programme (AIAP)®
One of the first challenges we faced when we started the 100E programme was finding sufficient AI Engineers to work on these projects. The shortage of AI talent worldwide is well-documented. However, it is particularly acute in Singapore, where we need to compete with Google, Facebook, Shopee, Lazada and Grab for the same talent, but with a university remuneration package.
We had a hunch that there were citizens who were interested in AI and had been learning Python and AI/ML on their own for at least a year or two, and that with those foundations already in place they would be easy to train. We would only need to deepen and round off their skills before they could start to work on AI projects.
We launched the AI Apprenticeship Programme (AIAP)® in 2018, and to date, with eight batches, we have trained more than 170 citizens to build production-ready AI systems. We grew our team of four AI Engineers to more than 60 today, with more than 90% of them coming through this programme. The rest of the AIAP graduates are working in organisations such as GovTech, DSTA, GIC, SGX, Grab, DBS Bank, Shopee, Facebook, etc.
Interestingly, and eye-opening for many hiring managers, our AI Apprentices come from backgrounds as diverse as accountancy, law, business, the social sciences, science and engineering. Many did not expect non-computer science students to be able to do AI.
The reason is simple. Working with data is not the same as building the algorithms that computer science graduates learn about. The excitement of solving a business or engineering problem, a love of data, and a passion for learning can allow anyone to become a good AI/ML engineer.
The strategy worked so well that the AI Apprenticeship Programme (AIAP)® even won IDC’s Talent Accelerator Award in 2019!
When speaking to companies to convince them to adopt AI solutions to improve their competitiveness, we found that many managers and professionals were fearful of AI and held many misunderstandings about it. It did not help that the media often hypes AI world domination and the millions of potential job losses AI could cause.
To counter the myths and FUD, we created and launched a 3-hour, face-to-face workshop called AI for Everyone (AI4E)® in August 2018, in which we walk participants through what AI is. It initially ran twice a month (a Friday afternoon and a Saturday morning); we have since converted the popular AI4E into an online class.
Our original target was 10,000 citizens by July 2021. Today we are at 21,359 and counting, and the new stretch goal for the team is 100,000 citizens by 2025.
We also wanted to get the younger generation excited about STEM. Parent-volunteers are trained and certified to deliver the AI for Kids (AI4K)® programme. This allows us to get the kids, parents and even grandparents trained on AI and to become data-savvy and AI-aware.
To further reinforce and make learning AI fun, we created an AI cartoon character competition to get children from primary school to imagine and draw an AI cartoon character that will become the main character of an AI storybook. The competition gets both the children and parents to start thinking about AI, not fear AI.
Getting Everyone to be AI Aware is what we call the Generational AI Capability Development initiative. We want to ensure that all strata of society — age, gender or economic status — get exposed to AI, made aware of the opportunities that AI brings, and become savvy users of AI. For those who eventually become AI developers, we also hope that they will be responsible and ethical.
As AI Singapore progresses into its Phase 2 and Phase 3 work, we continue to innovate our programmes to cater to both organisations and individuals.
Singapore needs 8,000–10,000 AI Engineers over the next five years. AI Singapore cannot do this alone, and neither can the universities.
As such, we have operationalised our core programmes (100E, AIAP®, AI4E®, AI4I®) into a comprehensive AI for Transformation (AI4T) programme. AI4T allows organisations to implement, at an organisational scale, the same programmes we have run at the national scale.
We developed the AI Readiness Index (AIRI) to help organisations identify where they are in their AI maturity journey and what intervention or AI Singapore programmes are suitable.
To operationalise the AI4T programme, we will train organisations and provide them with the AI training materials, administrative and operations handbooks to guide them.
We have shared our programmes with governments, universities, associations and non-profit organisations in ASEAN, the EU, the Middle East and the US. Many exciting discussions and collaborations are underway.
Let’s work together to make Everyone AI Aware and data-savvy.
AI Bricks are a collection of open source AI products and tools built based on industry needs and feedback. They provide a set of reusable components or tools which can be configured to solve common business problems. This lowers the barrier to AI adoption and allows organisations to integrate AI into their workflows more rapidly. Various engineering teams in AI Singapore are hard at work designing and implementing these AI Bricks across different fields:
Robotic Process Automation (RPA)
Federated Learning
Computer Vision (CV)
Natural Language Processing (NLP)
Robotic Process Automation (RPA)
Robotic Process Automation (RPA) is a form of business automation in which well-defined sets of human-computer interactions are programmed to be executed by software robots. TagUI, first released in 2017, is a free RPA tool supported by AI Singapore that is easy to use and works on Windows, Mac and Linux.
Users of TagUI include working professionals in organisations such as EY, Accenture, ECCO Shoes, Mercedes-Benz, Bank of Brazil, Ministry of Education, as well as students in various higher education institutions working on their Masters or PhD projects.
TagUI comes in different flavours to suit users of different backgrounds and tastes. There is a human-language version that works with over 20 human languages, a Python version, a C# version, an upcoming Java version, and a Microsoft Word version, so you can create RPA robots easily in your favourite editor, language and OS. Thanks to its flexible yet stable architecture, most of these flavours are created by the community, for the community.
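For a sense of what this looks like, the human-language flavour expresses an automation as plain steps. The snippet below is an illustrative sketch only; the website and element identifiers are hypothetical, and a real flow would use identifiers from the actual page:

```
// illustrative TagUI flow in the human-language flavour
https://example.com
type search-box as hello world
click search-button
snap page to result.png
```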
Get started with RPA using TagUI by visiting the TagUI homepage or joining the Telegram chat.
Federated Learning

Machine learning often requires substantial amounts of data, and this data is often distributed across different parties. In theory, if the data could be pooled, this would lead to better machine learning models. In practice, however, data privacy concerns often mean that simple sharing is not a viable option, as is the case in the finance and healthcare industries.
Federated Learning is a technique that enables a machine learning model to be trained on data held by different parties without the data ever leaving where it is held. Instead, models are trained locally and only the changes to the models are shared. This privacy-preserving mode of training is coordinated by a trusted third party.
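The core idea can be sketched with federated averaging on a toy one-parameter model. This is a minimal illustration, not Synergos itself: two hypothetical parties fit y = w·x on their own data, and only the locally trained weights (never the raw data) are sent to the coordinator for weighted averaging.

```python
# Minimal sketch of federated averaging (FedAvg): each party trains
# locally on its own data, and the coordinator aggregates only the
# resulting model parameters.

def local_update(w, data, lr=0.01):
    """One local pass of gradient descent for the model y = w*x,
    run entirely on the party's own data."""
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Aggregate local models, weighting each party by its data size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(local_weights, sizes)) / total

# Two parties whose raw data never leaves their premises;
# both datasets are consistent with y = 2x.
party_a = [(1.0, 2.0), (2.0, 4.0)]
party_b = [(3.0, 6.0), (4.0, 8.0), (5.0, 10.0)]

w_global = 0.0
for _ in range(20):                  # communication rounds
    w_a = local_update(w_global, party_a)
    w_b = local_update(w_global, party_b)
    w_global = federated_average([w_a, w_b],
                                 [len(party_a), len(party_b)])
```

After a few rounds the shared model converges towards w = 2, even though neither party ever saw the other's data.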
Synergos is a platform that houses the trusted third party and the necessary infrastructure around it. The goal is to make Federated Learning user-friendly and easily adaptable across industries. The team is working towards the launch of Synergos in the second half of the year.
In the meantime, learn more about the work done here.
Computer Vision (CV)
AI models in Computer Vision (CV) have come a long way, with object detection, segmentation and pose estimation from images now a reality. For example, at the start of the COVID-19 pandemic, the team worked with HP to rapidly deploy a solution that monitors social distancing on the factory floor by combining human skeletal point estimation with distance projection heuristics. Work is also in progress with other clients on use cases such as footfall analytics and physical exercise feedback, just to name a few.
Also in development is a modular CV inference framework that is flexible and can be configured to suit different CV applications. It will contain carefully curated models and use cases, and comes with configurable inputs and outputs. With it, users will be able to create a proof of concept very quickly by selecting the appropriate modules for their application. All this is coming in the second half of the year.
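To illustrate the idea of a configurable pipeline (this is a minimal sketch, not the framework itself, and all node names are hypothetical), such a modular design can be thought of as a chain of interchangeable nodes passing a shared data dictionary along:

```python
# Illustrative sketch: a CV inference pipeline composed from
# interchangeable "nodes", each a callable that reads from and
# writes to a shared data dictionary.

def input_node(data):
    data["frame"] = [[0] * 4 for _ in range(4)]   # stand-in for an image
    return data

def detector_node(data):
    # A real node would run an object-detection model here;
    # we hard-code two bounding boxes for illustration.
    data["bboxes"] = [(0, 0, 2, 2), (2, 2, 4, 4)]
    return data

def count_node(data):
    data["count"] = len(data["bboxes"])
    return data

def run_pipeline(nodes):
    """Chain the configured nodes over a shared data dict."""
    data = {}
    for node in nodes:
        data = node(data)
    return data

# "Configuring" the pipeline is simply choosing which nodes to chain.
result = run_pipeline([input_node, detector_node, count_node])
```

Swapping in a different detector or output node changes the application without touching the rest of the pipeline, which is the appeal of the modular approach.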
Natural Language Processing (NLP)
Combining rich fundamentals and research with modern computing power and model architectures, the field of Natural Language Processing (NLP) has made great strides over the last decade, reaching new heights in language understanding and end applications.
The team is involved in several projects to build products not yet available elsewhere, with the goal of making the power of language processing accessible to anyone, or any company, wishing to use it. In addition, local research and regional Southeast Asian languages are put at the forefront of tool development. Some of the projects expected to come to fruition in the second half of the year include:
SEACoreNLP – aims to promote the development of NLP in Southeast Asia and to be the central hub for it. The current focus is on “core” NLP: tasks such as part-of-speech tagging, syntactic parsing and semantic role labelling for Southeast Asian languages.
SG-NLP – aims to bridge the gap between the industry and Singapore-based research groups to accelerate the growth of applied and translational research in NLP.
SenseMaker – tackles the problem of text annotation using weak supervision and active learning approaches.
Hasky – an Information Retrieval and Question Answering system builder that uses self-supervised training techniques to significantly reduce the need for text annotation.
Beagle – developed for Information Retrieval with an emphasis on gradual data accumulation while monitoring model development and controlling “data drift”. More information here.
AI Singapore has launched a refreshed learning portal, LearnAI! This is a dedicated site to help you in your AI and Data Science learning journey, whether you are a student, a professional or an organisation.
With the launch of our new portal, we are excited to announce a PREMIUM annual subscription membership. This membership gives you access to premium courses like our popular AI for Industry (AI4I)® programme, and you will get access to new premium courses at no additional cost. It also comes with one year of DataCamp Premium access.
With our FREE membership, you can continue to access courses like AI for Everyone (AI4E)® and AI for Kids (AI4K)®, as well as content from our partners, and we will be adding new courses over time. Recently, we released a new course, Data Analytics for Everyone. If you have always wanted to learn how to use data to answer questions and make better decisions, this is the course for you.
Our portal has been redesigned with an intuitive interface to help you find the right course for your needs.
If you are looking to learn some general knowledge about AI, head on to Everyone.
Students can access AI4S and AI4K courses, while Educators can request free six-month DataCamp access for their students.
For working professionals looking to learn and upskill in Data Science and Machine Learning, you can look at all our courses under the Professional section.
In tandem with the rise in COVID-19 cases in the community, the Singapore Government has been progressively tightening restrictions on the size of social gatherings. Group size limits were reduced from 8 to 5 on May 8 (the reversion to Phase 2), and further from 5 to 2 on May 16 (Phase 2 – Heightened Alert). The CV Hub team at AI Singapore has been developing the Group Size Checker tool since the start of the pandemic. The tool reads in live video feeds or recorded videos and checks whether the group size limit has been violated.
Recognising that COVID-19 will be with us for some time to come, with periodic tightening and loosening of social gathering limits, the team made sure that the “group size limit” can be easily configured according to the requirement of the day.
For example, the first animated GIF below shows a detection with the group size limit set at 5, while the second shows a detection with a limit of 2. The tool was developed to be flexible and can be adapted to trigger further actions, such as taking a snapshot of the detection or sending an alert to an API.
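The configurable check can be sketched as follows. This is a simplified illustration, not the actual tool: it assumes a person detector has already produced (x, y) centroids on a common ground plane, and links people into groups with a simple distance threshold, whereas the real system works on video frames and camera geometry.

```python
import math

# Illustrative sketch of a configurable group-size check: detected
# people are represented by their (x, y) centroids; people closer
# than `radius` are linked into the same group (transitively), and
# any group larger than `limit` is flagged as a violation.

def find_groups(centroids, radius):
    """Union people whose pairwise distance is within `radius`."""
    parent = list(range(len(centroids)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(centroids):
        for j, (xj, yj) in enumerate(centroids[:i]):
            if math.hypot(xi - xj, yi - yj) <= radius:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(centroids)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def violations(centroids, radius, limit):
    """Return the groups whose size exceeds the configured limit."""
    return [g for g in find_groups(centroids, radius) if len(g) > limit]

# Six detected people: a cluster of four and a pair further away.
people = [(0, 0), (1, 0), (1, 1), (2, 1), (10, 10), (11, 10)]
flagged = violations(people, radius=1.5, limit=2)
```

Because `limit` is just a parameter, tightening the rule from 5 to 2 is a one-line configuration change, which is the flexibility described above.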
The Annual ICLQ (International & Comparative Law Quarterly) Lecture this year was based on the paper ‘Artificial Intelligence and the Limits of Legal Personality’ by Professor Simon Chesterman, Dean of the Law Faculty of the National University of Singapore and Senior Director of AI Governance, AI Singapore.
The article examines the debate over whether artificial intelligence systems should be granted some form of legal personality as they increase in sophistication, and argues that while most legal systems could develop a novel category of legal persons, such arguments are insufficient to show that they should.
Singapore’s AI Governance Conference 2021 brings together industry thought leaders, policymakers and experts from the East and West to discuss leading technological trends and their implications for businesses and policy making, and provide insights on the value of AI governance to industry. Attendees can also learn about the successes of implementations of Trustworthy AI in the real world through presentations and demonstrations from participating global and local companies.
Venue: Sands Expo and Convention Centre, Marina Bay Sands Singapore
Semiconductor chip placement is a critical step in chip design, but it is also a laborious task that can take weeks to complete.
FPGA (field-programmable gate array) and ASIC (application-specific integrated circuit) chips, which are widely used in consumer electronics and enterprise systems, can consist of hundreds to millions of transistors depending on their size and complexity. Before they can be manufactured, different modules have to be carefully arranged on the chip.
Given the complexity of today's semiconductor products, this requires the expertise of highly skilled and experienced engineers. When carrying out chip placement, engineers work with rectangular grids, each of which contains parts of one or more modules. Poor chip placement can result in a product that does not meet performance specifications, leading to costly delays in production.
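To make the notion of placement quality concrete, one common proxy is half-perimeter wirelength (HPWL): for each net (a set of connected modules), take the half-perimeter of the bounding box around the modules' positions, and sum over all nets. The sketch below is purely illustrative; the modules, nets and coordinates are made up, and the metrics used in practice are more involved.

```python
# Illustrative sketch: scoring a placement by half-perimeter
# wirelength (HPWL), a standard proxy for routing cost.
# Lower scores indicate shorter expected wiring.

def hpwl(placement, nets):
    total = 0
    for net in nets:
        xs = [placement[m][0] for m in net]
        ys = [placement[m][1] for m in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Hypothetical modules on a grid and the nets connecting them.
nets = [("A", "B"), ("B", "C", "D")]

tight = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
spread = {"A": (0, 0), "B": (7, 0), "C": (1, 6), "D": (7, 7)}

# The tight placement keeps connected modules close together,
# so it scores lower (better) than the spread-out one.
```

An optimiser, whether a human engineer or a learned model, is effectively searching for placements that drive scores like this down while meeting other constraints.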
To address these challenges, chip design optimisation company Plunify decided to leverage artificial intelligence (AI) in its quest to help businesses and organisations build better semiconductor products.
Started in 2009 by two passionate engineers, Plunify seeks to improve design performance and save time and resources in the design process.
Working with AI Singapore under the 100 Experiments (100E) programme, it explored the use of machine learning to assist human experts and speed up the chip placement process. Computationally efficient generative models were used to generate chip placements. Convolutional neural networks and other deep learning architectures were then applied to improve the quality of chip placement by predicting good placements for production.
The outcomes of the project were remarkable. A ground-breaking 80 per cent accuracy was achieved in applying image recognition to chip placement. This enabled Plunify's chip design partners to slash the development cycle for products ranging from automotive power chips to 5G communications chips from two months to a week.
The prototyping cost for new chips was also reduced by 8 per cent, and new chip designs could be developed 10 times faster.
To find out more about our 100E Programme, please click here.
“The team from AISG provided much-needed expertise and diligence to help us design and craft the data pipeline and models for this effort. This was vital to the success of the project.”