
Machine Learning During a Crisis

Given the COVID-19 pandemic, I have more time to share my thoughts these days. So I made a call to my LinkedIn community to find out what they would like me to write on. One interesting topic is “Launching machine learning models in difficult times.”

Context

Companies that are mature in data science generally make one huge assumption: "The future will look VERY similar to the past". That is why we use past labeled data to build supervised machine learning models. To gain confidence that the trained models can be used in the future, we do cross-validation (a.k.a. out-of-sample validation) and out-of-time validation.
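To make the distinction concrete, here is a minimal sketch of the two validation styles on a toy dataset (the dates, features and labels below are fabricated purely for illustration): an out-of-sample split shuffles records regardless of time, while an out-of-time split trains strictly on earlier observations and tests on later ones.

```python
import random
from datetime import date

random.seed(0)

# Toy labeled dataset: (observation_date, features, label) records.
records = [
    (date(2018, 1, 1 + i % 28), [random.random(), random.random()], i % 2)
    for i in range(100)
]

# Out-of-sample (cross-validation style) split: random, ignoring time.
shuffled = records[:]
random.shuffle(shuffled)
cut = int(0.8 * len(shuffled))
oos_train, oos_test = shuffled[:cut], shuffled[cut:]

# Out-of-time split: train strictly on earlier dates, test on later ones.
cutoff_date = date(2018, 1, 22)
oot_train = [r for r in records if r[0] < cutoff_date]
oot_test = [r for r in records if r[0] >= cutoff_date]

# Every training observation predates every test observation.
assert max(r[0] for r in oot_train) < min(r[0] for r in oot_test)
```

The out-of-time split is the stricter check of the "future looks like the past" assumption, because the model is scored on a period it has never seen at all.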

Unfortunately, training and testing machine learning models takes time, and if your chosen model needs to undergo an external audit, that will take even longer. Chances are you may be faced with a situation where the world has rapidly taken a huge downturn, as in the current pandemic or the 2008 financial crisis. This means the machine learning model you have built, trained to understand behavior under normal circumstances, now needs to be used in circumstances that are completely different!

So much effort was spent training and finalizing the model! (Noooo!) Worse, next to no data has been collected on consumer behavior in a crisis, and training a new model will take time – a scarce resource in a crisis.

Now what?

Consider this: does consumer behavior change very drastically in a time of crisis? For instance, assume you are a supermarket owner with only brick-and-mortar as a distribution channel. In a pandemic, consumer behavior may not change drastically, since grocery shopping is still needed. If you expect consumer behavior to be moderated rather than changed drastically, then in my opinion the model can still be launched.

Manage the Cut-Offs

To make better decisions with the model, you might now look at the cut-off points. For instance, banks have credit scorecards – machine learning models that determine credit risk. The higher the score, the lower the credit risk.

Let us say the model was built during an economic boom, so it learned about credit card consumers' behavior in a boom time. In normal circumstances, the bank will take action on a consumer when the score falls below a cut-off of 78 (out of the highest attainable score of 156). The cut-off is a strategy parameter the bank uses to manage default risk in boom times.

When the model is launched in an economic bust, where consumers across the entire group have a higher chance of default, the bank may consider increasing the cut-off from 78 to, say, 120. This means the bank will now take action to mitigate risk for any consumer scoring below 120.

What I am proposing is: go back to your model and think about the decision cut-offs. If you are using a classification model, perhaps manage the threshold to mitigate the situation. If you previously found that a threshold of 54% was ideal – i.e. it gave the best precision or recall – it might be time to re-examine and adjust it.
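As a minimal sketch of what managing a classification threshold looks like in code (the scores and cut-off values below are hypothetical, not from any real scorecard):

```python
def classify(scores, threshold):
    """Map model scores (higher = lower risk) to approve/decline
    decisions at a given cut-off."""
    return ["approve" if s >= threshold else "decline" for s in scores]

# Hypothetical scores from an already-trained model.
scores = [0.30, 0.52, 0.58, 0.75, 0.90]

# Boom-time threshold, tuned for the best precision/recall.
normal = classify(scores, threshold=0.54)

# Crisis threshold: raised so only stronger cases are auto-approved.
crisis = classify(scores, threshold=0.70)

print(normal)  # ['decline', 'decline', 'approve', 'approve', 'approve']
print(crisis)  # ['decline', 'decline', 'decline', 'approve', 'approve']
```

Note that the trained model itself is untouched; only the decision rule applied to its outputs changes, which is exactly why this mitigation is cheap compared to retraining.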

Evaluate Constantly

When implementing models during a crisis, it is very important to learn how your selected model performs in a crisis. Constantly evaluate your model to see whether it is working as per normal or worse, and if worse, by how much. If your organization is mature, you can always go back to your documentation to extract the baseline results. Constantly evaluate and compare the actual results with the baseline results. The difference will help you better manage the cut-offs (mentioned above).
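A sketch of such a baseline comparison (the metric names and values are hypothetical; in practice the baseline would come from your model documentation and the current values from live monitoring):

```python
def performance_drift(baseline, current):
    """Report the change in each metric relative to the documented baseline,
    so large drops can trigger a review of the decision cut-offs."""
    return {name: round(current[name] - baseline[name], 3) for name in baseline}

baseline = {"precision": 0.81, "recall": 0.74, "auc": 0.88}   # from documentation
current = {"precision": 0.73, "recall": 0.69, "auc": 0.80}    # from live monitoring

drift = performance_drift(baseline, current)
print(drift)  # {'precision': -0.08, 'recall': -0.05, 'auc': -0.08}
```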

Collecting More Data

Such “opportunities” are rare so another suggestion is to plan your data collection strategy. See if more data can be collected during such times to better understand consumer behavior, which can reap benefits later on when refreshing (re-train or re-calibrate) the model.

Conclusion

It is quite unfortunate that, after all the hard work put into training a model and getting it ready for implementation, the tides have changed. But we can also see this as an opportunity to learn more about our business, our markets, our business model, consumer behavior and so on. So besides mitigating the risk that the model might be wrong (cut-offs and constant evaluation), we should also strategize to take advantage of the situation (collecting more variables and data points).

I had no idea how to write code two years ago … Now I’m an AI engineer

A career switch, as Elsa said, is a journey "into the unknown".

Two years ago, I graduated from college where I studied Economics and Finance. I was all set for a career in finance. Investment Banking and Global Markets — those were the dream jobs. 9 months before graduation, I snagged a role at an investment bank, feeling proud because it was typically hard to get a role if one hadn’t interned at that bank before.

Months into the job, I picked up some Excel VBA and learnt how to use Tableau, Power BI and UiPath (a Robotic Process Automation software). I realized I was more interested in picking up these tools and learning to code than in learning about banking products. Banking products that once intrigued me with their complexity now seemed merely a way for banks to make profits off clients.

The banking environment is one that challenges my personal values greatly, but that’s a topic for another day.

At the same time, a colleague I worked with opened my eyes to the world of ‘Machine Learning’. I was intrigued by the fact that one could “predict” certain outcomes from certain inputs.

I was instantly hooked.

There was a problem though – my coding skills were elementary. In my dictionary, Python was a type of snake and Pig was well… a pig.

Fast forward two years later, here I am about to enter the AI industry to become an AI engineer. The journey wasn’t easy, nor was it short. To me, a transition to the AI industry is merely the beginning — a new beginning for me to learn and grow. Here’s what my journey was like.

Disclaimer

Everyone’s journey to data science is different. This article is not about “How to break into AI” and should not be viewed as a step-by-step guide. It is a personal anecdote that I hope will inspire people to take that leap of faith to do what they want, because life is too short to be living a life void of meaning.

My Journey

Jumping on the MOOC Hype Train

With an Economics and Finance background, I didn’t know how to code. Excel VBA is pretty close to coding, but that’s about it. Being an overachiever and wanting to jump onto the data science bandwagon, I signed up for a handful of Massive Open Online Courses (MOOCs). Not one, but a handful. Here’s the list of courses that I signed up for:

  • Python BootCamp: Go from zero to hero in Python 3 [Udemy]
  • Python for Data Science and Machine Learning Bootcamp [Udemy]
  • Managing Big Data with MySQL [Coursera]
  • Java Tutorial for Beginners [Udemy]
  • The Web Developer Bootcamp [Udemy]
  • Machine Learning A-Z: Hands-On Python & R in Data Science [Udemy]
  • Deploy Machine Learning & NLP Models with Docker [Udemy]

Needless to say, I didn't complete most of them (only the ones in bold). I was stuck in what I call a self-perpetuating MOOC cycle. The ease of access to knowledge made it natural to go from one course to the next, often without completing the prior course, because the brevity of the topics covered made my interest fleeting.

To me, that was the biggest downside of MOOCs — the brevity of the content covered. Or perhaps my initial expectation that MOOCs could propel me to a career in data science was simply too naive.

To give perspective, a typical MOOC that teaches traditional Machine Learning (ML) methods often skims the surface of what the models actually do. Random forests are taught as an aggregation of decision trees, but how decision trees determine which features are selected at which branch (i.e. concept and mathematics of entropy) will not be covered. Support Vector Machines are simply taught as a classification method, but how the hyperplane is determined will not be covered.

This discrepancy between "What I know" versus "What I need to know" is exemplified when we look at more advanced areas of AI such as Deep Learning. MOOCs that teach Deep Learning often throw you a bunch of TensorFlow code on a nice dataset like MNIST and conclude that you are now a Deep Learning expert (I exaggerate a little, but you get the gist). This is clearly far from reality, as research papers often include complex architectures that require an understanding of feature extraction in deep neural networks, along with more advanced components like transformers and bidirectional encoding. Understanding why some state-of-the-art models work better than others is important as well, alongside concepts like transfer learning and meta-learning.

In my opinion, MOOCs that teach ML often give people the false impression that anyone can be an ML practitioner. To the naive, ML is simply a few lines of code involving .fit() and .predict(), because MOOCs present it in a way that lets one get started with relative ease (and perhaps because monetizing these courses was so lucrative given the massive hype around ML).

Don’t get me wrong — MOOCs are great. They provide a quick and easy way for people to gain knowledge and get started on a certain topic. Will they make you an expert? No. What you do after finishing the course, will determine whether or not you become an expert.

Getting my hands dirty

After completing several MOOCs, I knew I was getting nowhere. Sure, I had some basic skills in Python and knew how to use scikit-learn in a .fit() and .predict() sense, but that was about it.

To brush up my coding skills, I practised on HackerRank, completing questions on SQL and Python. At the same time, I wanted a real-life project where I could use Python. This was when I started working on a bot that could book badminton courts for me. It mainly involved using Selenium to interact with the browser, navigate through the web pages, and eventually cart out and pay for the badminton court (similar to sneaker bots). The motivation was that badminton courts in Singapore are usually fully booked two weeks in advance, and many people camp on the booking sites every day at release time (courts usually get booked within a second or two).

Even though I was somewhat more confident about writing code in Python, I knew nothing about code efficiency. Time and space complexity were completely foreign to me. Object-oriented programming was a concept in my head that never got its moment to shine (not to mention eventually forgotten).

On the ML side, I was a Jupyter notebook expert. I could change the theme of my Jupyter notebook to ‘dark mode’ and use all the keyboard shortcuts. Clearly, I was ready to take on the role of a Data Scientist.

Reality check – I failed interviews miserably. Technical tests involving code got both my feet out the door faster than I could say "data science". A technology analyst role I applied for ended with me being referred to another department, as they felt I would be better suited as a business analyst.

I was nowhere close to where I needed to be.

Dirty but not dirty enough

To gain depth in ML and hone my skills in Python, I decided to take up a Master of IT in Business (with a specialization in AI) at Singapore Management University.

I learnt about the mathematics behind traditional ML models and applied state-of-the-art deep learning architectures to a self-curated dataset. I learnt important AI concepts, including common search algorithms, Q-learning and deep Q-learning. I learnt about algorithm design, including graph algorithms, time and space complexity, name-matching algorithms and many more algorithms that almost literally tore my non-computer-science brain apart. In essence, the course provided me with the academic rigour that MOOCs lacked.

At this point in time, I had a few projects under my belt from my Masters. They weren't full-fledged projects, as the datasets were often provided or obtained from Kaggle, and they often culminated in a Jupyter notebook. Deep Learning models were run on Docker for consistency, but deployment was never in mind. They were, after all, school projects.

In my opinion, a Masters provides the academic rigour that is necessary for AI professionals, but lacks the aspect of real-world application. A Masters course wouldn't tell you what is necessary to land a job in data science — you'd have to figure that out on your own. Software engineering and DevOps skills are often required as part of a Data Scientist's job scope (though not extensively). Collaboration on code is also important in large organizations. As such, knowing how to set up a Docker environment, launch an AWS EC2 instance, host a dataset on Azure blob storage, organize code efficiently and use GitHub or GitLab for version control are some of the crucial skills needed but aren't taught in the classroom.

Try, even when you think you’re not good enough

I continued interviewing, racking up tons of experience in technical and non-technical interviews (though flunking most of them). Interviewing also showed me what I didn't know, and I took the time to pick up those skills. More importantly, it allowed me to understand the different types of job descriptions that companies put up for the same role, and how that corresponded to a company's maturity in AI adoption.

Two years later, I landed a role to train as an AI Engineer. To me, it is a great opportunity to learn and grow in a field I am passionate about. More than that, it is proof that anyone can accomplish what they set out to do, though some may take a longer time than others.

At the end of the day, a career is a marathon, not a sprint. Do what you love, because you will be spending a huge proportion of your life working.

If you’re feeling lost, remember what Elsa said and “do the next right thing”.

Originally published here.

David made it into the AI Apprenticeship Programme. You can read more about preparing for the programme here.

Introducing FinePose for Better Social Distancing

(By Dr Daniel Ng, Edmund Teo, Jway Jin Jun, Kow Yong Sheng, Sun Yikang, Tan Juan Boon)

To support the fight against COVID-19, AI Singapore has developed a social distancing application, FinePose, to encourage individuals to maintain physical distance from each other. While there are other solutions out there that are based on object detection, this initiative is unique as it is an offshoot from AI Singapore’s foray into human pose estimation. A new product for human pose will also be launched within the next few months.

The AI Behind It

Obtaining the distance between individuals involves two main components: human pose estimation and distance projection heuristics.

We used an open source human pose estimation model known as PoseNet to identify key human skeletal points. This allows the application to identify where individuals are located within the video feed. The coordinates of the various skeletal points will then be used to determine the distance between individuals.

To measure the distance between individuals, we have to convert the keypoints in 2D coordinates to keypoints in 3D world coordinates. To achieve this, it is possible to estimate the depth (Z) from the XY coordinates. As such, the following heuristics are applied:

Using the similar-triangle rule of the pinhole camera model, y / f = Y / Z, we are able to compute Z as:

Z = f × Y / y

Where:

Z = depth or distance of scene point from camera

f = focal length of camera

y = y position of image point

Y = y position of scene point

A reference or “ground truth length” is required to obtain the depth if specific calibration for every camera is to be avoided. After numerous experiments, it was decided that the optimal reference length would be the average height of a human torso (height from human hip to center of face). Width was not used as this value has high variance due to the different body angles of an individual while facing the camera.
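To illustrate the heuristic, here is a simplified sketch under the pinhole camera model. The focal length, pixel measurements and the 0.55 m torso reference length are assumed values for illustration only, not FinePose's actual calibration or reference constants.

```python
import math

def estimate_depth(f_px, torso_px, torso_m=0.55):
    """Depth from the similar-triangle relation y / f = Y / Z, i.e. Z = f * Y / y,
    comparing the apparent torso length in pixels against an assumed real length.
    torso_m (0.55 m) is a hypothetical average hip-to-face torso length."""
    return f_px * torso_m / torso_px

def to_world(u_px, v_px, z_m, f_px):
    """Back-project an image point (pixel offsets from the principal point)
    into 3D camera coordinates under the pinhole model."""
    return (u_px * z_m / f_px, v_px * z_m / f_px, z_m)

# Hypothetical scene: a camera with a 1000 px focal length sees two people.
f = 1000.0
person_a = to_world(-200, 0, estimate_depth(f, torso_px=110), f)
person_b = to_world(150, 0, estimate_depth(f, torso_px=125), f)

separation = math.dist(person_a, person_b)
print(round(separation, 2))  # 1.77 (metres)
```

Because the torso reference is an average, individuals far from that average will have their depth over- or under-estimated, which is one reason the note at the end of this article cautions against treating the output as an exact measure.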

Deployment on the Edge

To ensure privacy, the video feed is neither recorded nor stored. The inference is conducted in real-time with all the AI models deployed on the edge. This is unlike most video analytics solutions which usually process the videos on the cloud. One drawback of edge deployment is the use of non-ideal compute resources which may cause poor inference performance. Thus, FinePose was developed to be as lightweight as possible. This allows for flexible deployment across device types. You may access the FinePose mobile application here.

With the increasing number of COVID-19 infections worldwide, we hope that FinePose will be able to contribute to the fight against COVID-19. We believe that the collective actions of everyone will help us overcome this outbreak together.

Note: As this application reflects an approximate measure of physical distance, it should not be used as a substitute for healthcare guidelines.

You may contact the team at products@aisingapore.org.


Train a Viable Model in 45 minutes for AWS DeepRacer Beginner Challenge Virtual Community Race 2020

(By Juv Chan, republished with permission)

AWS DeepRacer is an integrated learning system for users of all levels to learn and explore reinforcement learning and to experiment and build autonomous driving applications. This is the first AWS DeepRacer virtual community race dedicated to AWS DeepRacer beginners. This blog post provides a comprehensive guide and resources to help beginners get started with this community race.

You will get the opportunity to learn and get hands-on with reinforcement learning, Python programming, AWS cloud services and more as you join and compete in the community race. To encourage more active participation from the new racers community, the top 50 racers in this community race will be rewarded with AWS credits. The winner will win $1,000 AWS credits. Refer here for more prize details. Terms and conditions apply.

1. Join the Race

First of all, sign into your AWS account and join the AWS DeepRacer Beginner Challenge virtual community race from the race invitation link below:

https://console.aws.amazon.com/deepracer/home#raceToken/6JgHlXfoSRaEMu1OXW5RZA

If you don’t have an AWS account, visit aws.amazon.com and choose Create an AWS Account. For detailed instructions, see Create and Activate an AWS Account. After you have created an AWS account, see Set Up Account Resources for AWS DeepRacer to set up the required resources for AWS DeepRacer.

If you are using an AWS Educate Starter Account or AWS Educate Classroom, we are working on it and will provide an update in due course.

2. Race Home

When you click on the race invitation link above, you will come to this page as shown below. You will need to create and train a model for the 2019 DeepRacer Championship Cup track before you can submit the model for this challenge.

Until you submit a model that can complete at least one full lap around the track, your lap time will not appear on the race leaderboard.

Click on the AWS DeepRacer link at the top to go to the DeepRacer console home to create and train your model.

3. Get Started with DeepRacer

The AWS DeepRacer console is the platform to get started with to create, train, evaluate and submit your models for the community race. Note that AWS DeepRacer is only available in the US East (N. Virginia) region (us-east-1) currently. Click on Get Started to get started with reinforcement learning.

4. Get Started with Reinforcement Learning

Reinforcement Learning (RL) is the core which powers the autonomous driving, object avoidance as well as head-to-head racing in AWS DeepRacer.

Step 1: Learn the basics of reinforcement learning

Click on Start learning RL to open a new page in a new browser tab which provides a detailed, interactive and animated way of learning about the RL basics, how it applies to AWS DeepRacer, RL models training as well as the parameters of reward functions.

This is an essential guide for anyone who is new to RL and RL in AWS DeepRacer, and useful for anyone who wishes to revise and reinforce the concepts as well.

Step 2: Create a model and race (Required)

Click on Create Model to begin your journey to build, train and evaluate models for submission to the community race.

Step 3: Learn about sensors and new types of racing

You can learn about the different types of sensors and the new types of racing. For this community race, the racing type is Time Trial and the sensor configuration of Front-facing single-lens camera only is sufficient.

5. Create and Train Your First Model in 45 Minutes

Step 1: Specify the model name and environment

Under Account Resources, make sure you have both valid IAM roles and valid AWS DeepRacer resources stack.

Under Training details, you’re required to name your model.

It is recommended you provide a model name that is distinctive and meaningful to avoid confusion when you submit the model to a race. For example, model name = reInvent-2019-45mins-v1

It is also recommended to provide a description for each model you create, to note important information or details about the model.

Under Environment simulation, choose The 2019 DeepRacer Championship Cup track to train your model on for this community race. Click Next at the bottom of the page to proceed.

Step 2: Choose training type and agent

Choose Time trial for the race type as the other race types are not supported for this community race.

For the Agent, choose the Original DeepRacer, which is the default vehicle in your AWS DeepRacer garage to get started with. You cannot modify the configuration for this default vehicle.

When you have become more familiar and want to try more advanced features, you can go to the AWS DeepRacer Garage to build a new vehicle that is customized to your needs and preferences on Sensor(s), Neural Network topologies, Action space, Vehicle trim and name.

Step 3: Customize reward function and training algorithm

You will need to write a reward function in Python 3.x to define the reward strategy that guides the vehicle along the track to the destination as quickly as possible for the time trial race type.

To build and train a minimum viable model for this community race in 45 minutes, the proposed reward strategies are a combination of Follow the Center Line, Stay Inside the Two Borders and Prevent Zig-Zag.

You can find the example code for the three separate strategies above at the DeepRacer Reward Function Examples link below:

https://docs.aws.amazon.com/deepracer/latest/developerguide/deepracer-reward-function-examples.html

Combine the example codes from the three examples above into a single reward function as shown below:

def reward_function(params):
    '''
    Example of rewarding the agent to follow the center line,
    stay inside the two borders and prevent zig-zag
    '''

    # Read input parameters
    all_wheels_on_track = params['all_wheels_on_track']
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    steering = abs(params['steering_angle'])  # Only need the absolute steering angle

    # Calculate 3 markers that are at varying distances away from the center line
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    # Give a very low reward by default
    reward = 1e-3

    # Give a high reward if no wheels go off the track and
    # the agent is somewhere in between the track borders
    if all_wheels_on_track and (0.5 * track_width - distance_from_center) >= 0.05:
        reward = 1.0

    # Give a higher reward if the car is closer to the center line and vice versa
    if distance_from_center <= marker_1:
        reward = 1.0
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # likely crashed / close to off track

    # Steering penalty threshold; change the number based on your action space setting
    ABS_STEERING_THRESHOLD = 15

    # Penalize the reward if the agent is steering too much
    if steering > ABS_STEERING_THRESHOLD:
        reward *= 0.8

    return float(reward)

Set the maximum training time to 45 minutes for your first model, with the recommended reward function, training algorithm and hyperparameters as above.

Click Create Model to start the model training process. During training, you can monitor the reward graph and the simulated video stream of the training progress. They are important training metrics that can give you a high-level overview of whether your reward function is working in the expected direction.

6. Evaluate Your Model

You can evaluate your model after the training is completed. It is also important to review your reward graph before starting the model evaluation.

For Average Reward, ideally it should increase over iterations until it reaches a point where it remains steady or fluctuates only slightly, which is an indicator that the model might have converged.

For Average percentage completion (Training & Evaluation), ideally they should also increase over iterations until they reach 100% and remain the same thereafter.

To begin evaluating your model, click Start new evaluation.

Choose The 2019 DeepRacer Championship Cup track, Time trial race type and 5 trials for number of trials.

Remember, do not check on any checkbox for the Virtual race submission.

Click Start evaluation to start. The evaluation process could take some time to complete depending on how fast your model runs around the track during evaluation.

After the evaluation is completed, if the number of successful trials is at least 3, you can submit the model to the community race for evaluations.

For the sample model built in this guide, it is able to complete all 5 evaluation trials successfully as shown below:

7. Submit Your Model

Once your model evaluation is completed and you are satisfied with the evaluation results, you are ready to submit your model to the Beginner Challenge Community Race for evaluation and entry into the race’s leaderboard.

Go to the race’s invitation link as below, choose the model which you have trained for this community race and click Submit model.

https://console.aws.amazon.com/deepracer/home#raceToken/6JgHlXfoSRaEMu1OXW5RZA

After you have submitted your model, you will go to the Beginner Challenge community race home. In the status section at the left, you should see the status change from Awaiting evaluation to Under evaluation. The evaluation process here should take a couple of minutes or more to complete.

When the evaluation is completed, you should see the status become Completed 1 lap and your rank and lap time should appear in the race’s leaderboard at the right. You can click on the Watch video link below Status to watch the video of your vehicle running around the race track based on your trained model.

If your model evaluation is successful, you should see your own racer name appear in the community race leaderboard as shown below.

If your model submission is unsuccessful as shown below, do review the evaluation logs for further root cause analysis and troubleshooting. If you are still having the same issue, feel free to reach out to the AWS Machine Learning and DeepRacer community. Refer to section 9: Ask the Community in the later part of this guide for more details.

8. Cost Management

Please refer to the AWS DeepRacer Pricing page below to understand its Free Tier and pricing examples.

https://aws.amazon.com/deepracer/pricing/ (see Notes below for further information)

Alternatively, you can also look into the Local Training option to set up AWS DeepRacer training and simulation on your local machine or on any cloud virtual machine instance, e.g. Amazon EC2, which can lower your cost on AWS DeepRacer resources. Kindly note that the AWS DeepRacer local training project is contributed by the AWS DeepRacer/Machine Learning community.

9. Ask the Community

We welcome and encourage you to ask questions about this community race, AWS DeepRacer as well as AWS Machine Learning on the AWS Machine Learning Slack Community. There are many AWS DeepRacer and Machine Learning experts from the community or AWS who are passionate about sharing their knowledge and experience with other community members.

Please join the AWS Machine Learning Slack Community from this link and post your relevant questions on the relevant channels (e.g. #aws-deepracer, #dr-local-training-discussion) there.

There are other useful and public resources relevant to AWS DeepRacer and machine learning which you can find in the sections below.

Lastly, happy learning, racing and sharing! #AWSDeepRacer

10. Free Digital Training & Useful Resources

11. GitHub Repositories

Notes

For a limited time until May 31, 2020, you can train and evaluate your AWS DeepRacer models for free! (Link)

This article is a reproduction of the original by Juv Chan. Juv is the AWS DeepRacer League Singapore 2019 winner.

Top image from AWSonAir

Tempcheck – A Temperature Declaration and Tracking service

To support the Government's effort to encourage citizens to check their temperature regularly, AI Singapore has also developed an all-in-one temperature declaration and tracking mobile app to help organisations track their staff's temperatures.

Called “Tempcheck”, the app allows staff to record their temperature and location (which is important for LOA and SHN tracking purposes) easily with their mobile phone and a thermometer.

Once sufficient numbers of people and organisations use Tempcheck, the data captured by the app may eventually be used to train an AI model to predict potential outbreak clusters.

To find out more about Tempcheck, click here.

AI Singapore to Boost Digital Skills Training During the COVID-19 Pandemic

5 May 2020 – During this time of economic uncertainty, AI Singapore (AISG) is introducing a series of initiatives to encourage employees and companies to upgrade their digital skills. This will prepare them to seize opportunities in the digital economy once the pandemic is over.

First, AISG will offer up to 2,800 individuals from companies affected by the COVID-19 situation the opportunity to upskill, reskill and deep-skill in digital technology. This will be done by expanding the intake for two of its existing programmes – AI for Industry (AI4I)TM and the AI Apprenticeship Programme (AIAP)TM. AISG has also recently introduced a new programme in data curation for AI solutions, the AI Data Apprenticeship Programme (AIDP). Finally, AISG will provide additional incentives for companies to adopt AI to increase their competitiveness.

“The COVID-19 pandemic presents us with an unprecedented challenge as a nation. AI Singapore is doing its part to get us through this challenge by offering development and growth opportunities to individuals and companies. We invite Singaporeans to participate in the three education and apprenticeship programmes we are offering (AI4ITM, AIAPTM and AIDP). We will also support companies that want to invest in and adopt innovative AI solutions to better prepare themselves for a post COVID-19 world,” said Professor Ho Teck Hua, Executive Chairman of AI Singapore.

The AIAPTM and AIDP trainings are supported by Workforce Singapore’s (WSG) SGUnited Traineeships Programme.

AISG’s COVID-19 initiatives will start on 1 May 2020 and will end on 31 December 2020.

Training opportunities for 2,800 Singaporean/SPR professionals  

The manpower training programmes under AISG’s COVID-19 initiatives are:

  • AI4ITM: This is a fully online programme to help learners gain proficiency in Python and be able to program basic AI and data applications. AISG will train 2,500 Singapore Citizens and Singapore Permanent Residents (SPRs) at a special course fee of S$50 during this period. This rate is extended to the first 2,500 eligible applicants only. For more information, please visit https://aisingapore.org/ai4i.

  • AIAPTM: This is a 9-month, full-time, deep-skilling programme to groom local AI talent. Apprentices will work on end-to-end, real-life, industry projects, from scoping problem statements to deploying trained AI models. The programme comprises 2 months of AI coursework on leading AI techniques and applications, followed by 7 months of on-the-job training on an industry AI project. There is no application fee, and AIAPTM apprentices will receive a monthly stipend. Over the next year, AISG will train 200 Singaporeans, whether individuals or employees of companies affected by the COVID-19 pandemic. For more information, please visit https://aisingapore.org/aiap/.
  • AIDP: This is a new on-the-job training programme in data curation for AI solutions. Designed for diploma holders who are Singaporeans, AIDP is a 6-month, full-time programme which comprises 2 months of coursework on data curation and basic data engineering techniques and 4 months of hands-on training on a real data project. AISG will train 100 individuals or employees of companies affected by the COVID-19 pandemic. For more information, please visit: http://aisingapore.org/aidp/.

Incentives for local companies to adopt AI technologies

As part of its COVID-19 initiatives, AISG will enhance its support for local companies embarking on the AISG 100 Experiments – Industry Track (100E4I) programme. 100E4I is a 9-month programme which helps local companies translate and adopt AI technology to solve business challenges, upskill employees, and improve productivity during these uncertain times.

AISG typically provides in-kind support of up to S$180,000 through the AI engineering team within AISG, which works on the company's AI problem statement, and the company is required to match this with 30% in cash and 70% in kind. Under the COVID-19 initiative, the cash contribution from local companies is reduced to 10%.

100E4I is part of AISG’s flagship programme, 100 Experiments (100E), which helps companies solve their pain points through AI solutions and empowers them to build their own AI teams. An organisation can propose as a 100E project, a problem for which no commercial off-the-shelf AI solution exists, but which may be solved by AISG’s research and engineering team within 9 to 18 months.

“AI has the potential to transform our economy and improve lives. We need to invest in learning, developing and investing in skills and technologies, to cope with the uncertainties of today and the future. Singaporeans and local companies should leverage the available training programmes and incentives offered to upgrade their digital skills and adopt AI. This will help businesses and workers seize new opportunities that AI provides, and be better equipped and prepared to harness the benefits of AI as we transit to the new normal, post COVID-19,” said Mr Tan Kok Yam, Deputy Secretary, Smart Nation and Digital Government and Co-Chairman, AI Singapore Management Committee.

More details on how to apply are available at https://aisingapore.org/covid-19-initiatives/.

updated on 13 May 2020
