
TagUI v6 User Experience Release

Announcing the release of the latest version of TagUI!

New Features

  • Deploy double-clickable shortcuts easily using -deploy / -d option #692
  • Run flow options with one letter, e.g. -h instead of -headless
  • Click on text using OCR, like this: click your text here using ocr #702 / #736
  • Use indentation instead of {} for code blocks #746
  • Access live mode directly from the command line with tagui live
  • Use the exist() function to check whether a given element exists #651

Breaking Changes

  • Makes chrome the new default browser #691
  • Makes visible chrome browser mode the default mode
  • .tag extension is now mandatory
  • echo, dump, write and check steps no longer use quotes for strings, consistent with other steps #693
  • All run options must be run with a leading dash -, like -headless
  • Run flows that don’t use the browser with -nobrowser / -n option #715

Bug Fixes

  • Fixes “did not close properly” error in chrome #690 / #699 / #735

Other Changes

  • Log files are no longer saved by default, as this was often unwanted. To enable logging, create an empty tagui_logging file in the tagui/src folder.
  • Documentation overhaul and migration to readthedocs
  • New logo
  • Hindi keywords updated

Deprecated Features

  • R integration #745
  • Upload run option #753

This release introduces many changes to improve the user experience when writing, running and deploying TagUI flows. It also brings a documentation overhaul and migration to readthedocs.

The release notes on GitHub are here.

TagUI is a Brick (pre-built solutions) from AI Makerspace, a platform offered by AISG to help SMEs and start-ups accelerate the adoption of AI in Singapore.

A Better Visualization of L1 and L2 Regularization

Here’s an intuitive explanation of why L1 regularization shrinks weights to 0.

Regularization is a popular method to prevent models from overfitting. The idea is simple: I want to keep my model weights small, so I will add a penalty for having large weights. The two most common methods are Lasso (L1) regularization and Ridge (L2) regularization. They penalize the model by either the absolute value of its weights (L1) or the square of its weights (L2). This raises two questions: which one should I choose, and why does Lasso perform feature selection?

The Old Way

You often hear the saying “L1 regularization tends to shrink the coefficients of unimportant features to 0, but L2 does not” in the best explanations of regularization, such as here and here. Visual explanations usually consist of diagrams like this very popular picture from Elements of Statistical Learning by Hastie, Tibshirani, and Friedman:

visual explanation of regularization

also seen here in Pattern Recognition and Machine Learning by Bishop:

another visual explanation of regularization

I have found these diagrams unintuitive, and so made a simpler one that feels much easier to understand.

The New Way

Here’s my take, step by step with visualizations. First of all, the images above are actually 3-dimensional, which does not translate well onto a book page or screen. Instead, let us get back to basics with a linear dataset.

First, we create a really simple dataset with just one weight: y=w*x, with the true weight w=0.5. Our linear model will try to learn the weight w.

original equation

Pretending we do not know the correct value of w, we randomly select values of w. We then calculate the loss (mean squared error) for various values of w. The loss is 0 at w=0.5, which is the correct value of w as we defined earlier. As we move further away from w=0.5, the loss increases.

unregularized loss

Now we plot our regularization loss functions. L1 loss is 0 when w is 0, and increases linearly as you move away from w=0. L2 loss increases quadratically as you move away from w=0.

L1 loss
L2 loss

Now the fun part. Regularized loss is calculated by adding your loss term to your regularization term. Doing this for each of our losses above gets us the blue (L1 regularized losses) and red (L2 regularized losses) curves below.

regularized loss

In the case of L1 regularized loss (blue line), the value of w that minimizes the loss is at w=0. For L2 regularized loss (red line), the value of w that minimizes the loss is lower than the actual value (which is 0.5), but does not quite hit 0.

There you have it: for the same value of lambda, L1 regularization has shrunk the feature weight down to 0!
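The comparison above can be reproduced numerically. Below is a minimal Python sketch; the x values, the true weight of 0.5, and the lambda of 8 are illustrative choices (lambda is picked large enough for the L1 penalty to pull the weight all the way to 0):

```python
# Sketch: compare L1- and L2-regularized losses for a one-weight model y = w * x.
# Data generated with the true weight 0.5; x values and lambda are illustrative.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [0.5 * x for x in xs]  # true weight w = 0.5
lam = 8.0                   # regularization strength

def mse(w):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Evaluate each regularized loss on a grid of candidate weights.
grid = [i / 1000 for i in range(-1000, 2001)]  # w from -1.0 to 2.0
w_l1 = min(grid, key=lambda w: mse(w) + lam * abs(w))
w_l2 = min(grid, key=lambda w: mse(w) + lam * w * w)

print(w_l1)  # 0.0   -- the L1 minimizer sits exactly at 0
print(w_l2)  # 0.242 -- the L2 minimizer is shrunk below 0.5 but not to 0
```

With a smaller lambda the L1 minimizer would stay positive too; the zeroing kicks in once the penalty outweighs the loss gradient at 0.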

Another way of thinking about this is in the context of using gradient descent to minimize the loss function. We can follow the gradient of the loss function to the point where loss is minimized. Regularization then adds a gradient to the gradient of the unregularized loss. L1 regularization adds a fixed gradient to the loss at every value other than 0, while the gradient added by L2 regularization decreases as we approach 0. Therefore, at values of w that are very close to 0, gradient descent with L1 regularization continues to push w towards 0, while the push from L2 regularization weakens the closer w is to 0.
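This gradient picture can also be checked with an optimization loop. Plain subgradient descent would oscillate around 0 rather than land on it, so the sketch below uses the standard soft-thresholding (proximal) update for the L1 term; the gradient 15·(w − 0.5) is the MSE gradient for the one-weight example above, and the step size, lambda, and iteration count are illustrative:

```python
# Sketch: gradient descent on the one-weight example, L1 vs L2 penalty.
# For data y = 0.5 * x with x in {1, 2, 3, 4}, d/dw of the MSE is 15 * (w - 0.5).

def grad_mse(w):
    return 15.0 * (w - 0.5)

def soft_threshold(v, t):
    # Proximal step for the L1 term: shrink v toward 0, clamping to 0 within [-t, t].
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

lam, lr = 8.0, 0.01
w_l1 = w_l2 = 1.0
for _ in range(2000):
    # L1: constant-strength pull toward 0, applied via soft-thresholding.
    w_l1 = soft_threshold(w_l1 - lr * grad_mse(w_l1), lr * lam)
    # L2: the penalty gradient 2 * lam * w fades as w approaches 0.
    w_l2 = w_l2 - lr * (grad_mse(w_l2) + 2.0 * lam * w_l2)

print(w_l1)            # exactly 0.0: L1 keeps pushing until the weight hits 0
print(round(w_l2, 3))  # 0.242: L2 shrinks the weight but never zeroes it
```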

This article was originally published here.

Related Story

Automating Body Temperature Collection in the Battle Against COVID-19

Since the Ministry of Health raised the DORSCON level from yellow to orange on 7 February, 2020, many organisations in Singapore have put in motion their business continuity plans. The National University of Singapore (NUS) is no exception. One of the measures in place requires all staff and students to take their temperature twice daily (morning and afternoon) and declare them in an online system.

As an office hosted in NUS, AI Singapore (AISG) has also complied with this measure to ensure the safety and well-being of the university community.

The online system consists of a login page which brings the user to another page where she keys in her recorded temperature, choosing the AM or PM time slot as appropriate.

In addition to this mandated data capture, AISG also found it useful to maintain a department level sheet which shows at a glance all the temperatures recorded by its personnel. This sheet can be customised as needed to show additional information like temperature ranges.

In order to reduce the tedium of getting into the system as well as doing double data entry, our RPA (robotic process automation) team came up with a TagUI workflow that does just that.

Now, all the user needs to do is maintain a small text file containing her name and password for the tool to access the system. When run, the tool will prompt her to provide her temperature at the start and thereafter run on its own to feed the data into both the NUS system as well as the department sheet.

The video below shows the tool in action.

If you are interested in the workflow code, you can view it below.

ask Enter your temperature in celsius:
user_temp = ask_result

js begin
date = new Date()
// Build a d/m/yyyy date string (getMonth() is zero-based, so add 1)
datestring = String(date.getDate()) + '/' + String(date.getMonth() + 1) + '/' + String(date.getFullYear())
if (date.getHours() > 11) {
    amOrPm = 'P'
} else {
    amOrPm = 'A'
}
// Offset to the correct column in the department sheet: two columns per day
offsetToRight = 2 * (date.getDate() - 10)
if (date.getHours() > 11) {
    pmOffset = 1
} else {
    pmOffset = 0
}
offsetToRight = offsetToRight + pmOffset
js finish

if exist('UserName')
    type UserName as nusstf\\`username`
    type Password as `password`[enter]
select tempDeclOn as `datestring`
select declFrequency as `amOrPm`
type temperature as `user_temp`
click Save

keyboard [win]r
wait 2
keyboard chrome[enter]
wait 2
keyboard [win][up]
wait 0.2
keyboard [ctrl][l]
keyboard [ctrl][v][enter]
wait 15
keyboard [ctrl]f
wait 0.5
keyboard `name`[enter]
wait 0.5
keyboard [esc]
keyboard [ctrl][right]
keyboard [right]
keyboard `user_temp`[enter]
wait 3

TagUI can be used to automate tasks like this which are mundane, repetitive and performed on a large scale. Imagine saving 3 minutes daily per person in an organisation of 10,000. That translates to 10,000 man-hours a month!
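The arithmetic behind that estimate, assuming roughly 20 working days a month:

```python
# Sketch of the time-savings estimate: 3 minutes saved per person per day,
# for 10,000 people, over an assumed 20 working days a month.
minutes_per_month = 3 * 10_000 * 20
hours_per_month = minutes_per_month / 60
print(hours_per_month)  # 10000.0 man-hours a month
```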


Can Your Problems Be Solved by AI?

Two simple questions to determine whether your problems should and could be solved by AI

I am an AI Consultant at AI Singapore. I help teams to undertake the development and implementation of AI models within their organisations.

One of the questions frequently asked by clients is ‘can my problem be solved by AI?’. To answer that, I suggest asking two questions to assess the possibility of using AI.

Question 1: Can the problem reasonably be solved by IF and ELSE statements?

The first question serves as a filter to avoid using a chainsaw to cut butter.

It is unnecessary to use AI if the problem is solvable by IF and ELSE statements (i.e. a rules-based system). A rules-based system is more explainable, and cheaper and faster to implement.

AI excels when the problem is too complex to be handled by rules-based systems. For instance, it is impossible to write rules to identify dogs in pictures. AI could look through thousands of dog images to learn how to identify dogs without any explicit rules being written.

Note the ‘reasonably’ in the question. AI could be considered too if the solution requires thousands of IF and ELSE statements, as such a complex rules-based system will eventually become hard to maintain and update.
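As a toy illustration of that trade-off, a rules-based classifier is perfectly fine while the rules stay few and explicit; the spam-filter rules below are invented for this sketch:

```python
# Sketch: a tiny rules-based classifier. A handful of IF/ELSE rules like these
# is explainable and cheap; the approach breaks down when the rules would
# number in the thousands (or, as with recognising dogs in photos, cannot be
# written at all). The rules themselves are made up for illustration.

def is_spam_rules(subject):
    subject = subject.lower()
    if "free money" in subject:
        return True
    if subject.count("!") > 3:
        return True
    return False

print(is_spam_rules("FREE MONEY inside!"))  # True
print(is_spam_rules("Meeting at 3pm"))      # False
```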

Question 2: If you pass the same dataset to an industry expert, could the expert make reasonably accurate predictions?

The second question serves as a sanity check on whether the dataset contains relevant information needed to train an AI model.

If the use case is to predict hospitalisation of kidney patients, the AI model won’t be able to learn if the dataset contains only the patients’ favourite colours. Sounds like simple common sense? Not really.

“It is a capital mistake to theorize before one has data.” — Sherlock Holmes

Datasets can have hundreds of columns, and sometimes it is hard to tell if a dataset truly contains relevant information for the intended use case. Companies would like to believe their datasets are comprehensive, but it often turns out otherwise.

The best way to determine whether a dataset contains relevant information?

  1. Pass the same dataset to industry experts and check if they could make reasonably accurate predictions.
  2. Ask the industry experts their thought process in making their predictions.

There could be relevant information in the dataset if the industry experts perform well. However, industry experts have prior background and tacit knowledge which might not be represented in the dataset, and the AI model won’t be able to learn from knowledge that is not in the data.

For instance, if the use case is to predict which companies are likely to default on payments, the industry experts might already know which are the infamous customers that frequently default on payment. They will probably perform well even if they are just given a list of companies’ names, but the AI model won’t have access to the same tacit and background knowledge.
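The same sanity check can be mimicked with synthetic data. In the sketch below the feature names, the threshold rule, and the labels are all invented for illustration: a prediction rule based on the feature that carries the signal succeeds, while the best guess available from an irrelevant feature does no better than the majority class:

```python
# Sketch: a predictor can only be as good as the information in its features.
# Synthetic data: default is driven entirely by debt ratio; favourite colour is noise.
import random

random.seed(0)
rows = [{"debt_ratio": random.random(), "colour": random.choice("RGB")}
        for _ in range(1000)]
labels = [r["debt_ratio"] > 0.7 for r in rows]  # ground truth uses debt ratio only

def accuracy(predict):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)

# A rule an "expert" could state from the relevant feature:
acc_relevant = accuracy(lambda r: r["debt_ratio"] > 0.7)
# The best constant guess when only the irrelevant feature is available:
acc_irrelevant = accuracy(lambda r: False)

print(acc_relevant)    # 1.0: the signal is in the data
print(acc_irrelevant)  # roughly 0.7: no better than the majority class
```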

Determine the right approach, then ensure relevant data is available

Can your problems be solved by AI? Think about using AI only when simple IF and ELSE statements cannot help, then ensure all relevant information is represented in the dataset.

Lex Fridman Interview – Pieter Abbeel

For those of you who are in the field of Artificial Intelligence, you would have come across Lex Fridman from MIT. He has a series of videos where he interviews many prominent figures in the field of Artificial Intelligence. These interviews provide many nuggets of information, for instance, research directions, constraints and concerns in the AI field. If you have not started digesting it, here is the YouTube playlist to go through.

Pieter Abbeel is a professor at UC Berkeley, director of Berkeley Robot Learning Lab, and is one of the top researchers in the world working on how to make robots understand and interact with the world around them, especially through imitation and deep reinforcement learning. (Description taken from Lex Fridman site.)

This talk, although short, contains many nuggets on Robotics. The video started off with an interesting question.

How long does it take to build a robot that can play as well as Roger Federer?

Immediately, Prof Pieter was thinking about a bi-pedal robot, and one of the questions that came to his mind was, “Is the robot going to play on a clay or grass court?”. For those not familiar with tennis, there is a difference between the two: clay allows sliding while grass does not.

Lesson 1: We Bring Assumptions into Our Thought Process

The assumption made was that the robot would be bi-pedal, until Lex pointed out that it need not be so. It just needs to be a machine. This showed me that when we are thinking about how to solve problems, we might unknowingly bring in certain assumptions. To solve the challenge effectively, it might be worthwhile to take a step back and check our assumptions.

Lesson 2: Signal-to-Noise Training

I found it interesting that Prof Pieter, when looking at how to train a robot to solve a particular problem, approached it from a signal-to-noise point of view. What that means is: how can one send as much signal as possible to the robot, so that it can learn to perform a task better and faster? Consider the autonomous driving problem, for instance. Is it better to have the robot drive and learn at the same time (through reinforcement learning), or observe how a human drives and, through observation, pick up the necessary rules of driving? Or can simulation be used to train the robot to a certain level and then move the learning over to the actual environment?

Such a thought process tells me that Artificial General Intelligence (AGI) is still a distance away because human design/decision is still needed to ensure that our AI learns the correct behavior and in an efficient way.

Lesson 3: Building Suitable Level of Knowledge Abstraction

There was a discussion on building reasoning methods into AI so that it can learn about the existing world better. I am on the same page here. In my opinion, what is stopping our current development from moving AI to AGI is the knowledge representation of the world: how we can represent the world in terms of symbols and various abstraction levels, and teach the AI to move through these different abstraction levels so as to carry out the necessary reasoning.

For instance, when do we need to know that an apple is a fruit and when do we need to know that an apple might not be a fruit but a provider of vitamins and, continuing, this apple provides Vitamin A which is an antioxidant? How do we move through the different entity/label and their representation so that we can build a smarter AI?

I am very interested to understand knowledge representation/abstraction and how we can build it into our Artificial Intelligence but let us see if there is a suitable opportunity to pursue this research direction. 🙂

Lesson 4: Successful Reinforcement Learning is about Designing

Can we build kindness into our robots?

That was, I believe, the last question asked and Pieter mentioned that it is possible, which I concur. What we need is to build “acts of kindness” into our objective function and ensure that we send back the right signal/feedback to ensure these “acts” stay with the robot.

We have come very far when it comes to Reinforcement Learning, given the developments on the deep learning front. But I feel that, at the end of the day, what makes a reinforcement learning agent perform to specification will greatly depend on the AI scientist: how they design the objective function, how fast feedback can be sent to the agent, how the agent understands the signals, and more. Designing the agent's behavior and environment is an iterative and experimental process. There is only a very small chance that we get it right on the first try, so be prepared to work on it iteratively.

If you are interested to discuss this article or have any AI related questions, feel free to link up with me on LinkedIn.
