
AI Singapore’s Open Source AI Bricks: a 20-Year Personal Journey

Over the past two weeks, AI Singapore announced the release of two free and open-source AI Bricks:

  1. PeekingDuck – a computer vision (CV) framework for inference that allows ANYONE with basic scripting and CV skills to quickly build up an end-to-end CV pipeline in as little as 4 lines of code. See PeekingDuck: A Computer Vision Framework | AI Singapore
  2. SG-NLP – a series of Natural Language Processing (NLP) tools based on research done by Singapore NLP research groups from our institutes of higher learning (IHL) and research institutes (RI). See SG-NLP is Launched! | AI Singapore

Both are available under the business-friendly open-source Apache license, and can be downloaded from GitHub or installed with a simple Python pip install command.
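For instance, assuming the package names as published on PyPI (sgnlp is confirmed in the SG-NLP section below, and peekingduck follows the same convention), installation is a one-liner each:

    pip install peekingduck
    pip install sgnlp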

This is in addition to AI Singapore’s earlier release, the open-source RPA tool TagUI (TagUI | AI Singapore) – the world’s most popular free and open-source RPA tool, with more than 80,000 downloads worldwide, users in more than 100 countries and 4,000+ stars on GitHub.

Other tools and frameworks on our roadmap include Federated Learning and AI/MLOps, which will be released in the next few months.

While freely available open-source tools are not new, what is new is that these are the first open-source tools funded by the Singapore government – through AI Singapore’s AI Makerspace initiative – that target industry adoption, instead of just academic publication and research.

These tools were built on our engagement with more than 500 organizations in Singapore over the last 4 years, working on more than 75 real-world AI projects through the 100 Experiments programme and deploying more than 30 of them to date. We distilled the common requests and frequently required features, and looked for ways to automate and simplify them, so that we can help our SMEs in particular accelerate their adoption of AI.

This has been a personal two-decade-long journey to get the Singapore government to adopt open source – not just as a consumer, but as a producer of open-source tools, not just for Singapore industry but for the world.

As a Singaporean entrepreneur, whether I was working in a large local enterprise, a small-medium company or my own startup, I wanted to make use of the research generated by our Singapore researchers. The research was paid for by Singaporean tax dollars, and I believed we should have easy access to it, instead of having to pay again through complicated license agreements and fees.

I remember having multiple conversations with IHLs and RIs over the last twenty years to have them open-source their tools and research and make them readily available to our SMEs and start-ups, but often the response was that it was not their KPI, and that they needed to execute a license agreement, charge a fee and keep a record of who was using their tools and research.

Do not get me wrong: I wholly believe in sharing the upside with the researchers, IHLs and RIs if a venture making use of their research output generates handsome rewards. This could be through revenue share, royalties and the like – but after I have had the opportunity to use and deploy the research in the global market, sell and generate revenue, not before I have even started. There are several examples of such licenses, which basically say: “please go ahead and use our research, code and data, but if you make more than $500,000, you will have to sign a commercial agreement with us and pay us $50,000 per year”.

The complex license agreements and sometimes high (by start-up standards) upfront license fees made using our Singapore research difficult. Even more challenging was that most of the research was hidden behind a proprietary license, where only a compiled binary was provided with no access to the source code. This meant my engineering team could not enhance the code or fix any bugs; we needed to go back to the original researcher in the IHL or RI.

It did not help that these IHLs and RIs did not and could not offer any service level agreement (SLA) on when the code could be fixed. So if I licensed their research and implemented their code in a customer’s system, and things broke, there was no way I could get a guaranteed fix in a reasonable amount of time. It was “best effort, depending on our researcher’s availability”.

Fast forward to 2016, when I was offered a role in AI Singapore, one of the things I recommended and asked for was the mandate to open-source the tools we built and share them freely and openly with the industry under a license like the Apache license. I was fortunate to have strong support from the National Research Foundation (NRF) team, as they wanted to explore more innovative means to get our industry to adopt NRF funded research.

TagUI, PeekingDuck and SG-NLP are just the beginning. We have a talented and dedicated AI Engineering team in AI Singapore today. Many of them came through our AI Apprenticeship Programme (AIAP) – another innovation we created in AI Singapore to groom the next generation of Singaporeans in AI and open-source development.

We are planning more tools developed by AI Singapore’s AI Engineering team. We will also be launching new programmes and initiatives to raise the standard of AI and cloud computing and to encourage more open-source development in the local community. Watch this space.


SG-NLP is Launched!

Bridging NLP research and industry in Singapore

The boundaries of Natural Language Processing (NLP) are constantly being pushed by researchers. However, the code for each new model is usually written in a different style and structure. Industry and NLP practitioners usually have to spend time converting the codebase into a more consistent and maintainable style. 

NLP Hub’s SG-NLP initiative aims to bridge this gap between the industry and Singapore-based research groups to accelerate the growth of translational research in NLP. SG-NLP has two main components – an interactive demo website and a Python package.


SG-NLP Demo Website

The demo website showcases the various models and techniques developed and published by NLP groups in Singapore, providing a catalogue of recently published NLP models.


Each model has an interactive demo page where visitors can test out the model with various examples. Results from the model are visualised in easy-to-understand formats. To illustrate this, the results from the default example for the relation extraction model are shown in the following images.


Our demo website aims to help non-technical decision makers visualise the model’s outputs and assess its performance. The demos also help technical professionals communicate and illustrate what their proposed models are supposed to do. 


SG-NLP Python Package

Once a team has decided which model they would like to use, they can make use of the second component of SG-NLP – the sgnlp Python package. You can install sgnlp with pip install sgnlp or refer to our Quickstart guide to get started. Influenced by HuggingFace’s transformers library, sgnlp allows users to access state-of-the-art NLP models with a few lines of code.
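As a minimal sketch of that workflow – the module and class names below are illustrative placeholders, not a specific sgnlp model; see the Quickstart guide for the real imports – the HuggingFace-style pattern looks like this:

    # Illustrative sketch only: "some_model" and the SomeModel* classes are
    # hypothetical placeholders; each sgnlp model ships its own Config,
    # Tokenizer and Model classes (see the sgnlp documentation).
    from sgnlp.models.some_model import (
        SomeModelConfig,
        SomeModelModel,
        SomeModelTokenizer,
    )

    config = SomeModelConfig.from_pretrained("path/or/url/to/config.json")
    tokenizer = SomeModelTokenizer.from_pretrained("path/or/url/to/tokenizer")
    model = SomeModelModel.from_pretrained("path/or/url/to/weights", config=config)

    inputs = tokenizer("SG-NLP bridges research and industry.", return_tensors="pt")
    outputs = model(**inputs)  # task-specific predictions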

Both the SG-NLP demo website and the sgnlp Python package have been launched! The SG-NLP team is also looking forward to engaging the community to help add even more models to this list. If you are interested in contributing to this effort or have any questions, please feel free to contact us or join our Community.

Update : We hosted a Zoom webinar on Sep 2 to share and answer questions on SG-NLP. You can view the recording below.


AI Summer School returns with a focus on AI in Fake Media and Healthcare

AI Singapore’s AI Technology Summer School (AITSS) 2021 introduced a refreshing lineup of guest speakers and experts from academia and industry, coupled with exciting competitions and workshops. Taking place from 2 to 6 August, more than 140 participants from 20 countries gathered virtually to explore the latest AI innovations, applications, and real-world use cases of AI in domains such as fake media, healthcare, e-commerce and social good.

Fake Media and Healthcare
This year’s spotlight was on AI in Fake Media and Healthcare – two major areas where AI applications are crucial in tackling increasingly complex challenges.

On Fake Media, A/Prof Chen Change Loy (NTU), A/Prof Terrence Sim (NUS) and A/Prof Liu Ziwei (NTU) delivered lectures on the current state of the art of AI in manipulated media, existing techniques, and future directions in detecting them. Participants were also introduced to the Trusted Media Challenge, AI Singapore’s inaugural prize-based challenge, to learn how we are encouraging AI innovation from the broader community to combat fake media in partnership with CNA and Straits Times. 

On Healthcare, participants had the opportunity to engage leading experts such as Dr Pavitra (A*STAR), Assistant Professor Feng Mengling (NUS) and Dr Fu Zhiyan (IHiS) on best practices for translating healthcare data into clinical value, practical considerations and case studies, as well as understand how AI techniques such as Natural Language Processing, Deep Learning and Transfer Learning were applied in a real-world hospital environment.

 

Inspiring keynote talks from internationally renowned speakers
AI Singapore was honored to welcome three distinguished keynote speakers this year.

AITSS kicked off with an inspiring keynote speech by Professor Milind Tambe from Harvard University on the topic of “AI for Social Impact: Results from Deployments in Public Health and Conservation”. Professor Tambe, who is also the Director for “AI for Social Good” at Google Research India, shared his results of using AI to spread health information effectively and efficiently among vulnerable groups, as well as using AI to predict the placement of illegal traps in wildlife sanctuaries for intervention.

The second keynote speech, on “Imitation Learning from One-life Demonstration”, was delivered by Professor Ivor Tsang from the University of Technology, Sydney. Professor Tsang is also the Research Director of the Australian Artificial Intelligence Institute (AAII), the largest AI institute in Australia and a world-leading research institute with a vision to develop theoretical foundations and advanced algorithms for AI, and to drive progress in related areas such as computational intelligence, machine learning and brain-computer interfaces. Participants took a deep dive into state-of-the-art techniques in Reinforcement Learning, including its current limitations and promising future research areas.

Returning for this year’s Summer School was Dr Tok Wee Hyong, Principal Data Science Manager at Microsoft, who delivered a captivating talk on “Humans and AI – How we can use AI in Everyday Things”. Dr Tok shared perspectives on the pace of AI innovation and how the international community is leveraging AI to create innovative products and change the world. Dr Tok, who holds a PhD in Computer Science from the National University of Singapore, has co-authored more than 10 books on product and artificial intelligence, including “Practical Automated Machine Learning”, “Deep Learning on Azure” and “Predictive Analytics Using Azure Machine Learning”.

Hands-on Activities for AI practitioners of all skill levels
Practical experience has always been the cornerstone of the AITSS. With learning taking place in a virtual environment, hands-on and interactive activities were emphasised and prioritised for this year’s Summer School to better engage and benefit participants.

A popular activity this year was the “AI for Stock Forecasting Bootcamp”, held in partnership with ai4impact, a global AI community for non-programmers initiated by Terra AI. 100 participants were taught the fundamentals of Deep Learning and collaborated in groups of 5 to create AI models that aimed to predict real-world stock prices. Teams competed against each other until the early hours of the morning on Trading Day, where they put their models to the test on live trading of 2U, Inc (ticker: TWOU). On the final day of the AITSS, the top 4 teams were invited to present their model development process and trading strategies to a panel of judges from AI Singapore, Terra AI and DBS! Awards were presented to the winning team SG Tigers, comprising Matthew Lau, Raymond Harris, Gerald Heng, Partha Pratim Kundu and Gaius Ting – Congratulations!

AITSS also hosted 3 other workshops by Assistant Professor Donny Cheng (SIT), Dr Nathaniel Ng (AWS) and Dr Dmitry Soshnikov (Microsoft) that gave participants first-hand experience with various aspects of the AI/ML development workflow, as well as knowledge of the various commercial tools available to them, such as Apache Spark, Amazon SageMaker and Microsoft Azure. As the participants were mainly post-graduate students, academic researchers or industry practitioners, many commented that these technical workshops were very practical, providing them with valuable hands-on experience.

 

New learning platform for students
Zoom, GitHub, Google Colab, Slack… so many different links, where should I go for the next activity? This was some of the feedback we received last year, when COVID derailed our plans to host an in-person Summer School and we had to pivot to a virtual learning environment. To overcome these challenges this year, the AI Singapore team integrated these micro-services into a new online AITSS learning platform, making learning seamless and enjoyable for participants!

 

While many of us still yearn for the day when we can host a physical Summer School to better facilitate collaboration, networking and learning, we are truly grateful for the support from AWS, SGInnovate, ai4impact, SEA AI Lab, Microsoft, AISG’s staff and, most importantly, the participants, in making AITSS 2021 a resounding success!

PeekingDuck: A Computer Vision Framework

AI Singapore’s Computer Vision (CV) Hub has worked on several industry projects over the last few years. We found that while the types of CV problems are very varied, many of these projects share repeated code, such as reading from a video or image. In some projects, even pre-trained CV models for object detection and pose estimation can be reused. To save development time for future projects, we decided to build a framework to simplify CV inference workloads. We have greatly benefited from this framework, and today, we are releasing it as an open-source project for you to reap the benefits as well.

We call this framework “PeekingDuck”. The name is a play on words: “Peeking”, as a nod to CV; and “Duck”, from duck typing, as we are using Python, a dynamically typed language. PeekingDuck is pip-installable and can be run from the command line, or imported into your Python code or Jupyter notebooks.


How PeekingDuck Works

PeekingDuck is a modular framework, with nodes as its building blocks. There are currently 5 categories of nodes as shown below:

Categories of PeekingDuck nodes

Different nodes can be bundled together to form a pipeline, where the output of one node will be the input to another. In the example below, 4 nodes form a simple object detection pipeline. The input.live node reads from a webcam and produces “img” or image, passing it to the object detection model.yolo node, which predicts “bboxes” or bounding boxes. 

Simple object detection pipeline.
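As a sketch, the pipeline described above can be declared in PeekingDuck’s run_config.yml; the draw.bbox and output.screen nodes here are our assumed choices for drawing and displaying the results (check the PeekingDuck documentation for the exact node names):

    # run_config.yml – a simple 4-node object detection pipeline
    nodes:
      - input.live     # read frames from a webcam, producing "img"
      - model.yolo     # predict "bboxes" from "img"
      - draw.bbox      # draw the predicted bounding boxes onto "img"
      - output.screen  # display the annotated frames in a window

Running the peekingduck run command in the same directory then executes the pipeline end to end.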

The essence of CV lies in the models. Thus, we’ve bundled a few pre-trained object detection and pose estimation model nodes with PeekingDuck, which can be used right away. We included these first as they can tackle a wide range of CV problems, and we will be adding more model nodes to PeekingDuck over time.

Object detection using Yolo.
Pose estimation using PoseNet.

Solving Real-World Problems

At CV Hub, we are focused on extending CV models to solve real-world problems. This is where PeekingDuck really shines – we combine different model, dabble and draw nodes to solve use cases such as:

  • Social distancing, which was deployed in HP Inc’s factory floors in 2020 to ensure safety of their employees
  • Zone counting, which has many applications such as assessing crowd density and retail analytics
  • Group size checking, which helps ensure that group limits of social gatherings are adhered to
  • Object counting, which can be used to count “objects” such as humans, vehicles, animals
Social distancing.
Zone counting.
Group size checking.
Object counting.

We recognise that many CV problems are unique, and customisation is required. For example, you may need to take a snapshot of a video frame, and post it to your API endpoint; perhaps you have a model trained on a custom dataset, and would like to use PeekingDuck’s input, draw, and output nodes. PeekingDuck addresses this by allowing you to create your own custom nodes, and use them in conjunction with our existing nodes.
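As a minimal sketch of such a custom node – assuming PeekingDuck’s AbstractNode base class and using a hypothetical endpoint URL – the frame-snapshot example above might look like this:

    # custom_nodes/output/api_post.py – illustrative sketch of a custom node
    # that posts each video frame to an API endpoint. The endpoint URL is a
    # hypothetical placeholder; see the PeekingDuck docs for the node template.
    import requests
    from peekingduck.pipeline.nodes.node import AbstractNode

    class Node(AbstractNode):
        def __init__(self, config=None, **kwargs):
            super().__init__(config, node_path=__name__, **kwargs)

        def run(self, inputs: dict) -> dict:
            frame = inputs["img"]  # "img" is produced by an upstream input node
            # POST the raw frame bytes to your own endpoint (placeholder URL)
            requests.post("https://example.com/api/frames", data=frame.tobytes())
            return {}  # terminal node: nothing to pass downstream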


Moving Forward

As CV continues to see new developments, we are committed to maintaining and updating PeekingDuck to ensure that it stays relevant. We will also continue to add new features – in fact, we are already working on new model nodes and use cases to be released in a few months. You are welcome to use our Community page to suggest potential problems that could be solved by CV, and we will consider building nodes to solve them, if viable.

As a final case study, we recently embarked on a project with Genesis Gym to use CV to provide coaching feedback for exercises. We paired custom models and coaching heuristics with existing PeekingDuck nodes, and deployed the solution on Google Cloud, all within a challenging time frame. We found that using PeekingDuck greatly cut down our development time and made our code easier to debug – and we hope that it would help you as well.


Find Out More

To find out more about PeekingDuck and start using it, check out our documentation below:

Update : We hosted a Zoom webinar on August 26 to share and answer questions on PeekingDuck. You can view the recording below.


Business Graduate “future-proofs” his Career with AI/ML skills

Giving up a promising full-time job in banking is not an easy decision for anyone to make. But for Kenneth Wang, a Business Administration graduate from the National University of Singapore, it was a risk worth taking, for the opportunity to “future-proof” his career.

Kenneth had been working as an associate with Standard Chartered Bank for two years, rotating across different wealth management functions before taking on a permanent role with the Managed Investments team responsible for the distribution and due diligence of mutual funds and exchange traded funds.

To better analyse the performance of various financial asset classes as part of his investment research, he took up Python programming and machine learning (ML) courses on Coursera.

As his interest in these areas grew, he started to consider a career in AI/ML.

“I felt that AI/ML could lead to many job opportunities across multiple industries,” he said.

Studying the job descriptions of various AI/ML roles in Singapore, he noticed that employers were looking for individuals with industry experience. He also found that AI Singapore’s (AISG) AI Apprenticeship Programme (AIAP) was one of the few programmes that provided that experience, with a project phase where apprentices get to work on real-world business problems.

In 2019, Kenneth decided to quit his full-time job to join AIAP, despite the uncertainty of landing a full-time role after the apprenticeship. It helped that his family and friends were supportive. “They were aware of the benefits of AI/ML and felt that the skillsets acquired through AIAP would help to future-proof my career,” he said.

In preparation for the programme, Kenneth reached out to his seniors from earlier batches to understand their experience and get their feedback on AIAP. He also turned to Coursera, Udemy and Fast.ai for courses to bring him up to speed on Python programming and AI/ML concepts.

The groundwork proved to be important.

 

“The AIAP training phase was very fast-paced and we were learning new concepts every day,” he said.

To keep up with the content, he would document learning points and code snippets on a digital notepad for future reference. He also tapped the knowledge of his mentors and fellow batchmates, and weekly mentoring sessions served as useful checkpoints to internalise what he had learnt, as well as opportunities to voice any difficulties he faced.

The highlight of AIAP was the project phase, where he had the opportunity to work on an open-source information retrieval tool “Golden Retriever” for human language queries. Golden Retriever was part of a set of pre-built solutions offered by AI Makerspace to make it easy for teams to integrate AI into their services.

What Kenneth valued, in particular, were the sessions where apprentices came together to share their learnings from the projects and the difficulties that they faced.

“The sharing exposed me to the different areas and applications of AI/ML,” he said. “It was through these sessions that I realised there are multiple ways to solve the same problem, and the best solutions are usually discovered by gathering ideas from different perspectives.”

One important lesson was that “no problem is too big when you break it down into smaller pieces”.

“It is often easy to be intimidated by the huge problem statements that your clients or managers bring to you. Through the project phase of AIAP, I acquired the ability to break complex problems into smaller parts, which makes it easier to test and verify solutions with end users,” he said. “This helps to prepare me for larger business problems that I will encounter in the future.”

A few months before graduating from AIAP in June 2020, Kenneth managed to secure a role with AISG, joining the MLOps working group responsible for developing new tools to automate and monitor the ML pipeline. Part of the job’s attraction for him was the AISG culture, where junior developers are able to take ownership of their projects across the full development cycle, while being able to tap the knowledge of senior managers across different technical functions.

Kenneth is currently working to integrate GitLab continuous integration/continuous delivery (CI/CD) and data version control with AISG’s internal experiment tracking tool to make ML models and datasets more reproducible.
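As a rough sketch of what such a setup can look like – assuming the open-source DVC tool and an illustrative training pipeline, not AISG’s actual configuration – a GitLab CI job might pull the versioned data and reproduce the pipeline on every commit:

    # .gitlab-ci.yml – illustrative sketch, not AISG's internal setup
    stages:
      - train

    train_model:
      stage: train
      image: python:3.8
      script:
        - pip install dvc
        - dvc pull    # fetch the exact data version tracked alongside the code
        - dvc repro   # re-run the training pipeline deterministically
      artifacts:
        paths:
          - models/   # assumed output directory for trained model files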

Looking back, he is thankful for how AIAP has helped him to grow his understanding of AI/ML concepts and develop software engineering skills to build AI systems. It has also given him access to a network of AIAP apprentices whom he can interact with and seek guidance from. “Coming from different industries and backgrounds, the apprentices provide differing perspectives to the problems that we face as AI engineers,” he said. All these have proven to be invaluable as he continues to build up his capabilities and make progress in his AI journey.

AI Fest: For Kids By Kids – A Glimpse into the Future Today

“Go to Google Teachable Machine at teachablemachine.withgoogle.com. Upload the images of dogs, then upload the images of cats. Now train your model,” instructed one of the student facilitators from Yu Neng Primary School. In a few clicks, students from participating schools were able to quickly and easily train their computers to recognise images of cats and dogs to a high degree of accuracy.

This was all part of the inaugural Artificial Intelligence (AI) Festival – For Kids by Kids, held by Yu Neng Primary School and AI Singapore for East Zone schools. With AI identified as an emerging technology in Singapore’s push to develop a digitally-ready nation, Yu Neng Primary School understands the importance of AI and of preparing its students and teachers for the future. As early as 2019, the school worked with AI Singapore to teach teachers from the ICT Department the basic concepts of AI. Students from the E-nnovation Club also soon learnt about AI in their CCA.

To give back to the community, the school further partnered with AI Singapore to hold the AI Festival virtually through Zoom on 1 July 2021. The AI Festival was held with the following objectives:

  • Introduce basic AI concepts to participants
  • Demonstrate how AI technology can be used to improve the way we live, work and play
  • Equip participants with the basic skills to build an AI application

Mr Ng Yeow Ling, Principal of Yu Neng Primary School, opened the AI Fest with a welcome speech. Following that, Mr Koo Sengmeng, the Senior Deputy Director of AI Singapore, presented his keynote speech on AI and its learning opportunities. Yu Neng Primary and AI Singapore expressed their gratitude for the strong interest and support by participants and looked forward to more opportunities to collaborate in future.

For the student segment, there was an overwhelming response from 12 East Zone primary schools, namely Bedok Green, Changkat, East Spring, Eunos, Elias Park, Gongshang, Junyuan, Pasir Ris, Punggol View, St Stephen’s, Tao Nan and Temasek. Yu Neng students from the E-nnovation Club acted as facilitators and led student participants through three exciting activities.

In the introduction to AI, participants learnt more about machine learning. Using Google Quick, Draw!, they drew pictures and a neural network guessed what they represented. The AI learnt from each drawing, enhancing its ability to guess correctly in the future.

In the second activity, participants learnt to build a simple application using Google Teachable Machine, training it to recognise image sets without ever needing to write code. Similarly, the more images that were uploaded, the better the application learnt.

The segment culminated in a competition in which students from the 12 schools pitted their newly acquired skills against one another to build an AI application that recognises the four different parts of a mystical creature (seen below). The catch was that the image was only revealed for 30 seconds. At the end of 45 minutes, most of the schools were able to identify the four parts of the creature – zebra, giraffe, eagle and tiger. However, Punggol View students won the top prize for training their application to recognise the two body parts with the largest area of coverage – zebra and giraffe.

Photo provided by Yu Neng Primary School 

While the students participated in their activities, the teachers and parents attended the AI for Everyone (AI4E)® course by AI Singapore. Here, they learnt about what AI is, what it can and cannot do, and how it affects our jobs. They even learnt how to build their own simple AI model with online tools, without having to learn programming! The hands-on practice used Azure Machine Learning Studio to create a machine learning model that predicts HDB resale prices based on information like location and floor area.

To wrap up, Mrs Tay-Lee Li Theng, Vice-Principal of Yu Neng, announced the top 3 winning schools and commended all teams for their excellent work.

Teacher and student participants who attended the Festival gave positive and encouraging feedback.

The teachers from the participating schools were particularly impressed by the confidence exuded by the Yu Neng student facilitators. Student participants also found the activities engaging and fun.

We hope that this will not be the end, but the beginning of an exciting journey of AI discovery for all participants.


Enhancing the Accuracy of Breast Tumour Diagnosis for Better Patient Outcomes

Singapore General Hospital (SGH) is Singapore’s largest acute tertiary hospital and national referral centre, with over 800 specialists offering medical care to about 1 million citizens each year. As an academic healthcare institution, SGH also plays a key role in nurturing doctors, nurses and allied health professionals, and is committed to innovative translational and clinical research to deliver the best care and outcomes to patients.

An area the hospital is looking into is improving the speed and accuracy of diagnosing a specific type of breast tumour known as fibroepithelial lesions (FELs).

FELs are biphasic tumours which consist of benign fibroadenomas (FAs) and the less common phyllodes tumours (PTs). Diagnosing FELs can be a challenge as FAs and PTs have overlapping features but require different clinical management. While FAs are usually monitored without further treatment, PTs typically require surgery to remove the tumours. It is very important, therefore, to make the correct diagnosis by analysing, under a microscope, the morphological features of cellular FA and benign PT tissue samples obtained from core needle biopsies.

SGH and AI Singapore (AISG) collaborated on a 100 Experiments (100E) project that leverages artificial intelligence (AI) to assist the laboratory in solving this problem. A novel bespoke two-stage computer vision model was developed and trained on high-resolution whole-slide images of tissue samples. At the time of development, this was the first known study utilising AI to evaluate core biopsy images of FA and PT.

In the first stage of the model development, the large, gigapixel-scale whole-slide image was divided into multiple, smaller patches. A convolutional neural network (CNN) was then built to identify and extract the tissue samples’ discriminative features within each patch. This was guided by detailed annotations of lesional areas within each slide, as provided by breast pathology experts in the SGH team.

In the second stage, the features extracted from the small patches in the first stage were fed into a recurrent neural network (RNN), which analysed how the lesional patches were arranged spatially within each slide. In doing so, the model was able to take a bird’s eye view of the whole slide and produce an overall evaluation of whether it was an FA or PT.
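A minimal sketch of this two-stage idea is shown below; the layer sizes, patch counts and framework choice (PyTorch) are our own assumptions for illustration, not the architecture SGH and AISG actually built:

    # Illustrative two-stage model: a CNN encodes each patch, an RNN
    # aggregates the sequence of patch features into a slide-level call.
    # Shapes and layer choices are assumptions, not the SGH architecture.
    import torch
    import torch.nn as nn

    class PatchCNN(nn.Module):
        def __init__(self, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )

        def forward(self, patches):            # (num_patches, 3, H, W)
            return self.net(patches)           # (num_patches, feat_dim)

    class SlideRNN(nn.Module):
        def __init__(self, feat_dim=128, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)   # two classes: FA vs PT

        def forward(self, feats):              # (1, num_patches, feat_dim)
            _, h = self.rnn(feats)
            return self.head(h[-1])            # slide-level logits

    cnn, rnn = PatchCNN(), SlideRNN()
    patches = torch.randn(200, 3, 64, 64)      # e.g. 200 patches from one slide
    logits = rnn(cnn(patches).unsqueeze(0))    # overall FA/PT prediction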

The AI solution was also accompanied by an explanatory module that provided a visual heatmap of the key discriminative features driving each prediction. This helped end users to better understand the decision-making process behind the AI model.

When tested on a new set of whole-slide images, the model achieved an accuracy of 87.5 per cent in its predictions.

The model was subsequently packaged into a minimum viable solution and is being trialled for use on clinical samples.

Dr Cheng Chee Leong, Senior Consultant with the Department of Anatomical Pathology at SGH, said the collaboration with AISG has been an eye-opening experience for him.

I now have better insights into the AI algorithm development process to solve clinical challenges. I can also better appreciate the immense potential AI could bring to augment and enhance the work of pathologists.

Dr Cheng Chee Leong

A more objective and rapid detection tool can potentially translate into better treatment decisions that reduce the need for surgical management, alleviate anxiety in patients, and deliver significant cost savings.

Throughout this collaboration, we received strong support from the team at SGH, who provided the clinical domain expertise necessary to architect a good AI model. It was an impressive achievement by the AISG apprentices who built the solution alongside their mentor and project manager, and I'm sure the real-world industry experience will be invaluable for them.

Kevin Oh, Principal Investigator and Head, AI (100E)

Join the AISG AI Community here: https://community.aisingapore.org

Effective Non-Verbal Communication in a Hybrid Work Environment

As vaccinations against COVID-19 approach herd immunity levels and Singapore prepares to treat COVID-19 as an endemic disease, a hybrid working arrangement appears to be a strong possibility. Recent surveys and research by notable companies such as Ideo, Microsoft and McKinsey have clearly shown that the new normal is one of hybrid or flexible working, where employees will work only some of the time on-premise, and the rest virtually, whether from home or some other convenient location. This has a profound impact on employees in aspects such as productivity, learning and communication. As employees, it is good that we start adapting and mastering the skills to do well in these areas.

As an AI apprentice who will graduate after 9 months and enter the industry during this period, one of the more urgent aspects to pay attention to is communication. Since our education has taught us how to communicate verbally so effectively, this article will focus on the oft-forgotten aspect of communication – non-verbal communication, both virtual and in-person.

Virtual Interactions – same same but different

The Mehrabian theory of communication states that only 7% of communication is through spoken words, with 38% communicated through one’s tone of voice and 55% through one’s body language. In a virtual setting, with much of your body obscured, how does one still convey sincerity, earnestness and confidence? In very much the same way as one would in a physical setting, but with some adjustments! Let’s start off with virtual interviews, before we go into how to communicate non-verbally in the office and at virtual meetings.

For virtual interviews, one should dress as one would for a physical interview – this means a properly ironed collared top for the men; ladies can do the same, or don a simple blouse in a neutral or pastel colour. For the men, if you have facial hair, do make sure that it is neat and trimmed, so you do not appear bohemian or homeless (yes, I know you need the job!). For the ladies, the jury is still out on makeup. Decide based on what brings out the best in you.

Once you log on, remember to look into the laptop camera lens, not the screen. Looking into the lens translates into looking into your interviewer’s eyes, which we know helps with building rapport. However, don’t go all dreamy-eyed on them! Observe the 50/70 rule – look into the eyes/lens 50% of the time when you are speaking, and 70% of the time when you are listening. When listening, it is also good practice to nod, clarify and smile to reinforce the fact that you are paying attention.


Joining your company – interacting with your colleagues both in the office and virtually

If you have succeeded in your interviews and have accepted an attractive offer, congratulations! First impressions count (and so do subsequent ones, especially as a new junior to mid-level employee). For the first week, a new joiner is usually required to be present physically (if/when pandemic restrictions allow, of course) so that one can be onboarded effectively.

At the office

The rule of thumb for dressing at the office is to dress like everyone else at your company. For the first day or two, it is ok to be a little more formal, so that you are not caught being too casual. For tech companies, or tech roles in other companies, the usual is business casual, with some even ok with tee shirts. Whatever the level of dressing, go for clothes that fit, are clean and are pressed. This projects confidence and frames how others view your work – detailed, complete and done with effort.

On days when you have a presentation, you can amp it up a little by incorporating small details in your dressing, like wearing thicker-framed glasses or an interesting brooch. This keeps the audience’s attention on you and what you have to say. The key here, though, is subtlety.

If you are not a presenter but a meeting attendee, it is good not to arrive empty-handed. Bringing along a notebook or your laptop to take notes shows that you are focused and keen on the discussion ahead. Also, do not place your mobile phone on the table – this signals that incoming messages are more important than the meeting discussion. If you must (well, your pants are too tight or you did not carry a bag), then a good practice is to put it face down.

When you are moving around the office, smile or nod at those you meet along the way. This is courtesy and exudes openness to connect, an important quality of a new hire.

Working from home

For times when you are working from home, you should still pay attention to your dressing. While tee shirts are ok when meeting virtually with your colleagues, note that these should be of a solid colour and not tattered, crumpled or stained. If clients or external parties are present, aim to dress as you would at the office. And of course, pants are important.

It is also polite and respectful to have your camera switched on during video calls. You can switch it off only if your company’s culture is one where everyone keeps it off, or when you need to save bandwidth due to the large (usually >50) number of attendees in the meeting.


Parting advice

Non-verbal communication is subtle and its messages are often perceived subconsciously. As an HR professional, I have encountered interviewees who do not switch on their laptop cameras, as well as attended Zoom meetings with external parties who were in their well-worn SAF running tank tops. These first impressions are unfortunately also very lasting impressions! Do avoid such practices if you want to be seen as a credible professional.

Human Influence on AI Learning

(This article was contributed by the SIT Geeks AISG Student Chapter)

Artificial intelligence (AI) is an interdisciplinary branch of computer science that focuses on creating machines programmed to mimic human intelligence. AI learns by combining large amounts of data with fast, iterative processing, allowing it to learn automatically from patterns and/or features in the data. This is also known as machine learning (ML), a subset of AI. In the past decade, ML has evolved rapidly, bringing massive success in applications such as image recognition, recommendation systems and online advertising. ML is now used in employment screening, social justice, and intelligent virtual assistants (e.g. Siri and Alexa). However, due to data bias, ML may not be the best fit for these systems. Hence, this article aims to address how human prejudices and biases affect how AI systems learn.

Data bias in machine learning is a type of error in which certain dataset elements are more heavily weighted and/or represented than others. In general, the data used for machine learning has to be representative of the real world. This is important because this data is how the machine learns to do its job. However, the real world itself is full of complications, and these affect what AI learns.

Human bias

All datasets are flawed, which emphasises the need for and importance of data preparation. These flaws are a result of subjective human interpretation. In recent years, society has started to wrestle with just how much these human biases can influence artificial intelligence systems – with harmful results. An example is AI recruitment software, which uses automation to help companies source candidates and screen numerous resumes while reducing repetitive manual tasks. Such software might sound like a great idea and a solution to tedious menial tasks. However, it has been developed to match past human decisions, doing so with 90 to 95 percent accuracy.

In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination: the computer program it used to determine which applicants would be invited for interviews was found to be biased against women and those with non-European names. More recently, in 2014, a team of software engineers at Amazon was building a program to review job applicants’ resumes, only to realise that the system discriminated against women for technical roles.

As a result, Amazon recruiters stopped using the software due to discrimination and fairness issues. Implementing an algorithm did not solve biased human decision-making; however, neither will returning to human decision-makers.

Is bias all bad?

AI and ML are riddled with human biases, which is not ideal. However, the fact that we are becoming increasingly aware of these issues forces us to confront these realities. In many cases, AI can reduce the subjective human interpretation of data, as ML algorithms learn to consider only the variables that improve predictive accuracy, reducing errors from human fatigue and emotion. This suggests that algorithms can improve decision-making, resulting in a fairer process. For example, in a research paper, Jon Kleinberg and others showed that algorithms could help reduce racial disparities in the criminal justice system. Millions of times each year, judges have to make jail-or-release decisions. Using data from hundreds of thousands of cases in New York City, the researchers trained an algorithm to predict, from rap sheets and court records, whether defendants were a flight risk. When tested on over a hundred thousand more cases that it had not seen before, the algorithm proved better than judges at predicting what defendants would do after release. Its advice could cut crime by defendants awaiting trial by as much as 25 percent without changing the number of people waiting in jail; alternatively, it could be used to reduce the jail population awaiting trial by more than 40 percent while leaving the crime rate by defendants unchanged. This is because judges have a broader set of preferences than the single variable the algorithm predicts; for instance, judges may care specifically about violent crimes or racial inequities.

Conclusion

Many existing human biases can be transferred to machines because technologies are not neutral; they are only as good or bad as those who develop them. Additionally, as technology advances, several approaches have emerged to enforce fairness constraints on AI models. While human judgment remains necessary to ensure that AI-supported decision-making is fair, we should still embrace AI in the future.




Written by:

Lim Kok Fong, Jodie Moh, Magdalene Yip, Gideon Yip
– 1st EXCO, SIT Geeks AI Student Chapter

The views expressed in this article belong to the SIT Geeks AI Student Chapter and may not represent those of AI Singapore.

References

  1. https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
  2. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
  3. https://academic.oup.com/qje/article-abstract/133/1/237/4095198
  4. https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans
