
Embracing the new normal with creative thinking and AI skills

The Covid pandemic has underscored the importance of embracing technological disruptions and being resilient, and the need to constantly adapt to new developments. Michelle Loke, like other graduates of AI Singapore’s AI Apprenticeship Programme (AIAP), is well prepared for these challenges.

Michelle graduated from the Singapore University of Technology and Design (SUTD) with a degree in Engineering Systems and Design in 2015, and was working as a senior transport planner with the Land Transport Authority (LTA) when AI beckoned.

At LTA, her role revolved around transport modelling, data analytics and visualisation projects. To further improve these projects, she experimented with AI and machine learning to gain deeper insights and value from the data.

At the same time, her research led her to appreciate the potential of AI as a game-changing technology and a platform for change. She also found out about AIAP.

“I realised that it was the perfect opportunity and platform for me to build up my technical capabilities, gain deeper theoretical and practical insights and venture into the exciting and innovative field of deep learning.”

Michelle

In 2019, Michelle made up her mind to join the programme, which would require her to quit her full-time job at LTA. That, to her, was one of the most difficult aspects of the decision. “I loved being a part of LTA – the people, culture and nature of the work was uplifting. And knowing that you are making a positive contribution to the long-term infrastructure and well-being of your country was also an important reason why I enjoyed being in LTA.”

But she also knew that the fourth industrial revolution was coming, powered by AI and other emerging technologies. AIAP presented her with the opportunity to be part of these new developments and to equip herself with advanced skills that would enable her to contribute more to future projects. This realisation, together with the support of her family, helped strengthen her resolve. Michelle joined AISG’s fourth batch of apprentices and started her stint in September 2019.

The going was tough, especially in the initial stages of the programme. While the coding she learnt in university and the data analytics skills she developed at LTA came in useful, there was still so much to learn. “The environment was fast-paced and the assignments were challenging as the basics of machine learning were drilled into us,” she recalled.

But the experience, although demanding, was made enjoyable by the camaraderie amongst the apprentices. “We were a diverse group of individuals – from fresh grads to mid-career professionals with backgrounds across various sectors. There was something to learn from everyone,” she said. “With the support of fellow apprentices, all of us made it through.”

The programme enabled Michelle to build and strengthen her foundations in machine learning. Good coding practices were drilled into her by her mentor and fellow apprentices, together with the agile methodology of failing fast and iterating for continuous learning and continuous improvement. “Failure is inevitable when experimenting; the important thing is that the lessons learned from failing are utilised to improve your next attempt,” she said.

She also picked up important skills in project management when she and a fellow apprentice got to work on a healthcare project for a start-up under AISG’s 100Experiments programme. “I had regular meetings with my project sponsor and from the project managers, I learnt how to manage expectations and honed my people and communication skills.”

The start-up was focused on developing customised solutions for skincare and her team was tasked to detect features such as pores, spots and inflammation on images of skin using computer vision. “It was really exciting to create a solution for a real-world problem,” she said. “Being involved in the model development process from start to finish was a new experience.”

AIAP also opened the door to new opportunities in data science and AI. Through the AISG Talent and Career Development team, Michelle landed a job as senior analyst with the SingHealth Office for Insights and Analytics after she graduated in June 2020.

In her current role, she creates machine learning models to empower healthcare professionals to make more informed and strategic decisions. “Being involved throughout the whole model development process – from data extraction to modelling to deployment – gives me a sense of ownership over my projects and the opportunity to understand how the different parts of the model development process fit together,” she said.

Looking back on her nine months as an apprentice, Michelle said she really enjoyed her time in AIAP and had fun. “I met with so many gifted, talented and awesome individuals. The supportive and nurturing environment cultivated at AISG made the experience so worthwhile and enjoyable,” she said. “And with the Covid pandemic accelerating the digital revolution, the creative thinking and AI skills I learnt have more than prepared me for the new normal.”

To find out more about the AI Apprenticeship Programme: 
https://aisingapore.org/aiap
 

Nurturing the Love of STEM With Our Kids

AI Singapore (AISG) recently supported Singapore Hokkien Huay Kuan’s Thinkers Programme and introduced our AI for Kids (AI4K)® to participating students from their affiliated schools.

The Singapore Hokkien Huay Kuan (SHHK) Thinkers Programme started in 2019 with two goals. The first is to develop students’ confidence and their analytical and problem-solving competencies. The second is to expose the students to areas of STEM and show how technology can improve our way of life.

152 students participated in this year’s Thinkers Programme, accompanied by 26 teachers. Through AI4K, students learnt about machine learning concepts, including supervised learning, computer vision, natural language processing, and different uses of AI. They then went on to apply their newfound knowledge and build simple applications using block coding software.

Photo taken during AI4K Certified Instructor Bootcamp

We did not miss the opportunity to plus-skill our teachers at the same time. With the support of our training partner – AI Learning Lab – we conducted a 2-day AI4K Certified Instructor Bootcamp for them, familiarising the teachers with the AI4K course materials so that they can conduct AI enrichment classes themselves in the future.

Our AI4K is available here at LearnAI. We have recently refreshed and expanded our series into Beginner and Intermediate course packs, so just register an account and take them for FREE.

If you are a Singapore school looking to organise school-wide AI literacy programmes, you can also drop us a note here.

#growourowntimber

AI Readiness Index – Where Are You on the AI Journey?

A couple of months ago, AI Singapore unveiled the AI Readiness Index (AIRI), a comprehensive and easy-to-implement industry-focused framework which enables organisations to self-assess the status of their artificial intelligence (AI) adoption readiness. I talked to Tern Poh from the AI Advisory Team, one of the architects of the framework.

Below is a transcript of the conversation [*].



Hi Tern Poh. How are you today?

Hi Basil, thanks. I’m doing well today. I’m staying at home, working from home. Miss the outdoors. Otherwise, I’m doing fine.

Yes, we all have to stay safe during this period. So, today we will be talking about how to improve the journey of AI adoption in industry. According to a survey done by Gartner, the top three challenges to AI adoption are: (1) skills of staff, (2) understanding AI benefits and uses and (3) data scope or quality. As part of your job to get companies to adopt AI, you go beyond identifying these factors and empower companies to quantify them in order to better understand their AI adoption readiness level. That’s where AI Singapore’s AI Readiness Index (AIRI) comes in. As one of the architects, tell us more about what AIRI is.

In brief, AIRI is an industry-focused AI readiness assessment framework. It allows business units or organisations to assess their AI readiness, meaning where they are right now. By understanding where they are, they can identify the gap between their current and desired state. With the gaps identified, organisations have a better understanding of what they need to work on in order to improve their AI readiness and close the gap. This is important because a lot of times organisations want to embark on a new business initiative, and that initiative might require certain AI capabilities for them to achieve it. Without an assessment of where they are, the organisation might not reach its business objective by the end of the next financial year, for example. So that’s what AIRI is designed for. It helps a company understand its current state of readiness and close the gap between the current and the desired state.

I think it’s a very relevant initiative, and just to go back into the history, what inspired and drove the development of AIRI?

I think AI Singapore, we are in a very fortunate position because we are a national programme office. We get to work with companies of different sizes, from start-ups to MNCs, and across different industries as well, so we are agnostic. Whichever company or industry needs our help, AI Singapore will be there to help them accelerate their AI adoption. Since AI Singapore started a few years ago, we have engaged more than five hundred companies of different sizes and across different industries, to discuss and understand how we can help them accelerate their AI adoption. It was through these many discussions with companies that we started to realise there was a recurring theme. And these companies often had questions for us.

The first question they ask is, how do I know whether I’m AI-ready? This is the question a lot of companies have at the top of their minds. And sometimes it is also pretty funny. When we talk to clients, based on our assessment we might say, hey, you’re not data-ready yet, come back to us once you’re more AI-ready. Then the next question they ask is, how do I know when I’ll become AI-ready? How do I assess it? How do I quantify all these things that you mentioned earlier on? So there was a recurring theme in these questions from the companies we engaged, and that really drove the inspiration and development of AIRI. We wanted to create a systematic method or framework to help companies understand and quantify their AI readiness so that we are able to help them improve it.

And as an organisation that emphasises being data-driven, I think this is a natural development. After all, we have already engaged more than five hundred companies and organisations. But AIRI is not the first AI readiness or maturity index. What makes it special compared to the others?

Ya, that’s a very good question. If you google “AI readiness index” or “AI maturity index”, you get a lot of returns from the search. Back to the earlier question: when we had a lot of companies coming to us asking the same question, we went to the Internet and looked at some of the existing indexes or frameworks developed by others. I mean, why reinvent the wheel when we can just adopt one? But, surprisingly, we soon realised that none of the existing AI readiness or maturity indexes had what we needed. I won’t mention names, but some of the existing frameworks we saw were too complicated, in the sense that they go into areas we don’t think are needed to determine whether a company is ready for AI adoption. Things like the HR system, the procurement system, etc. Some frameworks go way beyond what is really needed for AI adoption. And if a framework is too complicated, it’ll be hard for any company to do a self-assessment.

This brings me to the next point. There are other frameworks that are proprietary, meaning some companies have developed a methodology and framework for assessing AI readiness, but you have to pay them a hefty fee to engage their consultants to do the assessment on your behalf. When we were developing AIRI, our intention was to help especially the local SMEs, the small and medium enterprises, accelerate their AI adoption. And I think it’s very hard for SMEs to pay such high consulting fees just to assess their AI readiness. So, against this backdrop, when we came up with AIRI, we really wanted something that is very easy to implement, that companies can do by themselves without any consultants, and a framework that is simple yet comprehensive enough to cover the factors that are really core to AI readiness, instead of looking at every single aspect of an organisation. This is what we saw in the industry and what drove us to develop AIRI to really help companies accelerate their AI adoption.

So, what we can expect from AIRI are: cost-effectiveness, ease of use and something that’s really actionable, right?

Ya, and more than that as well. Other than looking at what is out there, a lot of it is really based on our core learnings. Again, at AI Singapore we have engaged a lot of companies across different industries, like I mentioned earlier. So we had this knowledge, or you can say a gut feel, of how to determine whether a company is AI-ready or not, but we had not formalised it into a framework. AIRI is really a formalisation of all our learnings from our past engagements with companies. It goes beyond referencing what is available. It incorporates and distills the core learnings and experiences that AI Singapore has gathered over the past few years.

Now I’m eager to get into the details. How does AIRI work?

I’ll go from the high level first, then to the finer details. AIRI essentially consists of four main pillars which map to nine dimensions. By dimensions I mean the areas of assessment we are looking at to quantify whether a company is AI-ready or not.

The first pillar, perhaps the most important, is organisational readiness. It looks at four main dimensions: AI talent, AI literacy, AI governance and management support. AI talent and literacy are pretty self-explanatory and straightforward, so I won’t elaborate further on them. What I would like to elaborate on is management support. A lot of times when people think about AI readiness, they think about getting the latest equipment, hiring a data scientist or sending their employees for courses to improve their AI literacy. These are the things people typically look at, but what they miss out on is management support, and that’s a big mistake. What we have realised over the years is that for companies without strong management support, it is hard to get projects started, and even if a project manages to get started, halfway through it will get derailed.

Let me share more insights about this. AI is a very powerful tool, and when you want to embark on an AI project, it typically involves collaboration across different departments or divisions of the business. Without a strong mandate and support from the management, the different divisions or departments won’t be that keen to come together to form a cross-functional team to do a project together. This is what we typically see. When AI is deployed in the organisation, it also requires a change in the way the employees and the organisation function. Maybe in the past you used Excel sheets to do some predictions, but with AI the workflow will change, and if you change the workflow, the impact will be felt across different departments. So again, without strong management support, the different divisions or departments in the company might try to sabotage the project because they do not want to change their existing processes. I mean, why change their current processes just to adopt a new technology that disrupts their workflow? So, that’s why I think management support is very critical.

And the last dimension of organisational readiness is AI governance. We added that in because right now there’s a trend in the industry where people are no longer talking about how accurate your AI model is. I think the industry has moved beyond that. Yes, of course, the accuracy of an AI model is important, but what is also important right now is whether the AI model developed is trustworthy, whether it is ethical and whether it is applied in a responsible way. AI is a very powerful tool, and if it is applied wrongly, it might have unintended consequences for the organisation or for society at large. So, for an organisation to be AI-ready, AI Singapore looks beyond the technical capabilities. We also look at the management support and whether they are going to apply AI in the right manner. Sometimes companies come to us wanting to apply AI in a manner that we think might not be suitable or might be sensitive. We will tend not to do those projects with them, and we try to educate them on the consequences of doing such projects, so hopefully they also understand this point.

The first pillar is organisational readiness, so what about the second pillar?

Okay, the first pillar was a bit long, because I think it’s important to share the rationale of why we put certain things inside. The second pillar is more straightforward: it is business value readiness. It has only one dimension, which is the business use case. Overall, this pillar looks at whether the organisation has identified a business use case for AI adoption and the potential value it brings to the organisation. AI is just a tool, a technology. Like any other tool and technology, it can only bring value to the organisation when it is applied in a manner that is relevant to the organisation. A lot of organisations tend to apply AI for the sake of applying AI. That’s not the way we recommend companies proceed with AI adoption. We always ought to anchor AI adoption around business use cases to justify the time and investment required to begin an AI project. That’s number one. Number two, a strong business use case serves as a rallying point for employees across different divisions or departments in the organisation to come together to work on a project. That’s why the business use case is so crucial. It justifies the management’s investment of time and effort in the project, and it helps gather employees across the organisation to come together. This will help address some of the issues I mentioned earlier.

Just to sidetrack a little, how do you determine the business value of a use case?

That’s interesting also. When we talk to customers or companies, most of the time they will ask us, what will be the expected value we will get if we work on this project with AI Singapore? For example, it could be things like predictive maintenance or customer segmentation. As much as I would like to give them an answer, the reality is that it’s very hard for an AI engineer or technologist to advise the business people on the expected ROI or business value of a use case, because only those who are deeply involved in the organisation or very familiar with the industry would know what value solving this particular issue will bring. And that’s the reason why, for most of the programmes under AI Singapore, we typically require the organisation to nominate a key person we can speak with throughout the project. This domain expert would know what the business use case is and how we should go about doing the project. If you just apply AI to the datasets, there could be a lot of unintended consequences, from the way the model is built to the way we assess the business value. So we always require the business stakeholders to understand and assess the value of each use case. We are not able to advise on that.

Okay, so coming back, we covered the first two pillars: organisational readiness and business value readiness. How about the last two?

So, the third pillar is data readiness. We are looking at the reliability, quality and consistency of data throughout the organisation. Under this pillar, we have data quality and reference data. I won’t go into the details of each, but one question I often get asked is, hey Tern Poh, when you are looking at data, I thought it should be with respect to a specific use case. So how can AIRI assess the data readiness of my organisation without me telling you my specific use case? This is a common question whenever I share AIRI with companies. My short answer is this: AIRI looks at the organisational level, meaning, at an organisational level, how likely is it that this organisation is ready to adopt AI. We do not go down to each specific use case yet. Of course, we have other ways to assess the data with respect to each use case, but that is not the intention of AIRI. For AIRI, we are looking at, at an organisational level, how likely it is that this organisation is data-ready. So, in the questions we ask for data quality and reference data, the two dimensions under this pillar, we are looking at whether the company has policies, processes and employees actively maintaining data quality. The thing is this: for organisations that have formal processes and policies in place to ensure data quality, it is more likely that, for any use case the company intends to pursue, the data will be ready. So this is how we look at it: at an organisational level, whether there are formal processes and policies in place to ensure data quality and reference data.

And the last pillar of AIRI is infrastructure readiness. It looks at whether the organisation has the necessary infrastructure to support data storage, retrieval and AI model training. Briefly put, we are looking at whether the company is using appropriate methods of storing data. If the company is still recording transactions on paper rather than in an electronic format, it is very hard for us to do anything for the company. That’s the data infrastructure part. For the machine learning infrastructure part, we’re looking at whether the company has the right equipment to train the AI model and to deploy it once it is ‘live’. So, these are the four main pillars of AIRI.
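The pillar-and-dimension structure described above can be summarised as a simple data mapping. This is an illustrative reconstruction from the conversation only; the dimension names under the data and infrastructure pillars are assumptions for illustration, not AISG’s official schema:

```python
# The four AIRI pillars and their dimensions, as described in this
# conversation. Dimension names under the data and infrastructure
# pillars are assumptions for illustration, not AISG's official schema.
AIRI_PILLARS = {
    "Organisational Readiness": [
        "AI Talent", "AI Literacy", "AI Governance", "Management Support",
    ],
    "Business Value Readiness": ["Business Use Case"],
    "Data Readiness": ["Data Quality", "Reference Data"],
    "Infrastructure Readiness": [
        "Data Infrastructure", "Machine Learning Infrastructure",
    ],
}

# Four pillars mapping to nine dimensions in total, per the interview.
print(len(AIRI_PILLARS))                                 # 4
print(sum(len(dims) for dims in AIRI_PILLARS.values()))  # 9
```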

That is really a very comprehensive assessment being done. At the end of the whole process, what are the ways you see companies can benefit from using AIRI?

I think this is a very important question. After going through the AIRI assessment, how do I benefit from it? At the end of the assessment, the company will receive a customised report based on their AIRI result. The result will classify the company into one of four readiness states.

From the lowest, AI unaware, to the highest, AI competent. So we basically classify companies into four categories, and under each category we have a recommendation on what AI adoption approach the company could take.

In the lower two AIRI classifications, we have AI unaware and AI aware. Even these companies are able to adopt AI. In fact, any company can adopt AI. Being AI unaware doesn’t mean you are unable to adopt AI technology. Such companies can adopt AI by buying something commercially available off-the-shelf; just by installing the software on their systems, they can start using an AI application. Every organisation can benefit from adopting AI, that is one thing I want to highlight. For the lower two categories, AI unaware and AI aware, companies can still adopt AI. It’s just that they shouldn’t be spending their efforts developing their own AI models. They can look out to the industry to see what’s there and buy such solutions.

In the higher two categories, we have AI ready and AI competent. What separates these two is that AI ready companies typically have software engineering capabilities, so they can make an API call to a pre-trained AI model or to one of the cloud providers’ API services to integrate AI features into their products. That is what we see for companies that fall under AI ready. For AI competent, the level four, such companies are able to develop their own customised AI if none exists in the industry. These are the companies that develop AI solutions to meet their internal needs; they go beyond just looking at what is available in the industry.

One thing I want to highlight is that AIRI does not insist that every organisation needs to be AI competent. People have this misconception that if I’m AI unaware, my end goal should be AI competent. That’s not what we are trying to emphasise. What we are emphasising is that AIRI helps you understand where you are right now, your current state, and where you should be based on your business objective. Let’s say, for example, an organisation, based on its business objective, is not intending to build its own AI engineering team. All it wants is to start integrating or buying some AI solution to solve its internal needs; maybe all it needs is to be at the AI aware stage. It becomes very clear: from AI unaware, it can move to AI aware, and that is where it needs to be. There is no need to go beyond that level if the business objective doesn’t require it. I want to highlight this because not every company is looking to develop its own customised solutions. Sometimes all it needs is some AI features in its products.

Although we don’t recommend that every company be AI ready or AI competent, what we would like to encourage is that companies should at least be at the AI aware stage, because a company that is AI unaware is missing out on a lot of the good things AI can bring. Companies at the AI unaware stage wouldn’t know the possibilities of AI or the typical use cases they could implement with it. Once they get to the AI aware stage, at least the organisation has the capability to understand the common use cases of AI. Whether they then want to apply AI or not is a separate issue, but at least the management knows what AI is and whether it can be used for their internal use cases. That is very important because the world is moving towards the age of AI. If a company is content to stay at the AI unaware stage, it might be missing out on a lot of opportunities. So, as much as possible, I try to push companies at least to the AI aware stage.
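As a rough illustration of how a four-state classification like this could work, here is a hypothetical sketch. The 0–36 scoring scale and the equal-width bands are invented for illustration; AISG’s actual scoring method is not described in this conversation:

```python
# Hypothetical sketch: bucketing an overall AIRI score into the four
# readiness states named in the interview. The 0-36 scale and the
# equal-width bands are invented for illustration only.
READINESS_STATES = ["AI Unaware", "AI Aware", "AI Ready", "AI Competent"]

def classify(score, max_score=36):
    """Map a score to one of four equal bands (assumed, not official)."""
    if not 0 <= score <= max_score:
        raise ValueError("score out of range")
    band = min(int(4 * score / max_score), 3)  # clamp the top edge
    return READINESS_STATES[band]

print(classify(5))   # AI Unaware
print(classify(20))  # AI Ready
```

The real AIRI assessment derives its result from nine multiple-choice questions, so a weighted scheme rather than a flat banding like this is equally plausible.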

So, by doing the AIRI assessment, companies get to know which of the four readiness states they are in, and with that comes a list of recommendations to guide them on their AI adoption journey.

Yes, Basil, you are right. If the listener goes to the AIRI website, they will see that we actually map AI Singapore programmes, such as 100 Experiments, to the different readiness levels based on AIRI. For instance, a company that is AI unaware will not be suitable for 100 Experiments, because 100 Experiments has certain prerequisites a company has to fulfil before we will consider them. At AI Singapore, we are also mapping our programmes to each category of AI readiness based on AIRI. So, with the AIRI assessment done, it becomes very clear to companies which programmes they could take if they want to improve their AI readiness. This is crucial because otherwise, people always talk about improving their company’s AI readiness, but without understanding where they are right now, how do they even identify the right programmes or the right approach to improve it? This is where I see the value of AIRI. It helps companies know where they are today and, based on their business objectives, their intended goals in the future. With these two points mapped out, they can identify whether there’s a gap. And if there is a gap, they can look at the appropriate programmes that can help them close each of the gaps identified under the nine dimensions of AIRI.

So, how can companies take the AIRI assessment?

That’s easy. They can just go to the AI Singapore website; there’s a site dedicated to AIRI. They can do it there and it is free. When I say it’s free, it is really free. There are no advertisements either; it’s not ad-supported. AIRI consists of just nine multiple-choice questions and can be completed within ten to fifteen minutes. A very short and sweet assessment. At the end of the assessment, companies will be able to download a customised report immediately; there is no need to wait. Once they are done with AIRI, they can download the report and see what AI Singapore actually recommends to them in terms of AI adoption and AI readiness.

Great. So, before we close, what advice do you have for companies that are looking to adopt AI?

I think for any company that wants to adopt AI, it’s very important to determine its current state of readiness, where it is today, and to look at its business objective, where it wants to be in the future. It’s only when you have these two points in mind that the company can chart out a journey from where it is right now to its destination. Once you have a journey mapped out, it becomes clearer to the organisation what steps it needs to take. For example, a trip to the supermarket in your neighbourhood versus a road trip to another country: the distance, the itinerary and the equipment you need to bring along will be totally different. Putting that analogy back to the question: by knowing the current state of readiness and the future state, the destination the company needs to reach, it becomes crystal clear how long the journey will take and what talents and training programmes the company needs to bring in to achieve its goals and succeed in its journey. Otherwise, it will be very hard; if you ask anybody to plan a journey without knowing the destination, it’ll be impossible. And, of course, please do the AIRI assessment today.

Yes, of course. Our listeners out there, if you are leading an organisation or wanting to get onto the AI journey, just go to https://aisingapore.org/airi/ to get your assessment done today. Thank you, Tern Poh, for sharing with us today.

Thanks, Basil. Thanks for inviting me to share my thoughts on AIRI as well. It’s my pleasure to share with the audience the stories and thought process behind AIRI. I would just like to highlight again that if any company wants to know more about AIRI, wants to know more about how to apply AIRI to their organisations, to help them in their AI digital transformation journey, they can always reach out to AI Singapore. The AI Advisory team where I am from will have a more detailed discussion with them. Thanks, Basil. Thanks, everyone, for listening. Thanks for your time.



[*] This conversation was transcribed using Speech Lab. The transcript has been edited for length and clarity.

Related Story

Synergos Is Officially Launched!

In previous posts (here and here), we gave a preview of AI Singapore’s Federated Learning platform Synergos. Today we are happy to announce that Synergos is officially launched!

We begin with a recap of what Synergos is and its main components.

The motivation for Synergos

Data is at the core of machine learning. Nevertheless, in many real-world projects, a single party’s data is often insufficient and needs to be augmented with data from other parties. However, there are also many concerns (regulatory, ethical, commercial etc.) stopping parties from exchanging data.

An example can be found in the healthcare domain. An individual hospital alone typically has only a limited amount of local data with which to build a robust model. Existing studies (this one as an example) show the benefit of using data from more hospitals to build models. Nevertheless, even when hospitals are convinced of the value of sharing data, many regulatory concerns stop them from doing so, since healthcare data is usually viewed as sensitive personal data under general or sectoral data protection regulations, e.g. GDPR or HIPAA.

Federated Learning is an emerging privacy-preserving machine learning technology. It enables multiple parties holding local data to collaboratively train machine learning models without exchanging their data with one another, hence preserving the confidentiality of different parties’ local data. 

The design of Synergos

Synergos is a platform which AI Singapore has been building to make Federated Learning more accessible and sustainable. The diagram below gives an overview of the key components of Synergos. For a detailed description of each component, please refer to this post.

Broadly, these components are grouped into three layers of functionalities.

  1. Federated training, whose aim is to make Federated Learning simple and user-friendly.
  2. Model management, whose goal is to make Federated MLOps simple.
  3. Platform management, whose goal is to make Federated Learning sustainable.

Synergos makes Federated Learning accessible

In conventional machine learning, it is commonly assumed that all data are independently and identically distributed (IID). In simple words, it is assumed that all data come from the same generative process, and that the process has no memory of past generated data. In Federated Learning, however, as different parties do not see one another’s data, it cannot be assumed that the data all follow the same generative process. Special care is needed to address such non-IID data; otherwise, the model derived with Federated Learning may fail to converge or generalise to different parties’ data, or may take longer to converge. Many federated aggregation algorithms have been proposed to address this problem. 

Synergos makes Federated Learning user-friendly and accessible by taking the burden of implementing these federated aggregation algorithms away from users. In Synergos, the Federation component implements a number of them. The most basic aggregation algorithm is FedAvg. Besides this, the current version of Synergos also supports more advanced aggregation algorithms, including FedProx, FedGKT, etc. More aggregation algorithms will be supported in future versions.
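To make the idea concrete, here is a minimal sketch of the FedAvg aggregation step: each party's model weights are averaged layer by layer, weighted by the party's local sample count. This is an illustration of the algorithm only, not Synergos's actual implementation; the function and variable names are hypothetical.

```python
import numpy as np

def fed_avg(party_weights, party_sizes):
    """FedAvg: average each layer's weights across parties,
    weighted by the number of local training samples."""
    total = sum(party_sizes)
    num_layers = len(party_weights[0])
    aggregated = []
    for layer in range(num_layers):
        layer_sum = sum(w[layer] * (n / total)
                        for w, n in zip(party_weights, party_sizes))
        aggregated.append(layer_sum)
    return aggregated

# Two parties, each with one "layer"; party A holds twice as much data as B
a = [np.array([1.0, 1.0])]
b = [np.array([4.0, 4.0])]
global_weights = fed_avg([a, b], party_sizes=[200, 100])
```

The coordinating server repeats this step after every round of local training, sending the aggregated weights back to all parties.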

Synergos further reduces the burden with the Orchestration component, which supports auto-tuning of multiple Federated Learning models with different configurations of aggregation algorithms, aggregation settings, model hyper-parameters, etc.

Synergos makes Federated Learning sustainable

Usually, different parties incur non-negligible costs in acquiring and cleaning their data. They rarely share their data altruistically and risk losing their competitive edge. These parties are more motivated to share their data when given enough incentives, such as a guaranteed benefit from the collaboration and a fair reward that is higher for contributing more valuable data. Otherwise, with no party motivated to contribute data, the sustainability of Federated Learning would suffer. 

Synergos makes Federated Learning sustainable by building the Contribution & Reward component to evaluate contributions and reward different parties fairly based on their contributions.  

We are still actively working on this component, and it is not yet available in the current version. We plan to implement model reward, the outcome of research supported by AI Singapore. Conventionally, reward is associated with monetary gain. While this remains a natural and viable option, there are scenarios where monetary returns are not preferred or even possible. Model reward instead rewards the participating parties with models of different quality based on their contributions.

The path forward

The launch today is not the end of the story. Rather, it is the start of a long journey. Moving forward, besides the Contribution & Reward component, there are already a few enhancements and new features planned, including:

  • Support of non-neural network models. 

Currently, the federated aggregation algorithms implemented in the Federation and Federated Grid components mostly support deep neural networks. Nevertheless, there are still many commonly used machine learning models that are not based on neural networks. 

In the next version, we plan to integrate outstanding research outcomes in the field of Federated Learning to support more aggregation algorithms, e.g. Federated GBDT (SimFL), etc.

  • Support of Vertical Federated Learning

The first post of this series discussed two common paradigms of Federated Learning: Horizontal Federated Learning and Vertical Federated Learning. 

Horizontal Federated Learning is useful in scenarios where different parties have a big overlap in the feature space (columns) but small overlap in the user space (rows).

Horizontal Federated Learning

Vertical Federated Learning is useful in the scenarios where different parties have a big overlap in the user space (rows), but a small overlap in the feature space (columns).

Vertical Federated Learning

In the current version, Synergos only supports Horizontal Federated Learning. We are also working on the support of Vertical Federated Learning. 
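The distinction between the two paradigms can be illustrated with a toy pandas example. The party names, columns and values below are invented purely for illustration.

```python
import pandas as pd

# Toy dataset: rows are users, columns are features
full = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "age": [25, 34, 41, 29],
    "income": [50, 80, 120, 60],
    "label": [0, 1, 1, 0],
})

# Horizontal FL: parties share the feature space (columns)
# but hold different users (rows)
bank_a = full.iloc[:2]   # users 1-2, all columns
bank_b = full.iloc[2:]   # users 3-4, all columns

# Vertical FL: parties share the user space (rows)
# but hold different features (columns)
bank = full[["user_id", "age", "label"]]   # one feature set
telco = full[["user_id", "income"]]        # a different feature set
```

In the horizontal case, the parties can train the same model architecture on their own rows and aggregate the weights; in the vertical case, records must first be aligned on a common key (here `user_id`) before any joint training can happen.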

  • Integration with other compute and/or storage engines. 

Compute & Storage is an interface to compute and storage backends, which different parties use in local training. The current version of Synergos supports data that is managed by a local file system and S3-compatible storage, and the compute load is handled by a single node. 

We are actively working on support for other storage services and compute frameworks in the future versions, e.g., Spark, Horovod. 

  • Support of privacy-enhancing technologies (PET) 

In Federated Learning, what is exchanged among parties is mainly intermediary model learnings such as gradients and/or weights. This protects different parties’ local data, since no raw data needs to be shared. Nevertheless, the exchange of gradients can still lead to information leakage.

In future versions, we will support application of privacy-enhancing technologies (PET) like homomorphic encryption (HE) or secure multi-party computation (SMPC) to better protect participating parties’ data.
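To give a flavour of the SMPC idea, here is a toy sketch of secure aggregation via additive secret sharing: each party splits its update into shares that individually look random, so that only the sum of all updates is ever revealed. This is purely illustrative and not part of Synergos; real SMPC protocols work over finite fields with cryptographic guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shares(update, n_shares):
    """Split a gradient/weight update into additive shares that
    individually look random but sum back to the original update."""
    shares = [rng.standard_normal(update.shape) for _ in range(n_shares - 1)]
    shares.append(update - sum(shares))
    return shares

# Each party's local update (e.g. a gradient vector)
updates = [np.array([0.5, -1.0]), np.array([1.5, 2.0]), np.array([-2.0, 0.5])]

# Every party splits its update into one share per party
all_shares = [make_shares(u, len(updates)) for u in updates]

# Party i only reveals the sum of the i-th shares it received, so no
# single party's update is exposed; the server sums these partial sums.
partial_sums = [sum(all_shares[p][i] for p in range(3)) for i in range(3)]
aggregate = sum(partial_sums)  # equals the sum of the raw updates
```

The server learns the aggregate needed for federated averaging, while each individual gradient remains hidden behind random masks.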

In summary, some of the planned enhancements and new features are as follows:

| Component | Status in current version | Enhancements/new features planned |
| --- | --- | --- |
| Contribution & Reward | Work in progress | Model reward, in which different parties get a customised model of varying quality based on their data contribution |
| Federation & Federated Grid | Supports neural network based models; supports Horizontal Federated Learning | Support of non-neural network based models (e.g. GBDT); support of Vertical Federated Learning |
| Compute & Storage | Data is managed by a local file system or mounted volume; compute load is handled by a single node | Support of other compute & storage engines, e.g. Spark, Horovod |
| Federation & Federated Grid | No privacy-enhancing technologies (PET) applied | Support of PET, like HE or SMPC, to better protect participating parties’ data |
| Serving | Only parties that contributed to the training can use the federated model | Support of requests from new parties, which did not participate in the federated training, to use the federated model |

 

“Synergos” is a Greek word. The English word “synergy” was derived from “synergos”, which means “to work together” or “to cooperate”. We therefore also invite you to work together with us in this journey. 

Check out our code repositories on GitHub, start using Synergos as a tool in your machine learning toolbox, and contribute your code to the platform. Synergos adopts a modular design, with different components maintained in separate code repositories. Check out the key components like Synergos TTP and Synergos Worker. As a quick start, Synergos Simulator allows you to run all the different configurations of Synergos in a sandboxed environment on your local computer. The user guide is available here.

Do share with us your feedback and suggestions on new features or areas we could improve. Please also join the discussion in our community group.

 


 

The Federated Learning Series

Continental Automotive Singapore Partners AISG to Develop Real-Time AI Solutions to Monitor Driver Heart Conditions and Intelligent Mobility Solutions to Boost Autonomous Vehicle Safety

Continental Automotive Singapore, part of the global leading automotive technology company Continental, and AI Singapore (AISG), Singapore’s national AI programme, today announced three multi-year research initiatives under AISG’s flagship programme, 100 Experiments (100E).  The multinational will conduct in-depth AI-focused research to find solutions to address driver monitoring and autonomous driving challenges. 

Through this partnership, Continental Automotive Singapore will become the first MNC in Singapore to have the three 100E projects under AISG’s AI Engineering Hub programme. 100E aims to solve organisations’ business challenges through the co-development of AI solutions and by helping organisations build their own AI teams.

The three ground-breaking AI research projects which will be spearheaded from Singapore are the following:

  • Project 1: Real-Time Monitoring of Cardiovascular Conditions in Drivers Using Artificial Intelligence
  • Project 2: Artificial Intelligence for Camera-Radar Sensor Fusion in Autonomous Driving
  • Project 3: Adversarial Attack & Defence Assessment for Autonomous Driving

The projects will be led by renowned principal investigators (PIs) from key institutes of higher learning with deep AI capabilities in Singapore namely Duke-NUS Medical School, NUS Advanced Robotics Centre, NUS and Fraunhofer Singapore. 

AISG will fund up to S$250,000 per 100E project for PIs from Singapore’s autonomous universities, A*STAR research institutes or other Singapore-based publicly funded research institutions to work on the organisation’s business challenge. The organisation is then required to match the funding in kind, through AI, engineering, IT and industry expertise, and in cash.

AISG’s 100E programme has had a significant impact on the AI landscape since it was rolled out in 2018: to date, we have embarked on AI innovation projects with more than 70 companies. We are excited to collaborate with Continental Automotive Singapore on these three novel projects, translating R&D outcomes based on expertise in our national AI ecosystem. These projects use cutting-edge AI technologies and I am confident that the joint research teams will deliver on their ambitious targets.

Professor Ho Teck Hua, Executive Chairman of AISG

Continental envisions becoming an AI-empowered company. This cooperation in Singapore is a clear manifestation of its global AI strategy of strategic partnerships and collaboration in the local AI ecosystem. The collaboration with AISG targets AI technology development and application in concrete Continental-specific use cases.

Continental Automotive Singapore, which has been operating in the island republic since the 1970s, has been focused on developing solutions for driverless mobility and next-generation urban mobility solutions. Its technology is represented through automotive head-up displays, cameras and sensors, among other technological innovations.

As a champion of visionary ideas for the automotive industry, we are very pleased to be the first MNC in Singapore to become an AI Engineering Hub under AISG’s flagship programme, 100E. The three research projects target to combine clinical knowledge and mobility technology to protect the health and safety of drivers, passengers, and road users. It is always our aspiration to use AI and future technologies to create a safe, efficient, and accessible environment for everyone on the road.

Lo Kien Foh, President & CEO, Continental Automotive Singapore

Pictures from the Media Briefing

A demonstration related to the first project, on using AI to monitor cardiovascular conditions in drivers, was also conducted during the media briefing. The demo car showcased Continental’s in-cabin sensing solutions, into which the future outcome of Project 1 can be integrated. 

How Can I Help My Users Understand Computer Vision Output Images?

Whenever an image is passed through an AI model for the purposes of identifying and classifying objects, the resulting image may often look messy and difficult to interpret.

Computer Vision

Source: BitRefine group 

This was the case for PeekingDuck, one of our AI Makerspace Bricks. 

PeekingDuck is an open-source, modular framework in Python, built for Computer Vision (CV) inference.

To learn more about PeekingDuck, click here.

For PeekingDuck to be a success, it was essential that end users could make sense of the CV output images and results. These include photographs and video feeds with detection elements and lines superimposed on them to highlight identified objects.

Our User Experience (UX) team sat down and scrutinised the output, coming up with UI improvements to help improve visibility. We made changes based on design principles, to ensure there was a basis for validating these improvements. Before we go into the details of these changes, here are a few key basics you need to know.

 


 

The Basics 

After images/videos pass through PeekingDuck’s machine learning pipeline (as with most CV models), the output will look something like this:

 

You’ll notice three recurring elements: the Bounding Box, Key Points, and Edges. 

 

As shown in the illustration above, the Bounding Box is the rectangle encapsulating the object, and the Key Points are dots that help to outline the shape of the object. The lines connecting these dots are the Edges.

 Now, let’s look at some of the key changes we made and insights we gained. 

 


 

Use brighter colours for bounding boxes, key points, and edges

It is no secret that bright colours capture attention. Hence, use lighter tones and brighter hues for bounding boxes, edges, and key points.

We chose a colour palette with a bright yellow to represent the “duck” in PeekingDuck, and a bright red that matched the branding and colour scheme of AISG. 

PeekingDuck’s Colour Scheme

The yellow was used for bounding boxes and the red was used for key points. This replaced the original colours of neon purple and green which were harsh and reduced readability.

Old Colours (Left) vs. New Colours (Right)

 


 

Increase the borders and widths for bounding boxes, key points, and edges

This may be a no-brainer, but it’s important to include regardless. We found that thickening the borders of the bounding box and the edges, as well as increasing the size of the key points, helped users notice the detected object more easily.

We decided to style all bounding boxes with the PeekingDuck yellow, a bright enough highlight, and increased the border width from 1px to 3px.

 


 

Improve readability of text

Bounding boxes very often have accompanying text labelling the object within the box. The readability of this text depends on factors such as the colours of the background behind it, as well as the many possible objects in the image, which can make the text hard to spot and read.

Instead of changing the text colour to something bright, we decided to add a brightly filled rectangle behind black text. 

Since the background image may be messy and jarring, the filled background will help users read the text better.

 


 

Improve visibility of resulting images

Because there is no way to know what kind of image is being passed through, the image might be packed with many objects, people, and movement, which may make the resulting output messy and hard on the eyes. Compare the following two illustrations: 

It is clear that the bounding boxes in the second illustration stand out more than those in the first. That’s because there are fewer objects in the picture, leaving the main object to be detected front and centre in the foreground. It’s simple. Clean.

In reality, there’s no way to know what kind of image the user will upload, let alone how simple it would visually look. There’s a risk that the image may be packed with other objects, people, and movement, which results in the lines, dots, and boxes adding more “noise” to the output.

So it is critical for the user to be able to see the bounding boxes, edges, and key points easily. To achieve this, we needed to increase the contrast between these elements and the image itself. Because the elements were already brightly coloured, we tried several ways to manipulate the images to make them more “dull”, without losing any necessary detail. 

The two methods that gave the most significant results were reducing saturation and brightness.

Reduce Saturation 

By reducing the saturation of images, we hoped that the reduction in colour would result in any image becoming more monochrome, hence providing a more subtle background for the detection elements to sit on.

Using the Bounding Box as an example, we played around with different levels of saturation, eventually settling on 50% so as not to compromise the integrity of the original colours.

It’s also important to note that this change is only useful when colour is not essential to the use case in which the saturation is lowered.

Reduce Brightness

Initially, we thought that increasing the brightness of the image would make the Bounding Box stand out more, following the general photography intuition that more brightness means better quality.

However, we soon realised it was the opposite: reducing the brightness of images resulted in greater contrast with the Bounding Box.

 


 

Show only the necessary elements

To reduce the “noise” in the output image, it was better to show only the necessary elements instead of all 3 – bounding boxes, edges, and key points. We had to take into consideration each use case and their respective requirements and goals. 

For example, if the model is detecting whether people are keeping a safe distance from each other, displaying either bounding boxes or key points + edges would suffice.

As shown above, the output image on the right looks visually cleaner as it does not have the bounding boxes shown on the left. In future iterations, we plan to include features that allow users to control which detection elements appear, and how many.


 

Improve visibility of legends

The last element we wanted to improve was the Legend that demarcated the zones within the image. The existing legend was difficult to read, with neon text and no context; it looked out of place and, even when spotted, had poor readability.

So we decided to house the legend in its own container with an opaque or translucent background. This ensures the content within the legend is legible.

We also used coloured icons to indicate the zones, instead of using text colour as an indicator. This greatly improved the readability of the text in the legend, and the visibility of the entire element. 

 


 

Next Steps

As PeekingDuck continues to develop and release new features, our design team is committed to improving the user experience and ensuring that each workflow is easy and intuitive for our users.

Stay tuned for the next batch of design insights! 

  

Artificial Intelligence and Work: Two Perspectives

By Steven M. Miller and Thomas H. Davenport


One of the most important issues in contemporary societies is the impact of intelligent technologies on human work. For an empirical perspective on the issue, we recently completed 30 case studies of people collaborating with AI-enabled smart machines. 1 Twenty-four were from North America, mostly in the US. Six were from Southeast Asia, mostly in Singapore. We compare some of our observations to one of the broadest academic examinations of the issue. In particular, we focus on our case study observations with regard to key findings from the MIT Task Force on the Work of the Future report.

MIT established its Work of the Future Task Force in 2018 as an “institute-wide initiative to understand how emerging technologies are changing the nature of human work and the skills required—and how we can design and leverage technological innovations for the benefit of everyone in society.” 2 The task force focused on understanding the current and forthcoming impacts of advanced automation—in particular, artificial intelligence and robotics—on the nature of work, on productivity and jobs, and on labor markets and employment trends. While the MIT task force mostly focused on the situation in the US, their field studies also included visits to German factories, and they extensively reviewed research studies on the workforce and employment impacts of automation—with emphasis on impacts of AI and robotics—from all over the world. They published their final report in November 2020. 3

We address three major conclusions extracted from the MIT task force final report with our case study observations. In the first two areas, the MIT conclusions are entirely consistent with what we found. In the third area we observed some differences between the MIT study and our findings. We conclude with an issue that we did not address in our research, but which is important for business and governmental leaders to address. Quotations colored in blue are directly extracted from MIT Work of the Future task force reports.


Technology Is Not Replacing Human Labor Anytime Soon

The first of these addresses whether technology will replace human labor:

Technological change is simultaneously replacing existing work and creating new work. It is not eliminating work altogether.

No compelling historical or contemporary evidence suggests that technological advances are driving us toward a jobless future. On the contrary, we anticipate that in the next two decades, industrialized countries will have more job openings than workers to fill them, and that robotics and automation will play an increasingly crucial role in closing these gaps.

The MIT report acknowledges that intelligent machines thus far are capable of completing particular tasks, and not of performing entire business processes or even jobs. This reduces the likelihood that large-scale automation of human labor will take place. Indeed, in our case studies, almost all of the organizations involved said that AI and robotics had freed up workers to perform more complex tasks, but human workers had not lost jobs in significant numbers. Many of the companies we profiled were growing (in part because of their effective use of digital and AI technologies), so they needed all their human workers to keep up with growth. It is notable that the MIT task force predicts that over the next two decades, industrialized countries will have more job openings than workers to fill them, even with increasing workplace usage of AI and robotics and other types of technologies. In fact, in their 2019 Task Force report, they stated, “Contrary to the conventional narrative in which automation renders jobs increasingly scarce, we anticipate that, due to slowing labor force growth rates, rising ratios of retirees to workers, and increasingly restrictive immigration policies, over the next two decades industrialized countries will be grappling with more job openings than able-bodied adults to fill them.“ 4

Recent work by the economists Daron Acemoglu and Pascual Restrepo provides evidence that, “Indeed, automation technologies have made much greater inroads in countries with more rapidly aging populations,” and that “the adoption and development of these technologies are receiving a powerful boost from demographic changes throughout the world and especially from rapidly-aging countries such as Germany, Japan and South Korea.“ 5 These findings are directly applicable to Singapore given its population demography.

In 11 of the world’s 12 largest economies, World Bank statistics indicate that fertility rates (births per woman) were well below replacement levels and the proportion of the population age 65 and over was on an increasing trajectory. 6 The inevitable implication is that human labor will be in short supply. This is why the MIT task force report forecasts that neither the US nor the world at large is heading towards a future where there is not enough work for people to do as a result of greater usage of more sophisticated automation. More likely, in the decades to come, most of the world’s largest economies will make even greater usage of AI, robotics and other recent and pre-existing types of automation in order to keep their economic output from shrinking given their slowing or even declining labor force participation rates. The human labor that is available will be indispensable in making this transition.


Organizational Changes from AI Are Happening Gradually

The second conclusion sheds light on the confusing dichotomy between the rapid pace of AI technology development as viewed from R&D and tech start-up announcements and the much slower pace at which organizations are able to absorb and productively harness AI and robotic capabilities. It is described here:

Momentous impacts of technological change are unfolding gradually.

Spectacular advances in computing and communications, robotics, AI, and manufacturing processes are reshaping industries as diverse as insurance, retail, healthcare, manufacturing, and logistics and transportation. But we observe substantial time lags, often on the scale of decades, from the birth of an invention to its broad commercialization, assimilation into business processes, widespread adoption, and impacts on the workforce … Indeed, the most profound labor market effects of new technology that we found were less due to robotics and AI than to the continuing diffusion of decades-old (though much-improved) technologies of the Internet, mobile and cloud computing, and mobile phones. This timescale of change provides the opportunity to craft policies, develop skills, and foment investments to constructively shape the trajectory of change toward the greatest social and economic benefit.

Across our 30 case studies, we also observed that new AI-based systems, their supporting platform and infrastructure, and their surrounding work processes, do not materialise easily or quickly. It takes time for an organization to orchestrate the deep collaborations and complex deployment efforts across the ecosystem of job roles within the company that need to be involved, and also with key external partners (vendors, and sometimes customers). 7

Indeed, new AI developments are proceeding at breakneck speed, but bringing everything together across technology, people, and job roles in any real-world work setting is a very complex, time intensive and iterative undertaking that extends over longer time periods.

The MIT task force elaborated on this slow adaptation process:

As this report documents, the labor market impacts of technologies like AI and robotics are taking years to unfold … in each instance where the Task Force focused its expertise on specific technologies, we found technological change — while visible and auguring vast potential — moving less rapidly, and displacing fewer jobs, than portrayed in popular accounts. New technologies themselves are often astounding, but it can take decades from the birth of an invention to its commercialization, assimilation into business processes, standardization, widespread adoption, and broader impacts on the workforce. 8

The “Productivity J-Curve” phenomenon described by Professor Erik Brynjolfsson and his colleagues 9 provides an explanation for why the observed rate of AI and robotics assimilation within a specific company is a slow and gradual process. In their research brief prepared for the MIT task force, they described the productivity J-curve phenomenon as follows:

… new technologies take time to diffuse, to be implemented, and to reach their full economic potential. For a transformative new technology like AI, it is not enough to simply “pave the cow paths” by making existing systems better. Instead, productivity growth from new technologies depends on the invention and implementation of myriad complementary investments and adjustments. The result can be a productivity J-curve, where productivity initially falls, but then recovers as the gains from these intangible investments are harvested.

Productivity growth is the most important single driver of higher living standards, and technological progress is the primary engine of productivity growth. Thus, it is troubling that despite impressive advances in AI and digital technologies, measured productivity growth has slowed since 2005.

While there are many reasons for this, the most important is that technological advances typically don’t translate into improvements in productivity unless and until complementary innovations are developed. These include many intangible assets such as new business processes, business models, skills, techniques, and organizational cultures. The need for myriad complementary innovations is substantial, especially in the case of fundamental technology advancements such as AI. Yet, these complementary innovations can take years or even decades to create and implement; in the meantime, measured productivity growth can fall below trends as real resources are devoted to investments in these innovations. Eventually, productivity growth not only returns to normal but even exceeds its previous rates. This pattern is called a Productivity J-Curve.

Major improvements in productive capacity require substantial up-front as well as ongoing investments, not only in the direct software and hardware aspects of the technology itself, but in all of the support efforts (the complements and adjustments) required to harness the new capabilities. Of course, there can be situations where a company can use a cloud-based AI application or other forms of packaged AI applications that do not require deep levels of integration with its existing technical infrastructure or processes. In such cases, the time span required to realize benefits could be short, and there may not be much or any productivity J-curve effect. But such situations would have an inherently smaller degree of impact on the company’s productive capabilities exactly because there is no deep integration with or improvements to existing infrastructure and processes.

Each of our 30 case examples was a snapshot at a particular point in time, always after the company had fully deployed and mostly or fully stabilized the system and had already started to realize tangible improvements in productive capacity. Most of the examples we studied were the result of multi-year efforts started years before the moment in time of our case example “snapshot.” For example, while we were preparing our case study on AI-enabled financial transaction surveillance at DBS Bank, the company’s Chief Analytics Officer Sameer Gupta shared with us:

In my view, the reason this effort has been so successful is that it was not just about analytics and AI. The team looked at how they run the entire function of transaction surveillance, transforming how they do this function end-to-end. This transformation has been supported, supplemented and augmented by analytics. But even with the best analytics models, had we not done all the other changes involved in this transformation, we would not have obtained the very impressive results that we ended up achieving. I see this as a successful business transformation that was augmented by analytics.

Sameer Gupta’s comment illustrates how AI system deployments require supporting implementation of many other types of business and organizational adjustments. In two of our case studies, large firms purchased a subsidiary to speed up their journey of capability development: MassMutual’s purchase of Haven Life for digital underwriting and Kroger’s purchase of 84.51° for data science capabilities. Despite acquiring entire organizational units with strong capabilities for creating and using AI-based systems, the two large parent firms still had to go through a multi-year process to integrate both the technical capabilities and the “way of working” capabilities of these newly acquired subsidiaries into their overall ecosystems.

There is no escaping the reality that it takes substantial effort over an extended period of time for a company to make the necessary complementary investments and adjustments – above and beyond the direct investments and efforts required – to assimilate these new technologies in ways that lead to substantial increases in productivity. Senior management in both the private and public sectors overseeing investments in AI and other advanced automation projects need to understand and anticipate the extended time periods required for an organization to make the necessary complementary investments, innovations and adjustments that go beyond just deploying the technology. They also need to anticipate that, in terms of measuring economic returns over time, they will often see a productivity J-curve effect.

But it can be worth the effort. Our 30 case studies document productive capacity improvements in terms of task or process output capacity, quality, or a combination of both.


The Shape of the Future Workforce

The MIT task force addressed not only the issue of whether and when human work will be replaced by technology, but also important aspects of the future workforce. One conclusion in this area highlighted the necessity of cultivating and refreshing worker skills. They stated:

Fostering opportunity and economic mobility necessitates cultivating and refreshing worker skills.

Enabling workers to remain productive in a continuously evolving workplace requires empowering them with excellent skills programs at all stages of life: in primary and secondary schools, in vocational and college programs, and in ongoing adult training programs.

We also found that frontline workers, in order to collaborate effectively with smart machines in their work, needed new skills. However, in contrast to the MIT report, we did not find that those skills had been acquired through “excellent skills programs” sponsored by schools, colleges, and employers. Instead, most of the new skills were acquired on the job, or by employees who were personally motivated to acquire new skills on their own.

Leading higher education institutions have already started to adopt new AI-related skills programs, but many others have not. Likewise, while some progressive employers have internally implemented AI-related skills programs, many have not. As such, the majority of existing employees in most countries are largely on their own to develop these skills. The situation in Singapore is an exception, due to the SkillsFuture national initiative to provide continuing education for the existing workforce, and also due to the AI Singapore educational outreach programs.

The MIT report also does not emphasize the importance of hybridized business and IT skills that we found in many of our case studies. In the context of these 30 case examples, organizations had to deepen their internal capabilities in IT and expand into related areas for digital transformation and data science/AI. Frontline system users had to learn how to work with the systems. Supervisors and frontline managers had to work through the process changes and learn how to manage in the new setting. Technology staff had to hybridize their skills in the direction of business and domain understanding. Business users had to hybridize in the direction of technology capabilities and digital thinking and savviness. In addition, people needed to move into new types of roles which spanned and integrated business and technology (for example, product management, data governance, ethical AI practices).

While both self-motivated learning and IT/business hybridization are not easy to accomplish, they are relatively straightforward for those in the workforce with the highest levels of education (undergraduate and post-graduate degrees). In fact, the MIT task force report shows that in recent decades, at least in US labor markets, those with the highest levels of education have mostly done well. 10

The MIT report emphasizes that augmentation is both a more desirable and more common outcome than large-scale automation. Augmentation is where employers create workplaces that combine smart machines with humans in close partnerships—symbiotically taking advantage of both human intelligence and machine intelligence. Most of our 30 case studies were examples of augmentation, and from what we observed, AI augmentation is largely quite successful. A few of our case studies involved some degree of full automation. Even for these few examples, there was still a need for augmentation in the sense that humans still have to supervise as well as support the continuous improvement of these fully automated tasks or processes, as well as handle special cases and disruptions.

The MIT task force effort included an imaginative and increasingly plausible view of how augmentation can be taken to even higher levels and expand into new types of applications. These ideas come from the task force research brief on “Artificial Intelligence and the Future of Work”. 11 The research brief authors Tom Malone, Daniela Rus and Robert Laubacher emphasize “thinking less about people OR computers and more about people AND computers.” They elaborated as follows:

By focusing on human-computer groups—superminds—we can move away from thinking of AI as a tool for replacing humans by automating tasks, to thinking of AI as a tool for augmenting humans by collaborating with them more effectively. As we’ve just seen, AI systems are better than humans at some tasks such as crunching numbers, finding patterns, and remembering information. Humans are better than AI systems at tasks that require general intelligence—including non-routine reasoning and defining abstractions—and interpersonal and physical skills that machines haven’t yet mastered. By working together, AI systems and humans can augment and complement each other’s skills.

The possibilities here go far beyond what most people usually think of when they hear a phrase like “putting humans in the loop.” Instead of AI technologies just being tools to augment individual humans, we believe that many of their most important uses will occur in the context of groups of humans. As the Internet has already demonstrated, another very important use of information technology—in addition to AI—will be providing hyperconnectivity: connecting people to other people, and often to computers, at much larger scales and in rich new ways that were never possible before.

That’s why we need to move from thinking about putting humans in the loop to putting computers in the group.

While we did not find an explicit objective to put computers in the group in the work settings we described, using technology to attain new levels of collective coordination and intelligence is not at all far-fetched. We already see this occurring to some extent in real-world situations in our Singapore LTA Smart City rail network management case study as well as in our Certis Jewel Changi Airport example (especially the smart operations center role). Both of these examples are in Singapore—an entire city-state economy and society making the future happen now. Over time, we expect to see more examples where smart-machine augmentation happens at the level of teams, departments, and entire business groups and organizations, and not just at the level of individual employees.


A Warning About Polarization of Labor Markets

Our research was case study-based, and did not address long-term economic and labor market issues. But the MIT Work of the Future task force analyzed US economy and labor market trends over prior decades up to the present, highlighting the stark realities of employment polarization and diverging job quality. They spotlighted the decline in the proportion of “middle-skill jobs” in the US labor market and the fact that wages for those in low-skilled occupations have stagnated for several decades. The task force explained the situation as follows: 12

This ongoing process of machine substitution for routine human labor tends to increase the productivity of educated workers whose jobs rely on information, calculation, problem-solving, and communication — workers in medicine, marketing, design, and research, for example. It simultaneously displaces the middle-skill workers who in many cases provided these information-gathering, organizational, and calculation tasks. These include sales workers, office workers, administrative support workers, and assembly line production positions.

Ironically, digitalization has had the smallest impact on the tasks of workers in low-paid manual and service jobs, such as food service workers, cleaners, janitors, landscapers, security guards, home health aides, vehicle drivers, and numerous entertainment and recreation workers. Performing these jobs demands physical dexterity, visual recognition, face-to-face communications, and situational adaptability, which remain largely out of reach of current hardware and software but are readily accomplished by adults with modest levels of education. As middle-skill occupations have declined, manual and service occupations have become an increasingly central job category for those with high school or lower education. This polarization likely will not come to a halt any time soon.

The task force’s observation that US labor market employment polarization has been the status quo situation for over four decades now—and that it is more extreme in the US than in other advanced economies that have experienced positive productivity growth over past decades—led to their three additional conclusions:

Rising labor productivity has not translated into broad increases in incomes because labor market institutions and policies have fallen into disrepair.

Improving the quality of jobs requires innovation in labor market institutions.

Investing in innovation will drive new job creation, speed growth, and meet rising competitive challenges.

These three MIT task force conclusions address economywide issues that were beyond the scope of our more focused set of company-specific case studies. However, we feel these additional national policy-oriented conclusions are important to highlight here because, combined with the other conclusions discussed above, they set the stage for what is perhaps the strongest statement in the task force's final report: 13

Yet, if our research did not confirm the dystopian vision of robots ushering workers off of factory floors or artificial intelligence rendering superfluous human expertise and judgment, it did uncover something equally pernicious: Amidst a technological ecosystem delivering rising productivity, and an economy generating plenty of jobs (at least until the COVID-19 crisis), we found a labor market in which the fruits are so unequally distributed, so skewed toward the top, that the majority of workers have tasted only a tiny morsel of a vast harvest. 14

These conclusions are the foundation of important warnings made by the MIT task force team that need to be heeded by senior managers, C-suite executives and board members in the private sector, as well as by civil servants and elected government officials. Even though their statements are aimed directly at the situation in the US, the threats associated with excluding major segments of the workforce from sharing the fruits of productivity improvement and wealth creation apply to managers and government officials in all countries. The task force final report stated: 15

Where innovation fails to drive opportunity, however, it generates a palpable fear of the future: the suspicion that technological progress will make the country wealthier while threatening livelihoods of many. This fear exacts a high price: political and regional divisions, distrust of institutions, and mistrust of innovation itself.

The last four decades of economic history give credence to that fear. The central challenge ahead, indeed the work of the future, is to advance labor market opportunity to meet, complement, and shape technological innovations. This drive will require innovating in our labor market institutions by modernizing the laws, policies, norms, organizations, and enterprises that set the “rules of the game.”


Conclusion

For our forthcoming book, we focused on case studies of people collaborating with smart machines – examples of successful deployment and usage of AI systems in work settings. We interviewed people who were gainfully employed, highly engaged with all of the technology and process changes that had taken place in their work setting, and, for the most part, enthusiastic about working with or managing the new AI-enabled systems in their workplace. The strength of our case studies is that they provide “real-world” examples, in actual everyday operational work settings, of what it is possible to achieve when people collaborate with smart machines in ways that improve business capabilities.

At the same time, as AI and other forms of advanced automation continue to diffuse across entire economies, there are other aspects of the story. The MIT Work of the Future task force effort provides a broad view of these changes, focusing on the multiple sides of this unfolding journey from an economywide and labor market perspective.




  1. Thomas H. Davenport and Steven M. Miller. The Future of Work Now: People Collaborating with Smart Machines. Forthcoming in 2022 from MIT Press.
  2. This description of the purpose of the MIT Future of Work Task Force is stated on their website homepage at https://workofthefuture.mit.edu/.
  3. David Autor, David Mindell, and Elisabeth Reynolds, “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines,” report published by the MIT Task Force on the Work of the Future, November 2020. We alter the order of presenting the six main conclusions of the MIT task force report.
  4. David Autor, David Mindell, and Elisabeth Reynolds, “The Work of the Future: Shaping Technology and Institutions,” report published by the MIT Work of the Future Task Force, November 2019.
  5. Daron Acemoglu and Pascual Restrepo, “Demographics and Automation”, January 2021. Forthcoming in Review of Economic Studies.
  6. See the World Bank Open Data website at https://data.worldbank.org/. According to their most recent data on GDP in current US dollars, the world’s 12 largest economies were the US, China, Japan, Germany, India, the UK, France, Italy, Brazil, Canada, Russia and South Korea. Statistics on fertility rate (births per woman) and population ages 65 and above (% of total) are available through this website. The only one of the 12 largest economies where the fertility rate was not well below replacement level was India, where it was 2.2 births per woman, and declining.
  7. Thomas H. Davenport and Steven M. Miller, “Working with Smart Machines,” Asian Management Insights magazine, Vol 8 (1), May 2021, Singapore Management University. https://cmp.smu.edu.sg/sites/cmp.smu.edu.sg/files/pdf/5_AMI15_SmartMachine.pdf
  8. Autor, Mindell and Reynolds (2020).
  9. Erik Brynjolfsson, Seth Benzell, and Daniel Rock, “Understanding and Addressing the Modern Productivity Paradox” research brief published by the MIT Work of the Future Task Force, November 2020. A more in-depth analysis and explanation is given in Erik Brynjolfsson, Daniel Rock, and Chad Syverson, “The Productivity J-Curve: How Intangibles Complement General Purpose Technologies,” American Economic Journal: Macroeconomics, Vol 13 (1), January 2021.
  10. Autor, Mindell and Reynolds (2020), Section 2, Labor Markets and Growth; and Autor, Mindell and Reynolds (2019), Section 2, The Paradox of the Present, Section 3, Technology and Work: A Fraught History, and Section 4, Is This Time Different?
  11. Thomas W. Malone, Daniela Rus, Robert Laubacher, “Artificial Intelligence and the Future of Work” research brief published by the MIT Task Force on Work of the Future, December 2020.
  12. Autor, Mindell and Reynolds (2020), Section 2.3, Employment Polarization and Diverging Job Quality.
  13. Autor, Mindell, and Reynolds (2020), Introduction.
  14. Autor, Mindell and Reynolds (2020) go on to explain in their introduction, “Four decades ago, for most U.S. workers, the trajectory of productivity growth diverged from the trajectory of wage growth. This decoupling had baleful economic and social consequences: low paid, insecure jobs held by non-college workers; low participation rates in the labor force; weak upward mobility across generations; and festering earnings and employment disparities among races that have not substantially improved in decades. While new technologies have contributed to these poor results, these outcomes were not an inevitable consequence of technological change, nor of globalization, nor of market forces. Similar pressures from digitalization and globalization affected most industrialized countries, and yet their labor markets fared better.”
  15. Autor, Mindell, and Reynolds (2020), Introduction.


Building Platforms and Products at AI Singapore

As one of the original four engineers at AI Singapore, Najib Ninaba has left a deep imprint on the way things are run here. His current role in leading Platforms and Product Engineering continues to provide ample opportunities to define the way projects are delivered in the future. All this and more in my conversation with him.


Below is a transcript of the conversation [*].



Hi Naj, great to have you with us today.

Hi Basil. Good to be here.

Naj, you play a very important role here in AI Singapore, but before we come to that, could you share with us your personal journey in the earlier part of your career?

Sure. After I finished my full-time national service, I joined a Linux company. I was always interested in Linux, even back in secondary school. I came across this book about the X Window System. Back then we were using Windows 3.1. I didn’t really like the way it looked and the way Microsoft was doing things, so I started to dabble in Linux. That was how I got into it, all the way through polytechnic. I felt quite confident doing Linux systems.

There was an opportunity after national service. I saw this Linux company in Singapore called eLinux Systems and I went in for an interview. Laurence Liew, who is our Director for AI Innovation at AI Singapore, was the hiring manager. He found that I was suited for a junior role and I joined as a junior system engineer. It turned out that eLinux Systems wasn’t just doing Linux systems for office backends and things like that. Laurence was deep into high performance computing (HPC) in Singapore, and he threw me into the deep end of HPC. We went on to build some of the first-generation clusters in Singapore. This was way back in early 2000 – 2001. So some of the first-generation high performance clusters at NUS, NTU and several other institutes might well have come out of that work.

Back in 2000 – 2001, the dot-com bubble burst and eLinux Systems was disbanded, but Laurence still valued the engineering team that was part of eLinux, so he brought us into Singapore Computer Systems (SCS), where we continued doing HPC contracts and things like that. Doing HPC was similar to how cloud systems are right now: we were managing a lot of systems, clusters and racks of systems working as one. The setup back then was almost unmanageable, I would say, because you are talking about one rack containing like thirty servers. I actually had to go server by server with a boot disk, and it got very tiring and boring.

I went around looking for an open source Linux toolkit to manage high performance clusters and I came across this HPC provisioning toolkit called Rocks, which came out of the San Diego Supercomputer Center (SDSC). I played around with it and it worked really well. That was one of my first forays into becoming an open source project committer. We found the toolkit was lacking certain components, particularly the packaging of an HPC job scheduler, and Laurence encouraged me to contribute this work back to the SDSC team. They liked it well enough to get me to be part of their so-called core committer group. This was just three guys at the San Diego Supercomputer Center plus myself. They had thought that somebody out of California would join them, but never did they expect somebody from Singapore, from Bedok, to join and become a core committer, so that was fun. That became part of our software stack. It really drilled into me the importance of having a cohesive platform stack to be able to deploy such systems.

In Singapore, we began to do more and more of these HPC projects and within SCS there was an internal competition and we won an innovation award and we got some money. With that money, Laurence and myself stepped out of SCS and co-founded a start-up based on high performance computing called Scalable Systems. You see elements of it even now at AI Singapore where the software stack is built on top of open source and we deliver value on top of it. This was like a true start-up adventure – late nights of coding, working with a small engineering team. Myself, I had to do both project management and engineering management. We continued to work very closely with the San Diego Supercomputer Center folks. We went to the US several times, we went for a supercomputing conference, a big HPC conference back then. We even had a US presence there and the SDSC team supported us really really well.

By 2006, we caught the eye of an HPC-focused group, Platform Computing, based in North America. They made an offer to acquire us. After looking into what the acquisition really meant for us and talking to our SDSC collaborators, we made the move. That was my first understanding of what it means for a start-up to be acquired. Leading up to that acquisition, Laurence and I went up to Toronto in Canada, where we met with the CEO and CTO and talked about the things we would be doing. I became like an engineering manager within Platform Computing. So that was my first foray into engineering management.

Platform Computing back then was pretty big. It had offices in Europe as well in Asia. Within Asia they had development teams in Beijing and Shanghai and they were looking at us in Singapore to be like the anchor point for businesses around Asia. They gave us a budget and we built up a very strong engineering team. One of the interesting things was how we actually interviewed people into the engineering team. We set up a half-day workshop and got people to come in and work with the team. That became the blueprint for how we are doing the AI Singapore AI Apprenticeship Programme where we invite interested folks to join in. So that blueprint came from way back then.

I left two years before Platform Computing got acquired by IBM. If I had stayed, it would have been the second time I experienced an acquisition. But before I left I made sure the team leads I was mentoring could step up and deliver. So that was another thing we’ve brought into AI Singapore: we understand the strong need to have really good mentorship, even down to the junior engineers – the strong sense of mentoring, of being team leads. Of the engineering team acquired by IBM, many moved on to some of the bigger tech companies out there – Red Hat, Google and HP.

I went out and did my own freelance consulting. Laurence stayed on with IBM, and after a while he reached out to me again. He said, hey, let’s do another start-up, and we went on to do a data science and analytics company called Revolution Analytics Singapore. There was a Revolution Analytics in the US and they wanted to set up an office here. So we tried to replicate what we did at Platform Computing, building up the engineering within Revolution Analytics. That was where we worked with folks like William (Tjhi), a (team) head within AI Singapore.

Again, Revolution Analytics got acquired by Microsoft to be their advanced analytics arm. The team I was managing wasn’t as big as at Platform Computing, but it was fun and very talented. It was around then that I realised data science and big data were on the rise – there was so much momentum going on. It was a short stint: I was there from about 2012 to 2014, two years, and after that I stepped out to do my own consulting on the side. I also did some freelance work with the San Diego folks I used to work with.

Again, in 2016 Laurence reached out to me and told me he was stepping out of Revolution Analytics. At that point, he had also realised that data science, big data and AI were starting to become quite important here in Singapore. So we co-founded a new start-up called Real Analytics, focusing on data analytics and data engineering. We were really running lean – just a two-man operation, myself and Laurence. We did a lot of consulting work, even in Malaysia, where Microsoft approached us to conduct training on their big data stack for Telekom Malaysia. So we actually spent some time in Malaysia doing consulting work.

Around 2017, we got a training contract with NUS SCALE (School of Continuing and Lifelong Education). They got us on board to deliver training on data analytics and data engineering, and I taught a few courses around reproducible data science, data engineering and chatbots – it was pretty fun. At that time, Laurence was also sowing the seeds of AI Singapore. AI Singapore, as you know, got started in 2017. Laurence reached out to me and I came on board in January 2018.

A very exciting journey spanning both hardware and software, across Asia and North America, working in both start-up cultures and big organisations, in engineering and in management, delivering code and delivering training. So, with the vast experience that you are bringing into this current chapter here in AI Singapore, I think it’s going to be equally exciting. I had the good fortune of joining AI Singapore late in 2018. What was it like, even before that?

Right, so I joined in January 2018. Back then it was lean. I was one of the original four engineers within AI Singapore. We started with four engineers – myself, William, Jean and Maurice. And Ken, the originator of TagUI, joined soon after. It was pretty challenging. Doing AI was new to our customers and collaborators. For us, the engineering team, the tooling was still very rough, I must say. The machine learning problem statements we saw back then were, to me, still data science in disguise – not so much machine learning – but we did have some project sponsors with really innovative use cases.

We were four engineers, with Ken focusing on TagUI, and Laurence realised something very early: to work on one hundred projects, we really needed good AI-trained engineers, especially Singaporean engineers, because some of the project sponsors required them. If we could not find them, we would go and groom and train these folks. So that was how AIAP came about. And because we had experience building engineering teams, we brought those principles in – like how to attract, identify and hire talent.

One of the things I want to highlight is that back in the day, AI Singapore wasn’t where it is now. Now we are situated in innovation 4.0, but before that we were in NUS UTown, in the same building as NRF. When innovation 4.0 was ready we moved there, and we then really had the space to get apprentices in, to house them and train with them. When we did the first batch, it was only thirteen folks, if I’m not wrong. This was a learning experience on both sides, both for the AI mentors, which were the four of us, and the AIAP folks who joined. They sat with us within our office and we had to juggle mentoring the folks solving the problem statements as well as continuing to innovate on our own things, so that was really challenging. But it went well and we had some really good and supportive project sponsors with us on this journey. I can say that we really started to scale once we started doing AIAP batches 2 and 3. Basil, you came out of batch 2, right? I’m sure you can appreciate how back then we were a bit rough. I think it was only after some of the batch 2 folks started to join us that there was some kind of momentum, and we went very fast to where we’re at now. So, I think that was quite amazing; we had that going for us.

I want to highlight the infrastructure used to support this. Back then, when it was just the four of us, it was really out of our laptops. We did have some workstations and servers, but we pretty much utilised everything under Microsoft Azure as our cloud infra. But we realised, doing deep learning, that it was not going to be cost-effective to continue doing this. That was when we went ahead and built up our own on-premise infrastructure. Building on my past experience with HPC clusters, we put up a tender and, together with my platforms team, we set up this AI HPC cluster that is running even now. So now we actually have a mix of infrastructure, both on the cloud as well as on-premise. We commissioned that cluster back in 2019.

Yeah, as you mentioned, I was part of the journey from batch 2 onwards, so I certainly witnessed this taking-off phase. I was aware of some of the challenges that came along. Could you maybe go a little deeper and share with our listeners?

Sure. Some things never change. Back then, some of the 100E problem statements tended to be more like data science, with some of them going towards deep learning and CV applications … but the lack of data, back then and even now, is still a problem. Project sponsors came in wanting to do machine learning projects with us, but then they realised – and we realised – that there wasn’t enough data. That was a challenge. Oftentimes our project teams had to wait, or simulate or build up their own data set, until the project sponsor provided the data. More often than not it was really the lack of data stopping us. So that was one big challenge.

Another thing is that, as we moved from batch to batch and more sophisticated problem statements came about, we started taking on more deep learning projects. They require more resources, particularly GPUs. That was what led to building up our on-premise AI cluster. Even now, with quite a fair bit of on-premise infrastructure, we still don’t have enough accelerators to run these kinds of projects, so that’s something on the roadmap – we want to acquire more hardware accelerators.

We also see that the old ways of doing things, at least in terms of the tech stack, are evolving. Back then, you had data sitting in a server – your typical big data setup. But for AI you really need lots of data and processing power to develop your models. The old ways of running clusters and servers, I won’t say they’re not relevant, but newer ways of doing infrastructure have come up, particularly around Kubernetes, containers and container orchestration. In fact, there is so much of this infrastructure tooling now that you’re spoilt for choice. This is where our platforms team comes in: we maintain our own stack, and we regularly get consulted on the infrastructure and deployment for each of these projects. Oftentimes we say, okay, go for this default stack and these well-proven techniques for deploying your models …

They are becoming more informed …

Yes, project sponsors these days are getting more cloud-infra-savvy too, which is a good thing. If you remember, back when I set up the cluster, it was all by hand. The advent of cloud infrastructure can also be a bit of a challenge – your data is here, your compute is there, how do you manage all of that? But now we have all this modern infra tooling, which makes things a little easier.

As an insider, I’m also aware of some of the in-house platforms that we are building – like Kapitan Workspaces, Kapitan Scout and Kapitan Intuition. What are the problems they are intended to solve?

As we worked with several project teams, we found it getting more challenging to support them, because different projects were running different sets of tooling. And for the AIAP we don’t provide laptops – apprentices bring their own devices but still access our infrastructure, whether on-premise or on the cloud. We recognised that there needed to be consistent tooling and a consistent interface so that they could work on their problem statements regardless of the state of their hardware – no hardware discrimination. The idea of a consistent development environment really spoke to us. Back in my HPC days, when researchers worked on those clusters, they would typically SSH into the server, where they had their own directory in which they could install stuff. We brought that idea in, and it led to what we call Kapitan Workspaces, where we provide consistent development environments to users. All they need is access to our infrastructure, an SSH client and a modern browser. With those, they can log in, spin up a Kapitan Workspace, provision a Jupyter Notebook or JupyterLab, run Visual Studio Code in the browser and do their work, all backed by our powerful AI cluster. So even with a Chromebook, as long as you have a modern browser, an SSH client and the VPN to access our infrastructure, you are good to go. And we took it a step further for VS Code users: with the Remote - SSH extension, they can continue to use their local editor while SSHing into our infrastructure and leveraging the powerful backend, but still utilising their local setup. That’s basically Workspaces – consistent tooling and access to our cluster.
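[The bring-your-own-device access pattern described here boils down to an SSH port forward in front of a browser-based tool like JupyterLab. The sketch below builds such a forwarding command; the hostname and ports are hypothetical, and actual Kapitan Workspaces provisioning is of course more involved.]

```python
def build_tunnel_cmd(host: str, remote_port: int = 8888, local_port: int = 8888) -> list[str]:
    """Build an `ssh -N -L` command that forwards a remote JupyterLab
    port to the local machine, so it can be opened in a local browser."""
    return [
        "ssh",
        "-N",                                            # no remote command, tunnel only
        "-L", f"{local_port}:localhost:{remote_port}",   # local port -> remote port
        host,
    ]

if __name__ == "__main__":
    # Hypothetical cluster login node; running this command (e.g. via
    # subprocess.run) would let the user browse to http://localhost:8888.
    print(" ".join(build_tunnel_cmd("workspace.example.org")))
```

[With the tunnel up, everything heavy runs on the cluster; the laptop only needs the SSH client and a browser, which is the point of the Chromebook example.]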

Kapitan Scout came about when we saw that, across projects, there were issues with how we go about deploying models. As you know, our AIAP apprentices come from various backgrounds. Some may not have a computer science or computer engineering background but are really strong in machine learning and statistics, while on the other side we have folks really strong in computer science but not in statistics. Kapitan Scout is basically our stack for what happens once your model has been trained. You bring it into Kapitan Scout, which is powered by technologies such as Seldon Core, Grafana and Prometheus. It provides a consistent interface for bringing your model in – a consistent API endpoint – where you can check the health of the model and apply deployment strategies like A/B testing or multi-armed bandits in a consistent manner. As long as you have the necessary CI/CD snippet in your tooling of choice and place your models in a well-known manner, the backend will call Scout to package the model as a Docker container and deploy it to an API endpoint. You are then able to monitor not just model metrics but also operational metrics, like how many requests are coming in. That’s what Scout aims to solve: helping our project teams deploy their models more easily and robustly.
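[As a rough illustration of the deployment strategies mentioned – A/B testing and multi-armed bandits – here is a minimal epsilon-greedy router sketch. The variant names and reward signal are hypothetical; in practice Seldon Core ships production-grade versions of these routers, and this is not Scout’s actual code.]

```python
import random

class EpsilonGreedyRouter:
    """Route requests to the model variant with the best observed reward,
    exploring a random variant with probability epsilon."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        # per-variant request count and cumulative reward
        self.stats = {v: {"n": 0, "reward": 0.0} for v in variants}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))  # explore
        # exploit: highest mean reward so far (unseen variants count as 0)
        return max(self.stats,
                   key=lambda v: self.stats[v]["reward"] / max(self.stats[v]["n"], 1))

    def record(self, variant, reward):
        """Feed back a reward signal (e.g. click-through, correctness) for a variant."""
        self.stats[variant]["n"] += 1
        self.stats[variant]["reward"] += reward
```

[Setting epsilon to 0 degenerates to a pure champion/challenger split on observed reward; a fixed random split would instead give classic A/B testing.]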

Kapitan Intuition is internal tooling for our AI infrastructure. Because we have a lot of infrastructure – Microsoft Azure, our on-premise infra – we needed a way to monitor all of it in a consistent manner. And because we are AI Singapore, we want to be able to use AI to help manage and operate our own stuff. Intuition aims to do things like predictive analytics around which servers are going down: before they go down, can you give me a window, based on historical context, of which services are likely to fail? That’s what Intuition aims to do …

Like a predictive maintenance problem …

Right, we want to evolve to a point where we can do preventive maintenance – alerting the operators before things even go down. Intuition right now is being built for our internal infrastructure, but we are also building our own data centre together with NUS and NSCC, so we are hoping to bring Kapitan Intuition to bear in a data centre context. That’s going to be pretty exciting, and we’re definitely looking for folks to help us along on the journey.
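[The predictive window idea can be sketched as a toy health check: watch a per-server metric and flag hosts whose recent rolling mean crosses a threshold, before an outright failure. The hostnames, metric and threshold below are made up for illustration and are not Intuition’s actual models.]

```python
from statistics import mean

def at_risk(samples, window=3, threshold=90.0):
    """True if the rolling mean of the last `window` samples of a health
    metric (e.g. temperature or error rate) exceeds the threshold."""
    if len(samples) < window:
        return False  # not enough history to judge
    return mean(samples[-window:]) > threshold

def flag_servers(metrics, window=3, threshold=90.0):
    """metrics: {hostname: [metric samples over time]} -> hosts to alert on."""
    return [host for host, series in metrics.items()
            if at_risk(series, window, threshold)]
```

[A real system would learn thresholds and failure windows from historical incident data rather than hard-coding them, but the alert-before-failure shape is the same.]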

Wow, so many things mentioned, but I think we still have one more, which is Brickworks Gallery. This one is pretty exciting, at least to me, because it is about the actual projects we have done. Could you tell us what it is about?

Sure. Brickworks Gallery came out of my product engineering team – I actually manage two things, platforms engineering and product engineering. Product engineering is meant to build tooling to enable and accelerate the building of machine learning apps; we come up with best practices and various toolkits. One of the first projects from this group is Brickworks Gallery. It came about because we have had seven batches of AIAP by now, and each batch worked on a set of 100E and internal projects. The AI engineer mentors move on from batch to batch, from project to project, and we realised that knowledge was being lost along the way – the papers they worked on, the techniques, the algorithms, the demos. Some of this is recorded, because we have a great project management process where the assets of each project are captured, so we know what they are. But from an engineering team’s point of view, there wasn’t an easy way to figure out, hey, have we done something around CV, like object recognition, or have we done things around predictive maintenance? Previously, folks had to go around asking who did what in which project, and that relied on memory, so things get forgotten. We realised that was a gap. So we created an internal portal that extracts certain project information for engineers to see, and we also provide an interface where the mentors themselves can update things – citation references, the papers they have read, the techniques they have used, pointers to the project’s GitLab repos, demos, presentations and marketing materials – in a systematic manner.

So, it’s like a knowledge capture kind of tool, right?

It’s a knowledge base, essentially. We have catalogued every 100E project from batch one all the way to now, and we have automated some of the processes. Once a project has ended, a backend process extracts its information into Brickworks Gallery, and we just need the mentors to come in and fill in some missing gaps – though even that we are looking to automate in the future. So it’s quite a cool project and something that I believe is much appreciated within the engineering team.
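[The kind of queryable catalogue described here – "have we done object recognition before?" – can be sketched in a few lines. The project entries and field names below are entirely hypothetical, not Brickworks Gallery’s actual schema.]

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    """One catalogued 100E or internal project (illustrative fields only)."""
    name: str
    techniques: list = field(default_factory=list)  # e.g. ["object detection"]
    papers: list = field(default_factory=list)      # citation references
    repo: str = ""                                  # pointer to the GitLab repo

def find_by_technique(catalogue, technique):
    """Answer 'have we done X before?' by case-insensitive substring match."""
    t = technique.lower()
    return [p.name for p in catalogue
            if any(t in x.lower() for x in p.techniques)]
```

[The value is less in the lookup itself than in having mentors record techniques, papers and repo pointers in one structured place instead of in memory.]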

Ya, considering the number of projects we have already delivered, this certainly justifies building up such a gallery for reference. Well, at the risk of blowing our own trumpet at AI Singapore, I think this is all very great work. What else can we expect in the future?

I think there’s going to be a lot more focus on MLOps and AI ethics. We will be infusing a lot more MLOps engineering into our 100E projects. What that means is that we want to engage our project sponsors earlier to figure out whether they actually have the resources for when the project has ended and they have taken over – to make sure they have the necessary systems in place, and if not, that’s when we can intervene and help them. We are also launching new initiatives. Some of us, including myself, are part of the AI technical committee, where we are driving new standards. A lot of this is driven by what we’ve seen over the past three years of doing 100E projects – we’ve seen a lot of problem statements and a lot of challenges – so I’m hoping to bring that experience in so that the community as a whole can benefit.

We are also doing something quite new out of product engineering, focusing on user experience best practices for machine learning. What that means is that we look at common workflow steps, like how you do model training or data preparation. There is plenty of tooling from open source projects and commercial vendors, but when you start building a machine-learning-infused application, you often need to build your own UI. As I mentioned, our engineers and apprentices come from diverse backgrounds, so they may not have the necessary grounding in UX best practices. So we are coming out with things like Brickworks Facade, a set of best practices and principles around UI design and user experience – for things like how you present model training and model deployment – to make sure the applications they deliver follow these human-first best practices.

In terms of platforms, as I mentioned, we are building our own data centre together with NSCC and NUS. We are getting more diverse hardware and more AI accelerators, so we will be beefing up our GPU counts, to the delight of our project teams. We are also looking into other AI accelerators like TPUs and FPGAs, and the new processors coming out from Nvidia, Intel, AMD and others. We have also expanded our cloud infrastructure: we were using Microsoft Azure and continue to use it, but now we also use GCP – Google Cloud Platform. Two cloud infrastructures and an expanded on-premise infrastructure are what the engineering team and our AI Singapore collaborators can expect.

So, from the experience gained over the last three years or so, working with different industry verticals on real-world problems, you’ve come to understand the actual needs when it comes to implementation and deployment – what works well and what doesn’t – and responded with solutions to meet those needs. Thanks a lot for today’s sharing.

No problem. Happy to be here.




[*] This conversation was transcribed using Speech Lab. The transcript has been edited for length and clarity.
