Teaching AI to think like humans
Anticipating how others will act is a critical human skill that researchers are working to transfer to robots.
Communication—what information to convey, when, and how to do so—is a key skill for robots to have if they are to effectively assist humans in their tasks.
Neural networks are complex black-box models, which raises persistent questions: How do they actually work? Can we trust them? We seek answers to those questions here.
Machine learning models are vulnerable to manipulation. Regardless of the source of the bad data, adversarial machine learning usually leads to the same outcome: a decline in prediction performance or, in some cases, far more serious consequences.
What’s the secret behind an AI model churning out good predictions (anything from figuring out the clothes you like to whether you’d make a good mortgage loan applicant)? Having good-quality data. That’s easier said than done, however, as datasets tend to be inherently noisy.
Don’t be fooled by ChatGPT’s voracious appetite for data, warns A*STAR’s Zhang Mengmi. Such AI models are prone to ‘catastrophic forgetting’ — remembering only what’s currently being taught (Biden yes, Trump who?).
Arabic saying (circa 1970): “Four things come not back — the spoken word, the sped arrow, the past life, and the neglected opportunity”. Fast forward to the 21st century and add this to the list: personal data. However, researchers like NTU’s Dr Yu Han are working to build AI models that preserve your privacy.
AI systems are getting smarter by the day. But can they make us humans more intelligent too? SMU’s Pradeep Varakantham thinks so — find out how he’s combining AI and reinforcement learning to generate personalised training systems in our latest feature.