Anticipating the behaviour of others comes naturally to us; an experienced driver, for example, can often predict what other road users will do. This ability is known as Action Anticipation: recognising the actions of others before they occur in the immediate future.
While this skill is natural to humans, developing a computational approach that replicates it remains a challenge. Transferring this ability to machines would allow robots to react quickly by anticipating human actions, much as humans do. A robot's capacity to understand what a person might do in the immediate future is essential for the development of assistive robotics in domains such as manufacturing and healthcare. The objective of this project is to investigate a novel approach to anticipating human actions, specifically one to five seconds before an action happens, using visual information in human-robot engagement scenarios.