Indexing and retrieval of multi-viewpoint surveillance videos

Project Reference :


Institution :

National University of Singapore (NUS)

Principal Investigator :

Professor Roger Zimmermann

Technology Readiness :

6 (Technology demonstrated in relevant environment)

Technology Categories :

AI - Machine Learning - Information Retrieval

Background/Problem Statement

Modern video surveillance systems often consist of multiple cameras continuously capturing video. However, existing systems do not fully take advantage of the scene semantics shared across multiple video streams, so querying the captured videos to find an event or object of interest can be a time-consuming task.


The technology is a system, running on one or a few lightweight commercial-grade GPUs, that indexes surveillance videos into a reduced data representation so that users can issue a simple natural-language query with a productisable level of relevance.

The proposed solution is a compact representation of the objects and their relationships in the scene under surveillance, over time and across multiple cameras, using a combined global spatio-temporal scene graph. Objects that appear in multiple cameras are stored only once, reducing the storage required and speeding up queries compared with storing a separate scene graph per camera.
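The idea of a combined global scene graph can be illustrated with a minimal Python sketch. This is not the project's actual implementation; the class and method names (`GlobalSceneGraph`, `observe`, `relate`, `query`) and the flat relation tuples are illustrative assumptions. The key point it demonstrates is that a re-identified object observed by several cameras is stored as a single node, with per-camera sightings attached as timestamped observations.

```python
from dataclasses import dataclass, field

@dataclass
class GlobalObject:
    """One node per real-world object, shared across all cameras."""
    obj_id: int
    label: str                      # e.g. "person", "car"
    observations: list = field(default_factory=list)  # (camera_id, frame) pairs

class GlobalSceneGraph:
    """Merges per-camera detections into one graph: each object is stored
    once, and relations ("near", "holds", ...) are timestamped edges."""

    def __init__(self):
        self.objects = {}           # obj_id -> GlobalObject
        self.relations = []         # (subject_id, predicate, object_id, frame)

    def observe(self, obj_id, label, camera_id, frame):
        # A re-identified object reuses its existing node instead of
        # creating a duplicate node per camera.
        node = self.objects.setdefault(obj_id, GlobalObject(obj_id, label))
        node.observations.append((camera_id, frame))

    def relate(self, subject_id, predicate, object_id, frame):
        self.relations.append((subject_id, predicate, object_id, frame))

    def query(self, label, predicate):
        """Toy lookup in the spirit of a query like 'person near car'."""
        return [(s, p, o, t) for (s, p, o, t) in self.relations
                if p == predicate and self.objects[s].label == label]

g = GlobalSceneGraph()
# The same person (id 1) seen by two cameras is stored as one node.
g.observe(1, "person", camera_id="cam_a", frame=10)
g.observe(1, "person", camera_id="cam_b", frame=12)
g.observe(2, "car", camera_id="cam_a", frame=10)
g.relate(1, "near", 2, frame=10)
print(len(g.objects))              # 2 nodes, not 3
print(g.query("person", "near"))   # [(1, 'near', 2, 10)]
```

Because queries run once against the merged graph rather than once per camera, query cost grows with the number of distinct objects and relations rather than with the number of cameras.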

Link to publication:


  • Experiments show that in a 5-camera system, querying is sped up by a factor of 3.9
  • Optimised cross-camera tracking algorithms minimise both the amount of data sent to the server and the computation needed
  • More than 99% savings in bandwidth and 50% savings in processing time, with little performance loss
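The bandwidth saving from edge-side tracking can be sketched as follows. This is a hypothetical illustration, not the project's actual algorithm: the function names (`iou`, `edge_summaries`) and the IoU-based frame-to-frame association are assumptions. It shows the principle that a camera which groups per-frame detections into tracks only needs to upload one compact record per track, instead of one record per frame.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def edge_summaries(per_frame_detections, iou_threshold=0.5):
    """Associate each frame's boxes with the previous frame's tracks by
    IoU, then emit one compact summary per track. Upload volume scales
    with the number of tracks, not the number of frames."""
    tracks = []   # each: {"first": frame, "last": frame, "box": last_box}
    for frame, boxes in enumerate(per_frame_detections):
        for box in boxes:
            for t in tracks:
                if t["last"] == frame - 1 and iou(t["box"], box) >= iou_threshold:
                    t["last"], t["box"] = frame, box   # extend existing track
                    break
            else:
                tracks.append({"first": frame, "last": frame, "box": box})
    return tracks

# 100 frames of one slowly moving object -> 1 summary, not 100 records.
frames = [[(i, 10, i + 50, 60)] for i in range(100)]
print(len(edge_summaries(frames)))   # 1
```

In this toy case the camera uploads a single track summary in place of 100 per-frame detection records, which is the kind of reduction behind the reported bandwidth savings.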

Potential Application(s)

All industries that use video management systems and/or surveillance-related hardware and services, e.g., banking and building security.

We welcome interest from industry for collaboration, co-development, or customisation of the technology into a new product or service. If you have any enquiries or are keen to collaborate, please contact us.