
PeekingDuck V1.2 – A Major Update!

The PeekingDuck team is excited to share our latest v1.2 release! Our model zoo has been expanded with new object tracking, crowd counting, object detection, and pose estimation model nodes. We have also revamped our documentation, making information easier to find and adding tutorials to guide users. Useful nodes for tasks such as aggregating statistics and image augmentation have also been introduced. Lastly, to capture the value of using PeekingDuck, we have given it a tagline: low-code, flexible, extensible.

PeekingDuck: Low-code, Flexible, Extensible

New Models

PeekingDuck already offers a variety of object detection and pose estimation model nodes. In this release, we are introducing two new categories of models to PeekingDuck – object tracking and crowd counting.

What is object tracking? In object detection, the model cannot tell that a person detected in one video frame is the same person in the next frame. Object tracking solves this limitation by assigning a unique ID to each initial detection, and tracking these detections as they move around in subsequent video frames. This is demonstrated in the GIF below, where the IDs above each person are consistent over time – hence each object is being “tracked”. Our model.jde and model.fairmot nodes are able to track people, allowing the total number of unique passers-by to be counted over time. This reduces dependency on manual counting and can be applied to areas such as retail analytics, queue management, or occupancy monitoring.

People tracking

As mentioned earlier, low-code is one of PeekingDuck’s strengths – this is achieved by PeekingDuck’s modular system of nodes. Different use cases can be tackled simply by specifying the required nodes in a config file, within just a few lines! The scenario shown in the above GIF was the result of using the config file below, where input.visual was used to read from the source video, model.jde performed object tracking, dabble.statistics counted the total number of people, and the other nodes drew the results and displayed them on screen. More details about how to use PeekingDuck and its nodes can be found in the links at the end of this article.

Config file for counting people over time
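While the original config file is shown only as an image above, a PeekingDuck pipeline config of this kind might look like the sketch below. This is an illustration, not the verified v1.2 syntax: the source path and the dabble.statistics expression are assumptions, so consult the official documentation for exact parameter names.

```yaml
# Illustrative pipeline config for counting unique people over time.
# Parameter names and values are assumptions; check the PeekingDuck docs.
nodes:
  - input.visual:
      source: people_walking.mp4   # hypothetical source video
  - model.jde                      # object tracking: assigns an ID to each person
  - dabble.statistics:             # highest ID seen so far = total unique people
      maximum: obj_attrs["ids"]    # expression syntax is an assumption
  - draw.bbox                      # draws bounding boxes around detections
  - draw.tag                       # draws the tracking ID above each person
  - draw.legend                    # displays the running count on screen
  - output.screen
```

The declarative style is what makes PeekingDuck low-code: swapping a node in this list changes the pipeline's behaviour without touching any Python.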

We have also created a flexible dabble.tracking node that uses heuristics for tracking. It is not limited to tracking people – when paired with our object detection nodes, it can track one of 80 different types of objects including vehicles or animals. When applied to vehicles in the GIF below, it can count the total number of unique vehicles passing by in a period of time, aiding transportation planning by identifying periods of peak traffic.

Vehicle tracking
Vehicle tracking config file
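A hedged sketch of such a vehicle-tracking pipeline is shown below. The detection class list and the statistics expression are illustrative assumptions rather than verified v1.2 parameters.

```yaml
# Illustrative config: count unique vehicles with detection + heuristic tracking.
nodes:
  - input.visual:
      source: highway.mp4          # hypothetical traffic video
  - model.yolo:
      detect: ["car", "truck", "bus", "motorcycle"]   # assumed parameter form
  - dabble.tracking                # heuristic tracker: works on any detected class
  - dabble.statistics:
      maximum: obj_attrs["ids"]    # cumulative count of unique tracked vehicles
  - draw.bbox
  - draw.tag
  - draw.legend
  - output.screen
```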

Crowd counting is the other new category of models in this release, targeted at counting large crowds numbering in the hundreds or thousands. Object detection models perform poorly in such scenarios due to occlusion, where people block one another. Our model.csrnet node uses a different, density-based approach that circumvents this problem, giving a much better estimate of crowd size.

Crowd counting
Crowd counting config file
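A crowd counting pipeline needs no tracking or detection nodes at all; a minimal sketch might look like the following. The draw node name and source are assumptions for illustration.

```yaml
# Illustrative config: estimate crowd size with a density-based model.
nodes:
  - input.visual:
      source: stadium_crowd.jpg    # hypothetical crowd image
  - model.csrnet                   # density-based crowd counting
  - draw.heat_map                  # assumed node name; overlays the density map
  - draw.legend                    # displays the estimated crowd count
  - output.screen
```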

Lastly, we have bolstered the ranks of our object detection and pose estimation models with newer, improved models. The anchor-free model.yolox object detector has almost twice the accuracy of our existing model.yolo when comparing the “tiny” model variants, with little compromise in FPS. As for pose estimation, our model.movenet offers significant performance improvements over the existing model.posenet, and makes more accurate predictions for exercise use cases such as yoga.
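Because pipelines are declarative, upgrading to one of these newer models is typically a one-line change: a pose estimation pipeline swaps model.posenet for model.movenet. In the sketch below, the model_type value and source are assumptions for illustration.

```yaml
# Illustrative config: pose estimation with the newer MoveNet model.
nodes:
  - input.visual:
      source: yoga_session.mp4           # hypothetical exercise video
  - model.movenet:
      model_type: singlepose_lightning   # assumed variant name
  - draw.poses                           # draws the detected skeleton keypoints
  - output.screen
```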

Improved Documentation

Over the last few months, we have been conducting user interviews with a number of PeekingDuck users (thank you for your support!). We have revamped our Read the Docs page using the feedback gathered – making it much easier to find information, providing better guidance to our users, and aesthetically improving the website. Tutorials are now available for perusal, from the basics in the “Hello Computer Vision” tutorial, intermediate recipes in “Duck Confit”, to the advanced “Peaking Duck”, with examples such as interfacing with an SQLite database or aggregating statistics for analytics.

Screenshot of Revamped Documentation

A glossary page has been added for easy reference to PeekingDuck’s built-in data types, along with an FAQ and troubleshooting section. Additionally, detailed installation instructions are now provided for PeekingDuck on Apple Silicon (M1) Macs running macOS Big Sur and Monterey.

Additional Notable Features

Here is a high-level list of other notable features:

  • Support for Python 3.9 has been added.
  • A sixth category of node (augment) has been created for image augmentation. New nodes augment.brightness and augment.contrast are included in this release, and more will be added in the future.
  • A new input.visual node replaces both input.live and input.recorded, as a single node can read from an image, video, CCTV, or webcam live feed.
  • A new dabble.statistics node can calculate the cumulative average, maximum, and minimum of a single target variable of interest over time, for use in analytics.
  • The existing draw.legend and draw.tag nodes have been refactored to allow greater flexibility in drawing different data types.
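As a hedged illustration of the new input and augment nodes working together, a pipeline could brighten and boost the contrast of a webcam feed before display. The beta and alpha parameter names below are assumptions, not verified v1.2 settings.

```yaml
# Illustrative config: brighten and boost contrast of a live webcam feed.
nodes:
  - input.visual:
      source: 0                    # webcam index, replacing the old input.live
  - augment.brightness:
      beta: 50                     # assumed parameter: positive = brighter
  - augment.contrast:
      alpha: 1.2                   # assumed parameter: >1 increases contrast
  - output.screen
```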

Find Out More

To start using PeekingDuck and find out more about our updated features, check out our documentation below:

You are also welcome to join discussions and reach out to our team in our Community page (https://community.aisingapore.org/groups/computer-vision/forum/).

Authors

  • I'm the lead engineer for Computer Vision in AI Singapore. I like designing things, building LEGO, and writing code in my spare time.

