
Deep learning module for PAMGuard

Hi all

We've recently been developing a new deep learning module for PAMGuard which will be released soon.

For those of you unfamiliar with PAMGuard, it's an open-source, no-code toolbox for the analysis of large acoustic datasets, with lots of different signal processing algorithms, automated detectors/classifiers, soundscape analysis tools, interactive displays and a comprehensive data management system. The new deep learning module allows users to import models trained using the ANIMAL-SPOT or Ketos libraries for use in real-time or post-processing acoustic workflows (there's also a manual option to load bespoke models). The idea is to make it much easier for folk to deploy deep learning models for processing data, which will hopefully increase the adoption of AI classification methods within the acoustic community.
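To give a rough idea of what loading a "bespoke" model might involve, here's a minimal sketch of exporting a PyTorch classifier to a self-contained TorchScript file, the kind of portable model format a generic loader can pick up. The architecture, input shape and file name below are just placeholders, not PAMGuard's documented import format, so please check the tutorials for the exact requirements.

```python
# Minimal sketch (placeholder architecture and file names): exporting a
# PyTorch spectrogram classifier to TorchScript so it can be shipped as a
# single, self-contained model file.
import torch
import torch.nn as nn

class SpectrogramClassifier(nn.Module):
    """Toy CNN mapping a single-channel spectrogram to class scores."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(16 * 8 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SpectrogramClassifier()
model.eval()

# Trace with a dummy batch (batch, channel, frequency bins, time bins).
example = torch.randn(1, 1, 128, 256)
traced = torch.jit.trace(model, example)
traced.save("bespoke_classifier.pt")  # single-file model for later import
```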

I've written a blog post with more details here, including a link to a beta version of the next PAMGuard release for folk who want to give the new module a spin. There are comprehensive tutorials here, with right whale and bat call detection examples.

Any feedback from the Wildlabs community would be much appreciated. If anyone has an open-source, well-documented framework for training deep learning models and would like their models to be easily imported into PAMGuard, then please get in touch.

Cheers!