Hi everyone! We are spreading the word about Zamba, a free, open-source tool that automatically detects and classifies animals in camera trap videos. If you use camera traps to capture videos, we’d love your feedback! Try your videos against the species we cover, or use our training functionality to build a custom model just for your data.
We want to make the tool as useful as possible, and are hoping to gather user feedback. In particular, we’d love to have users test out the Zamba Python package. We’ve just released v2 of this package with brand new models and more features!
- Zamba uses artificial intelligence and computer vision to perform intensive camera trap video processing work, freeing up more time for humans to focus on interpreting the content and using the results.
- Zamba can be accessed for free by anyone through an easy command line interface or as a Python package - the code is all open-source on GitHub!
- Pretrained models are available to predict 42 different species common to western Europe and central Africa, plus blank versus non-blank.
- Zamba can be adapted to any set of species or ecosystem. Users can retrain a model on their own labeled videos to generate one specific to their use case.
- Zamba is trained on over 27,000 hand-labeled camera trap videos.
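To give a sense of the workflow, a minimal run might look like the sketch below. The flag names and paths here are illustrative based on the Zamba CLI's documented interface; check the docs for the exact options in your installed version:

```
# Install the package from PyPI
pip install zamba

# Classify every video in a directory using a pretrained model;
# predictions are written out as a CSV of species probabilities
zamba predict --data-dir path/to/videos/

# Retrain a model on your own labeled videos
# (labels.csv maps each video filepath to its species label)
zamba train --data-dir path/to/videos/ --labels labels.csv
```

Both commands are also available through the Python package for users who prefer to script their pipelines.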
A couple ways you can contribute:
- Flag any bugs you find while using the Zamba Python package, or submit an issue directly to the GitHub repo
- Let us know which parts of the package documentation are confusing or could be improved
- Train a new custom model and make it available to others through the Model Zoo Wiki
You can send us any feedback or thoughts by commenting on this post, filing an issue on the GitHub repository, or by emailing [email protected]. Close collaboration with subject area experts has been critical to the development of Zamba, and we look forward to hearing your perspectives!