About the Series
Welcome to the third season of WILDLABS Tech Tutors, the series that answers the "how do I do that?" questions of conservation tech! Brought to you with the support of Microsoft AI for Earth, Tech Tutors is made for conservation tech beginners of all knowledge levels (and yes, even experts can still be beginners when it comes to tackling a new aspect of conservation tech or starting a new project). Our Tech Tutors will give you the bite-sized, easy-to-understand building blocks you'll need to try new conservation technology, enhance your research, DIY a project for the first time, or simply explore the possibilities.
Taking place every Thursday, each Tech Tutor will present a 30-minute tutorial guiding you through an aspect of conservation tech, followed by a 30-minute live Q&A session with the audience.
What do you gain as a Tech Tutors participant? You'll leave each episode with the confidence to build on the skills discussed in these tutorials, and you'll have an ongoing opportunity to learn and collaborate with other members of the WILDLABS community! The connections made through the past seasons of Tech Tutors have led to real projects and results, and our third season is set to introduce you to even more new ideas and community members who are ready to start something new!
Can't make it to an episode this season? Don't worry! You can find every tutorial after it airs on our YouTube channel, and you can collaborate and ask questions in each episode's thread on the WILDLABS Tech Tutors forum.
Want to catch up on Tech Tutors Seasons One and Two? Find links to our episodes' recordings and resources here and here.
Meet Your Tutor: Siyu Yang, Microsoft
Siyu is a data scientist on the AI for Earth initiative at Microsoft. She works on applying computer vision techniques to environmentally important data sources and developing open-source tools to help conservation agencies accelerate their workflows.
Major projects include updating and operationalizing the MegaDetector, an animal detection model that generalizes well to a variety of ecosystems, and a project in collaboration with Wildlife Conservation Society Colombia to map land cover change in the Orinoquía region using satellite images.
Previously at Microsoft, she worked on automatic code completion models for Visual Studio to aid developer productivity. Find Siyu on Twitter here.
We asked Siyu Yang...
What will I learn in this episode?
Many in the camera trap community have tried or heard about the MegaDetector, a computer vision model that detects animals in camera trap images. In this talk, we will take a behind-the-scenes look at how the MegaDetector works and what training data is used to develop the model – so that you know when you can make use of it, and importantly, when to proceed with caution!
We will also discuss various ways to apply the MegaDetector, including processing small batches on your laptop, using it as part of Zooniverse or Camelot, and batch processing millions of images using an API we provide. We will draw from the successes of a number of conservation organizations around the world, and look at how manual labeling and machine learning-generated labels can work together to save overall labeling time, including loading detection results in the Timelapse software for verification and fine-grained labeling.
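To give a flavor of how machine-generated labels can save labeling time, the sketch below post-processes a MegaDetector batch-output JSON file to separate likely-empty images from those with a confident detection, so human review can focus on the occupied set. This is a minimal illustration, not part of the official tools: the helper name, file names, scores, and threshold are our own inventions, though the dictionary shape follows the batch processing output format documented in the CameraTraps repository.

```python
# Illustrative confidence threshold; the MegaDetector documentation
# recommends tuning this per project.
CONFIDENCE_THRESHOLD = 0.8

def split_empty_images(results):
    """Split MegaDetector batch output into likely-empty and likely-occupied images.

    `results` follows the batch processing output format: a dict with an
    "images" list, where each image carries a "detections" list of
    {"category", "conf", "bbox"} entries.
    """
    empty, occupied = [], []
    for image in results["images"]:
        detections = image.get("detections", [])
        if any(d["conf"] >= CONFIDENCE_THRESHOLD for d in detections):
            occupied.append(image["file"])
        else:
            empty.append(image["file"])
    return empty, occupied

# A made-up example in the shape of MegaDetector batch output.
# In practice you would load this with json.load() from the output file.
sample = {
    "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
    "images": [
        {"file": "cam01/img_0001.jpg",
         "detections": [{"category": "1", "conf": 0.95, "bbox": [0.1, 0.2, 0.3, 0.4]}]},
        {"file": "cam01/img_0002.jpg", "detections": []},
        {"file": "cam01/img_0003.jpg",
         "detections": [{"category": "1", "conf": 0.12, "bbox": [0.5, 0.5, 0.1, 0.1]}]},
    ],
}

empty, occupied = split_empty_images(sample)
print(empty)     # images with no confident detection
print(occupied)  # images worth a human look
```

The same idea underlies the Timelapse workflow mentioned above: rather than discarding low-confidence images automatically, detections are loaded alongside the images so a human can verify them quickly.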
How can I learn more about this subject?
Head over to our project home page on GitHub (https://github.com/microsoft/CameraTraps).
You will also want to check out a previous Tech Tutors episode by Sara Beery, the original author of the MegaDetector, on "How do I get started using Machine Learning for my camera traps?"
For a survey on the topic of machine learning for camera traps, see Dan Morris' Camera Trap ML Survey.
If I want to take the next step, where should I start?
Here is the documentation to help you get started using the MegaDetector.
What advice do you have for a complete beginner in this subject?
If you are thinking about incorporating machine learning models into your data processing workflow, start by thinking about the type of information and level of accuracy required for your downstream analysis: What percentage of your images are empty? Are misses tolerable? Do you need the count of animals? Do all species need to be labeled?
If you have colleagues working in a similar ecosystem, ask them how much success they have had with a given model, while keeping in mind that locations, camera angles, and other factors will influence how well models such as the MegaDetector perform.
Ready to learn with Siyu Yang? Watch Siyu's full episode here on YouTube.
Shared by Tech Tutors Participants
What were all of you talking about and sharing during Siyu's episode? Check out the following resources shared in the live chat during this episode:
Papers, Articles, and Research:
- Lemur paparazzi: Arboreal camera trapping and occupancy modeling as conservation tools for monitoring threatened lemur species
- Animal Scanner: Software for classifying humans, animals, and empty frames in camera trap images
Tools, Websites, and Resources:
- MegaDetector Google Colab by Al Stewart
- MegaDetector GUI
- Camera Trap ML Survey resource
- Zamba Cloud tool for camera trap videos
- As discussed by Dan Morris in the episode chat about MegaDetector's potential for video data: "We have some very preliminary tools for breaking videos into frames, running MegaDetector on the frames, and doing some semi-intelligent things to pull those frame-level results together and attach empty/animal/person/vehicle labels to videos. It's not elegant, but it works pretty well and plays very nicely with Timelapse (which supports videos)."
- TopazLabs for camera trap videos
- As discussed by Dave Yoder in the episode chat: "TopazLabs has some pretty impressive AI uprezzing software that also breaks out video into individual frames (without uprezzing if you don't want that). I haven't used it that much but their HD to 4K uprezzing was amazing. Also, if it's simple timelapse without metadata concerns, Davinci Resolve has a free version of its NLE that can be used for timelapse and also color/exposure editing that's very good."
- As discussed by Sara Beery in the episode chat: "You can get amazing segmentation results using a combination of MegaDetector and DeepMAC! Check out example code on the iWildCam 2021 kaggle page."
Learn more about our upcoming Tech Tutorials
Visit the series page on WILDLABS to find the full list of WILDLABS Tech Tutors.