
WILDLABS Virtual Meetup Recording: Camera Trapping

Our first event in Season Three of the WILDLABS Virtual Meetup Series is now available to watch, along with notes that highlight key takeaways from the talks and discussion. In the meetup, community members Roland Kays, Sam Seccombe, and Sara Beery shared their work in short presentations followed by lively open discussion and community exchange.

Online Event

Overview

The WILDLABS Virtual Meetup Series is a program of webinars for community members and wider partners to discuss emerging topics in conservation technology and leverage existing community groups for virtual exchange. The aim of the series is to bring leading engineers in the tech sector together with academics and conservation practitioners to share information, identify obstacles, and discuss how to best move forward.

Season One of the series took place in late 2018, covering new data collection techniques through Networked Sensors for Security and Human-Wildlife Conflict (HWC) Prevention and Next-Generation Wildlife Tracking, and effective utilization of that information through Big Data in Conservation. Season Two ran during the first half of 2019 and focused on Tools and Spaces for Collaboration, the Low-Cost Open-Source Solutions these approaches are producing, and how to put the information they’re generating to use through Creative Approaches to Data-Driven Storytelling.

Season Three will take place throughout the second half of 2019, and will explore the theme of noninvasive monitoring technologies in conservation, including Camera Trapping, Environmental DNA (eDNA), and Drones. After a more approach-driven second season, we’re eager to dive back into the realm of development and implementation in the context of these ever-evolving tools.

We are always looking to tailor these meetups to community interests and needs, so if you have ideas about specific discussion points you'd like to see covered during this season please join the thread and share your thoughts.

Meetup 1: Camera Trapping

Date & Time

Tuesday, October 29th, 2019

1:00pm-2:30pm GMT / 9:00am-10:30am EDT

Background & Need

Camera traps have been a key part of the conservation toolkit for decades. Remotely triggered video or still cameras allow researchers and managers to monitor cryptic species, survey populations, and support enforcement responses by documenting illegal activities. Increasingly, machine learning is being implemented to automate the processing of data generated by camera traps.

A study published earlier this year showed that, despite being well-established and widely used tools in conservation, progress in the development of camera traps has plateaued since the emergence of the modern model in the mid-2000s, leaving users struggling with many of the same issues they faced a decade ago. That manufacturer ratings have not improved over time, despite technological advancements, demonstrates the need for a new generation of innovative conservation camera traps. This meetup will address existing efforts, established needs, and what a next-generation camera trap might look like - including the integration of AI for data processing through initiatives like Wildlife Insights and Wild Me.

Outcomes

The aims of this discussion are as follows: to introduce modern camera traps for conservation; to describe how they are being used, including what needs they are addressing and how they fit into the wider conservation tech ecosystem; to identify the obstacles in advancing the capacity of these cameras; and to discuss the future of this tech solution - particularly what’s needed to launch the next generation of conservation camera traps with clear user feedback and exciting, collaborative AI-based data processing developments in mind.

Recording

Camera Traps Meetup Link to Video Recording

Click through here to watch the full meetup (note: audio transcripts now available with recordings!)

Virtual Meetup Notes

This meetup was a great kick-off to the third season of our Virtual Meetup Series, with more than 140 attendees joining us from at least 35 countries around the world – our largest event yet! And this was only a portion of the nearly 300 who expressed interest by registering to attend. Thank you to all who came and participated. For those of you who were unable to join live, we’ve recorded the session so that you can view it at your convenience. You can also check out the presentation notes and further reading suggestions below.

Speaker: Roland Kays 

Metrics & Study Design

  • Occupancy: Percentage of sites where a species is detected
    • Camera trap occupancy is different from the traditional sense in that it’s really “use,” since the assumption of closure is violated as animals move in and out of the sample area
    • Advantage: Occupancy models quantify how spatial covariates like habitat affect the presence/absence of species
    • Disadvantage: You lose information beyond presence/absence (range of 0-1), so it’s more challenging to capture very high or low occupancy
  • Density: Number of animals/area
    • Primary method is capture-recapture, but it is limited to species where you can easily differentiate individuals, like tigers or zebras
    • Other methods, like distance sampling, are in development, but they all require more information and effort
  • Abundance: measuring detection rate as relative abundance (e.g. # of deer/day)
    • Requirements: no bait, representative site placement
    • Small scale variation reflects habitat preference, but averaged over many cameras can reflect abundance
    • Use detection-rate models to quantify how spatial covariates affect species detection rates (range: 0 to infinity); a minimal detection-rate calculation is sketched after this list
    • Also reflects movement rates
  • How can we do better?
    • Need data on detection area – where are these animals when they’re triggering the camera?
    • Other factors affect detection, like animal size, weather, habitat, camera model, etc.
    • Would enable density estimates via distance sampling
    • Modern tech can already do this in other settings (e.g. images from an Intel RealSense camera encode depth by color) but needs to be applied to camera traps
  • Study Design Recommendations
    • Findings from a paper in review in Methods in Ecology and Evolution assessing 41 camera datasets to determine how well they did and how many cameras were needed:
      • Need 40-60 camera traps run for 3-5 weeks
      • If targeting rare species may need more camera locations and consider target-specific attractants or adaptive study designs
      • If comparing detection rates, need to be model-based and include local covariates to help explain small-scale variation
      • Seasonality is important! If comparing across study areas or over time it’s important to account for in both tropical and temperate sites
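
To make the metrics above concrete, here is a minimal computation sketch (not from the talk) of naive occupancy and a relative-abundance index (detections per camera-day) from a detection table; the column names and the pandas-based layout are illustrative assumptions.

```python
# Minimal sketch: naive occupancy and relative abundance from camera-trap detections.
import pandas as pd

# One row per detection event: which camera, which species.
detections = pd.DataFrame({
    "camera_id": ["C01", "C01", "C02", "C03", "C03", "C03"],
    "species":   ["deer", "deer", "bobcat", "deer", "deer", "bobcat"],
})

# Effort: number of days each camera was operating (including cameras with no detections).
effort_days = pd.Series({"C01": 30, "C02": 25, "C03": 35, "C04": 28})
n_sites = len(effort_days)

for species, grp in detections.groupby("species"):
    # Naive occupancy: fraction of sites where the species was detected at least once
    # (ignores imperfect detection, unlike a true occupancy model).
    naive_occupancy = grp["camera_id"].nunique() / n_sites

    # Relative abundance index: detections per camera-day, averaged over all cameras,
    # including cameras with zero detections of this species.
    per_camera = grp.groupby("camera_id").size().reindex(effort_days.index, fill_value=0)
    detection_rate = (per_camera / effort_days).mean()

    print(f"{species}: naive occupancy={naive_occupancy:.2f}, detections/day={detection_rate:.3f}")
```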

Scaling Up

  • Camera trap papers are increasing, and people are running larger camera grids
  • Candid Critters: Citizen Science in North Carolina
    • Citizen scientists could check out cameras from local libraries, resulting in 4,000 locations sampled over 3 years across all 100 counties
    • Generated 1.8 million pictures, 106K animal detections, 39 mammal species
    • eMammal used for review and quality control
    • More detections than iNaturalist or museum collections
  • Snapshot USA: Scientists across the US in every habitat type in Sept-Oct
    • 127 participants
    • All 50 states
    • Pictures coming in now through eMammal
    • Idea is to collaborate and share data, interest in surveys on an annual basis
  • Challenges and opportunities of scaling up
    • Nonstationarity: On a large scale, do species respond to the environment in the same way? E.g. bobcats live across a huge range of habitat types and probably respond differently in different places; only large-scale data can enable us to answer that
    • Data fusion: e.g. combining camera trap captures and hunter observations for improved data on occurrence
    • Large scale camera trapping can be a form of remote sensing for mammals – satellite imagery can’t capture most mammals on the planet, but using cameras on a large scale we can see how wildlife populations are doing and how our conservation efforts are or are not making a difference.

Speaker: Sam Seccombe 

Background

  • Spent the last 3 years developing Instant Detect 2.0 – a satellite-connected wildlife monitoring and threat detection system.
  • Not a camera trapping expert, but has become deeply familiar with how different camera traps work and how to improve design
  • Also took advantage of input from colleagues working in the field and useful resources like WWF’s Camera Trapping Best Practices Guide and the paper published this year called Camera Trapping Version 3.0: Current Constraints and Future Priorities for Development
    • Paper outlines 3 phases of camera trapping:
      • Experimental cameras of the past that used photographic film
      • Well-provisioned commercial camera traps of today
      • Camera traps of the future, which have excellent detection circuitry, resistance to extreme environments, on-board image filtering, wireless data transmission, and tools for automated management and analysis of images

Camera Trap Hardware Insights

  • Comparison testing
    • [Slide: side-by-side trigger-speed comparison images from the Bushnell, Spartan, and Reconyx cameras]

    • Bushnell was the quickest camera, with only fractions of a second separating the first, second, and third images of the leopard
    • Spartan was the next quickest but produced dark images; in the second image the leopard hears the camera, and by the third it has broken into a run
    • Reconyx was the slowest to detect, with more widely spaced images, but excellent image quality
  • PIR detection angle and range
    • Wide PIR detection zone: If the PIR detection zone is wider than the field of view of the camera, there’s a delay between PIR triggering and camera capturing first images
      • When it works: if animal walks in from the side and crosses image frame
      • When it doesn’t: if animals trigger PIR sensor but remain out of field of view you’ll get empty images that slow down analysis
    • Narrow PIR detection zone: A very narrow PIR detection zone centralized in the middle of the field of view would reduce number of empty images and would get animals right in the center of the field of view but would also reduce accuracy by missing any animals not in that spot.
    • Some clever combinations exist combining multiple PIR sensors, but use more power
  • PIR sensitivity and PIR Fresnel lens zoning
    • Basics of PIR sensors as motion detectors:
      • PIR sensors split their input across positive and negative elements
      • Detection of a heat source -> voltage spike
      • Simultaneous spikes on the positive and negative elements -> no detection
      • Sequential spikes on the positive and negative elements -> detection (see the trigger-logic sketch after this list)
    • Camera manufacturers set voltage thresholds for trigger and also time between negative and positive triggers to generate an alert; some allow for user to adjust sensitivity based on environment (e.g. higher sensitivity in warmer environment where body temp of animal is more similar to background)
    • Lens zoning
      • Fresnel lens allows PIR sensors to focus by directing incoming radiation into central point
      • Camera traps use an array of these lenses all stuck together to create a range of detection zones
      • Problems: camera trap makers all use different Fresnel lenses, so zone patterns vary between camera models; animals looking directly at camera are often missed
  • Dynamic range and exposure
    • To get high-quality images, you need both a good image sensor that can take low-light images and a good dynamic range
    • Dynamic range is the ability to capture details in dark areas of image, while also capturing information clearly in brighter parts of image – tough balancing act
    • Phone cameras seem to do this well because they superimpose a burst of images on top of one another, but this would create a substantial processing burden for camera traps
  • Image resolution and camera read/write speed
    • Image resolution affects read/write speed of camera, which in turn affects speed of image capture, camera recovery time, and power usage
    • Basics:
      • Recovery time = length of time that camera can’t take images because it’s processing images (first captures images in raw image file format and caches them into temporary RAM memory, then compresses into JPEG, adds metadata and timestamp before writing onto SD card)
      • Some cameras try to do this each trigger and won’t capture new images until RAM is freed up
      • Cameras with more RAM have faster recovery times because they can capture many images and then write them to the SD card when there’s a lull, but this means that when they’re overloaded with images you get a much longer gap in detection during processing
    • Resolution and speed: The more pixels the image sensor has, the more data has to be cached in temporary memory, the longer it takes to write to the SD card
    • Take-aways:
      • 2-3 megapixels is more than enough for most camera trap users and is in fact what most cameras use anyway despite advertising that suggests otherwise
      • Cameras are only as good as the SD cards you put in them – use reputable brands and get highest rewrite speed you can afford (look for a little C with a 10 that indicates class 10, which should meet most camera trap performance needs)
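
As a rough illustration of the differential PIR trigger logic described above, here is a minimal sketch – not any manufacturer's firmware – of how sequential positive/negative spikes within a time window could be distinguished from near-simultaneous ones; the thresholds and timing values are invented for illustration.

```python
# Minimal sketch of differential PIR trigger logic (illustrative values only).

def should_trigger(pos_spike_t, neg_spike_t, pos_v, neg_v,
                   v_threshold=0.5, min_gap_s=0.02, max_gap_s=1.0):
    """Return True if the spike pattern looks like a moving heat source.

    pos_spike_t / neg_spike_t: spike times (seconds) on each element.
    pos_v / neg_v: spike amplitudes (volts); v_threshold plays the role of the
    sensitivity setting some cameras let users adjust for hot environments.
    """
    # Both spikes must exceed the voltage threshold (the sensitivity setting).
    if pos_v < v_threshold or neg_v < v_threshold:
        return False
    gap = abs(pos_spike_t - neg_spike_t)
    # Near-simultaneous spikes cancel out (e.g. ambient warming) -> no detection.
    if gap < min_gap_s:
        return False
    # Sequential spikes within the allowed window -> detection.
    return gap <= max_gap_s

# An animal walking across the zones: positive element fires, then negative.
print(should_trigger(pos_spike_t=0.00, neg_spike_t=0.30, pos_v=0.9, neg_v=0.8))   # True
# Sun-warmed background heats both elements at once: rejected.
print(should_trigger(pos_spike_t=0.00, neg_spike_t=0.005, pos_v=0.9, neg_v=0.9))  # False
```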

Camera Trap 3.0 Complexities

  • Image transmission: options are cellular, satellite, or radio for long distances, Bluetooth or WiFi for short
    • Challenges: cellular not always available in bush, satellite expensive, radio has legal restrictions on frequencies and airtime (duty cycle) that vary by region
  • Image selection: Can reduce power usage and transmission costs – compress image data, strip data out of images, process images to select which ones to send, but this all takes power and processing capacity
    • E.g. one approach compressed the full-colour first image, then sent only the pixels that had changed in the second and third images (making them much smaller), and reassembled those images on the receiving end from the first image (this relies on the first image being transmitted properly); a simplified sketch of the idea follows this list
    • Needs some image recognition algorithm and capacity to run it quickly, accurately, and at low power
  • Power usage
    • Current phase 2 cameras off the shelf are able to run for up to 12 months using 8-12 good, internal AA batteries
    • More batteries not the answer: Adding more internal batteries increases the size and weight of camera, and external packs add new costs of cables, expensive connectors, etc.
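
To illustrate the changed-pixel transmission idea mentioned above, here is a simplified sketch – not Instant Detect's actual scheme – of encoding later frames as deltas against a keyframe and reassembling them on the receiving end; the NumPy representation and change threshold are assumptions.

```python
# Minimal sketch: send a full keyframe once, then only the pixels that changed.
import numpy as np

def encode_delta(keyframe, frame, threshold=10):
    """Return (indices, values) for pixels that differ from the keyframe."""
    changed = np.abs(frame.astype(int) - keyframe.astype(int)) > threshold
    idx = np.argwhere(changed)            # (row, col) of changed pixels
    return idx, frame[changed]

def decode_delta(keyframe, idx, values):
    """Rebuild a frame by overwriting the changed pixels onto the keyframe."""
    frame = keyframe.copy()
    frame[idx[:, 0], idx[:, 1]] = values
    return frame

rng = np.random.default_rng(0)
keyframe = rng.integers(0, 255, size=(120, 160), dtype=np.uint8)  # first image
frame2 = keyframe.copy()
frame2[40:60, 70:100] += 50                                       # "animal" enters the frame

idx, values = encode_delta(keyframe, frame2)
rebuilt = decode_delta(keyframe, idx, values)

print("changed pixels sent:", len(values), "of", frame2.size)
print("reassembled frame matches original:", np.array_equal(rebuilt, frame2))
```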

Speaker: Sara Beery

Data Challenges

  • Computer vision is very successful for species identification in apps like iNaturalist that capitalize on humans remaining in the loop – by pointing cameras at each object we want to identify, we help the machine learning process hugely

  • But this isn’t the case with camera trap images, which are challenging for a variety of reasons
  • Hard to spot animals:
    • Illumination
    • Blur
    • Size of animal
    • Occlusion
    • Camouflage
    • Perspective
  • Empty images: on average, 70% of the images from each camera are empty (i.e. contain no animal)
  • When training AI models, data diversity is important
    • For iNaturalist ~50 images per species is enough to train relatively well
    • Getting enough images of each species to classify them well using a machine learning model can be challenging, especially for rare species
  • Also need pose variability – images of species at different angles and positions
    • Camera traps are static and animals are habitual and tend to do similar things over time
    • Ideally, a species will be seen at ~50 different cameras to address this

MegaDetector: Microsoft AI for Earth

  • What we wanted: A model that works anywhere in the world to detect presence/absence, rather than having to retrain a model for every new project or set of species
  • Based on this paper, which analyzed how camera trap machine learning fails when models are applied to new locations and found that:
    • Detecting animals generalized well; it worked even on species and locations never previously seen by the model
  • MegaDetector is very simple and easy to use, with open-source data and models – if you’re interested in trying it on your camera trap images you can do that here (a sketch of filtering images with its output follows this list)
  • MegaDetector and Google’s Overlay tool: drag and drop images from your project and watch as overlay runs detector on them in real-time with basically no overhead
    • If it works well enough for your purposes, talk to Dan Morris at Microsoft about running a much larger batch of data
  • Use case: worked with Idaho Department of Fish and Game to sort 4.8 million images in ~2.75 days, which would have taken 10 full-time employees 40 weeks to complete
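
For readers who want to try the image-filtering step themselves, here is a minimal sketch of separating likely-animal from likely-empty images using a MegaDetector-style batch output file; the JSON field names ("images", "detections", "conf", "category") follow the output format as documented in Microsoft's CameraTraps repository, but treat the schema and the confidence threshold as assumptions to verify against the current documentation.

```python
# Minimal sketch: split images into "probably animal" vs "probably empty"
# using a MegaDetector-style batch output JSON (field names assumed).
import json

CONFIDENCE_THRESHOLD = 0.8   # tune per project; lower catches more animals
ANIMAL_CATEGORY = "1"        # "1" = animal in the detector's category map (assumed)

with open("megadetector_output.json") as f:
    results = json.load(f)

animal_images, empty_images = [], []
for image in results["images"]:
    confs = [d["conf"] for d in (image.get("detections") or [])
             if d["category"] == ANIMAL_CATEGORY]
    if confs and max(confs) >= CONFIDENCE_THRESHOLD:
        animal_images.append(image["file"])
    else:
        empty_images.append(image["file"])

print(f"{len(animal_images)} likely-animal images, {len(empty_images)} likely-empty images")
```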

Species Detection – more complicated

  • One universal classification model may not be the right answer for species ID, since a given project only cares about species showing up in their dataset
  • Distillation: if you have labeled images of the species you care about, you can run MegaDetector over them to get boxes for all of those images and then pair the boxes with the class labels you have at the image level to train a project-specific classifier (see the sketch after this list)
    • You will need to pull out images with multiple species in them, but the model will learn to detect them separately
    • This is very feasible using current tools – creating a classifier for common species in your area could be done by a competent undergraduate student in a week or two
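
A minimal sketch of the distillation workflow described above: crop the detector's boxes out of images that already carry image-level labels, then use the (crop, label) pairs to train a project-specific classifier. The file paths, the normalized [x, y, width, height] box format, and the helper function are illustrative assumptions.

```python
# Minimal sketch: build a (crop, label) training set from detector boxes
# paired with existing image-level labels.
from PIL import Image

def crop_detection(image_path, bbox):
    """Crop one normalized [x, y, w, h] box out of an image."""
    img = Image.open(image_path)
    W, H = img.size
    x, y, w, h = bbox
    return img.crop((int(x * W), int(y * H), int((x + w) * W), int((y + h) * H)))

# Detector output paired with your existing image-level labels (illustrative paths).
labeled_detections = [
    {"file": "cam01/IMG_0001.JPG", "bbox": [0.31, 0.42, 0.20, 0.25], "label": "white_tailed_deer"},
    {"file": "cam07/IMG_0113.JPG", "bbox": [0.55, 0.10, 0.30, 0.40], "label": "bobcat"},
]

crops, labels = [], []
for det in labeled_detections:
    crops.append(crop_detection(det["file"], det["bbox"]))
    labels.append(det["label"])

# From here, fine-tune any off-the-shelf image classifier on (crops, labels);
# images containing multiple species need their boxes labeled separately.
```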

Recent Research: What to do about rare species?

  • Built a synthetic camera trap world with the idea that, using a game engine and good graphics, we could create realistic 3D models of species and move them around in a synthetic world to generate variability that’s hard to get with real images of rare species
    • Found that it worked!
    • But found a surprisingly easy solution too – manually cutting and pasting animals onto empty background images also significantly boosted performance (a minimal sketch of this augmentation follows this list)
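
As a rough sketch of that cut-and-paste augmentation (not the exact pipeline used in the research), pasting an animal cut-out with a transparency mask onto empty background frames might look like this; the file names and placement logic are assumptions.

```python
# Minimal sketch: paste an animal cut-out (RGBA, alpha channel as mask)
# onto an empty background frame to synthesize extra training images.
import random
from PIL import Image

def paste_animal(background_path, cutout_path, out_path, seed=None):
    rng = random.Random(seed)
    background = Image.open(background_path).convert("RGB")
    cutout = Image.open(cutout_path).convert("RGBA")   # alpha channel acts as the mask

    # Random position that keeps the animal fully inside the frame
    # (assumes the cut-out is smaller than the background).
    max_x = background.width - cutout.width
    max_y = background.height - cutout.height
    position = (rng.randint(0, max_x), rng.randint(0, max_y))

    background.paste(cutout, position, mask=cutout)    # alpha-composited paste
    background.save(out_path)

# e.g. generate many synthetic images from one cut-out and a folder of empty frames:
# paste_animal("empty_frames/cam03_day.jpg", "cutouts/pangolin.png", "synthetic/pangolin_001.jpg")
```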

Recent Research: How best to leverage temporal signal?

  • Attention-based approach that uses spatiotemporal encodings of boxes and takes features across long time horizons, letting the model decide how to aggregate information from frames
  • Found that this approach resulted in substantial performance bump across classes on Snapshot Serengeti and Caltech Camera Traps datasets
  • Paper to be published soon, hoping to incorporate this work into Wildlife Insights

Benchmarks and Metrics for Camera Trapping Community

  • Training data
    • Issue: Existing papers on computer vision for camera trap data are hard to compare because the data often isn’t published or well curated, and methods aren’t compared across studies
    • Need: centralized camera trap dataset on which to train and test machine learning models
    • Solution: To address this Microsoft created a diverse but lightweight database with human-labeled class and bbox data (from subsets of Snapshot Serengeti and Caltech Camera Traps) to use as a benchmark, available on lila.science
      • Also includes prescribed camera location-based training and test splits (see the split sketch after this list)
  • De-siloing data: machine learning will improve dramatically with a standardized camera trap data format and a centralized database, which Wildlife Insights is starting to make a reality!
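
A minimal sketch of what a camera-location-based split looks like in practice: all images from a given camera go to either train or test, so models are evaluated on locations (and backgrounds) they have never seen. The record structure here is an illustrative assumption.

```python
# Minimal sketch: split a camera-trap dataset by camera location rather than by image.
import random

def split_by_location(image_records, test_fraction=0.2, seed=42):
    """image_records: list of dicts with at least 'file' and 'location' keys."""
    locations = sorted({r["location"] for r in image_records})
    rng = random.Random(seed)
    rng.shuffle(locations)

    n_test = max(1, int(len(locations) * test_fraction))
    test_locations = set(locations[:n_test])

    train = [r for r in image_records if r["location"] not in test_locations]
    test = [r for r in image_records if r["location"] in test_locations]
    return train, test

records = [
    {"file": "IMG_0001.JPG", "location": "cam01", "label": "deer"},
    {"file": "IMG_0002.JPG", "location": "cam01", "label": "empty"},
    {"file": "IMG_0101.JPG", "location": "cam07", "label": "bobcat"},
    {"file": "IMG_0150.JPG", "location": "cam09", "label": "deer"},
]
train, test = split_by_location(records)
print(len(train), "train images;", len(test), "test images")
```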

Note: If you have questions for the Microsoft team working on camera trap data, you can email this address.

Further Reading

Other links referenced in the live chat:

Next Steps

  • Jump over to this thread to continue the conversation or ask follow-up questions from the event.
  • Look out for information regarding our next meetup on Drones in Conservation!
  • Register here to virtually attend this 2-day Camera Trapping Symposium Nov 7-8 hosted at Google HQ in Mountain View, CA.

