WILDLABS Virtual Meetup Recording: Acoustic Monitoring

The fourth and final event in Season 3 of the WILDLABS Virtual Meetup Series is now available to watch, along with notes that highlight key takeaways from the talks and discussion. In the meetup, speakers David Watson, Andrew Hill, Ruby Lee, and Dimitri Ponirakis shared their work in short presentations, followed by a lively open discussion and community exchange.

Date published: 2020/02/05

Overview

The WILDLABS Virtual Meetup Series is a program of webinars for community members and wider partners to discuss emerging topics in conservation technology and leverage existing community groups for virtual exchange. The aim of the series is to bring leading engineers in the tech sector together with academics and conservation practitioners to share information, identify obstacles, and discuss how to best move forward.

Season One of the series took place in late 2018, covering new data collection techniques through Networked Sensors for Security and Human-Wildlife Conflict (HWC) Prevention and Next-Generation Wildlife Tracking, and effective utilization of that information through Big Data in Conservation. Season Two ran during the first half of 2019 and focused on Tools and Spaces for Collaboration, the Low-Cost Open-Source Solutions these approaches are producing, and how to put the information they’re generating to use through Creative Approaches to Data-Driven Storytelling.

Season Three is taking place throughout the second half of 2019 and is exploring the theme of noninvasive monitoring technologies in conservation. This season's topics include Camera Trapping, Drones, Environmental DNA (eDNA), and Acoustic Monitoring. After a more approach-driven second season, we’re eager to dive back into the realm of development and implementation in the context of these ever-evolving tools.

We are always looking to tailor these meetups to community interests and needs, so if you have ideas about specific discussion points you'd like to see covered during this season please join the thread and share your thoughts.

Meetup 4: Acoustic Monitoring

Date & Time 

Tuesday, March 10th, 2020 

9:00pm-10:30pm GMT / 5:00pm-6:30pm EDT

Background & Need

Acoustic sensors enable efficient and non-invasive monitoring of a wide range of species, including many that are difficult to monitor in other ways. Although they were initially limited in application scope largely due to cost and hardware constraints, the development of low-cost, open-source models like the AudioMoth in recent years has increased access immensely and opened up new avenues of research. For example, some teams are using them to identify illicit human activities through the detection of associated sounds, like gunshots, vehicles, or chainsaws (e.g. OpenEars).

With this relatively novel dimension of wildlife monitoring rapidly advancing in both marine and terrestrial systems, it is crucial that we identify and share information about the utility and constraints of these sensors to inform efforts. A recent study identified advancements in hardware and machine learning applications, as well as early development of acoustic biodiversity indicators, as factors facilitating progress in the field. In terms of limitations, the authors highlight insufficient reference sound libraries, a lack of open-source audio processing tools, and a need for standardization of survey and analysis protocols. They also stress the importance of collaboration in moving forward, which is precisely what this meetup will aim to facilitate. 

Outcomes

The aims of this discussion are as follows: to introduce acoustic monitoring in conservation; to describe how these sensors are being used, including what needs they are addressing and how they fit into the wider conservation tech ecosystem; to discuss the future of acoustic loggers as a conservation tool; and to identify the obstacles to advancing their capacity, including the role of machine learning.

Agenda

  • Welcome and introductions (5 min)
  • David Watson, Professor in Ecology at Charles Sturt University; Chief Investigator Manager at the Australian Acoustic Observatory (10 min)
  • Andrew Hill, Electronic Engineer, Open Acoustic Devices & Ruby Lee, Director/Design Engineer, DesignFab (10 min)
  • Dimitri Ponirakis, Senior Noise Analyst & Applications Manager for Cornell University's Bioacoustics Research Program (10 min)
  • Q&A discussion with speakers (20 min)
  • Optional ongoing discussion and community exchange (30 min)
  • Takeaways and wrap up (5 min)

Recording

Acoustic Monitoring Meetup Link to Video Recording

Click through here to watch the full meetup (note: audio transcripts now available with recordings!)

Virtual Meetup Notes

During the final event in season three of our Virtual Meetup Series, more than 88 attendees joined us from at least 13 countries around the world. Thank you to all who came and participated. For those of you who were unable to join live, we’ve recorded the session so that you may view it at your convenience. You can also check out the presentation notes and further reading suggestions below.

Speaker: David Watson

Background

  • Professor of Ecology at Charles Sturt University; one of a group of chief investigators for the Australian Acoustic Observatory (A2O), which is working on acoustic monitoring at a continental scale
  • The A2O aims to capture acoustic information, store it, and enable others to use it

Visualizing acoustic data

  • Particularly interested in long-duration recordings
  • False-color spectrograms built from different acoustic indices help bring interesting patterns in the data to light, including species-specific activity, as shown in a 24-hour visualization in David's presentation
  • Species-specific data is of wide interest, and not just for animals! See the book The Songs of Trees by David Haskell to learn about acoustic species ID for trees
  • Can also visualize other things, like crickets calling in the morning, the dawn chorus, cicadas in the evening, and jet engines flying overhead

Scaling from hours to months

  • Looking at 24 hours of data enables you to see it in a different way—e.g. it enabled the A2O team to identify a rare species of bat they didn’t even know occurred in that area
  • With even longer duration data sets spanning 9+ months, you can see even more, like storms and wind in the summer, the dawn chorus shifting from summer into autumn, cicadas peaking in autumn and summer, etc.
    • Each pixel is a minute of data, with the predominant frequency of sound at that time color-coded using false color spectrograms
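The false-color approach described above can be sketched in a few lines: summarize each minute of audio with a handful of acoustic indices, then map each index to a color channel. The sketch below uses synthetic audio and three simple stand-in indices (spectral entropy, RMS energy, and a crude activity measure), not the specific published indices the A2O team uses; it only illustrates the minute-per-pixel idea.

```python
import numpy as np

def false_color_rows(audio, sr, minutes):
    """Summarize each minute of audio as one RGB pixel.

    Three simple per-minute indices (illustrative stand-ins for
    published acoustic indices) are mapped to R, G, and B channels.
    """
    samples_per_min = sr * 60
    rows = []
    for m in range(minutes):
        chunk = audio[m * samples_per_min:(m + 1) * samples_per_min]
        spectrum = np.abs(np.fft.rfft(chunk))
        p = spectrum / spectrum.sum()                # normalized power
        entropy = -np.sum(p * np.log2(p + 1e-12))    # spectral entropy
        energy = np.sqrt(np.mean(chunk ** 2))        # RMS energy
        flux = np.mean(np.abs(np.diff(chunk)))       # crude activity index
        rows.append((entropy, energy, flux))
    idx = np.array(rows)
    # Normalize each index to [0, 1] so it can act as a color channel
    lo, hi = idx.min(axis=0), idx.max(axis=0)
    return (idx - lo) / np.where(hi > lo, hi - lo, 1.0)

# One hour of synthetic audio at 22.05 kHz (the A2O recording rate)
sr = 22050
rng = np.random.default_rng(0)
audio = rng.normal(size=sr * 60 * 60)
img = false_color_rows(audio, sr, minutes=60)
print(img.shape)  # (60, 3): one RGB value per minute
```

Stacking these rows over days or months yields the long-duration images described above, where seasonal patterns like the dawn chorus become visible at a glance.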

Scaling to continent-wide

  • Sensors:
    • Total of 400 sensors deployed across 100 sites, which are stratified by ecoregion
    • Four sensors per site, stratified by productivity with two in wet sites, two in dry
    • Recorded at 22 kHz with 16-bit dynamic range and onboard storage
  • Data: cloud-based, totally open access
  • Hardware:
    • Relatively large device mounted on a staff, including battery, sensor, and space for four SD cards (roughly one terabyte of storage in total)
    • Solar panels enable it to record continuously for a year—the limitation is that someone has to come physically swap out SD cards to upload information to the cloud
  • Progress: Currently about ¾ of the way through the roll-out, not too much data in yet but have some from prototypes deployed in 2014
  • Issue: How to find the species you’re looking for amongst so much data?
    • For one bird species, they manually identified the species presence based on visualized data (took 15 hours to go through 12.5 years of data), compared presence over time with remotely sensed data on vegetation and rainfall, and understood that the birds appear after rain events
    • Also looked at calling rate to discover that after rain events calling increases, which is consistent with breeding
    • Working entirely remotely, they could understand the comings and goings of a species in response to a resource, and with a bit of ecological knowledge identified the data of interest in a fairly straightforward manner

The utility of this data

  • Open access, records 24/7
  • Many other uses beyond these species-specific ecological questions
    • E.g. many places in Australia without weather stations – this data could help with tracking storms, electrical activity, seismic activity, etc. (anything that makes sound)
    • Also being deployed to monitor responses to recent fires

Take-home message

  • Sound and listening are powerful ways of communicating
  • Storing sounds from the natural world indefinitely so that future researchers can ask questions we haven’t even thought of
  • This data can empower citizens around the world to directly engage with questions that are relevant to them and to demand more intelligent policy responses from our leaders

Speakers: Andrew Hill & Ruby Lee

Background

  • Andy: hardware engineer at Open Acoustic Devices; Ph.D. work at the University of Southampton focused on expanding biodiversity monitoring coverage with technology (including the development of the AudioMoth)
  • Ruby: electronic engineer focused on human-centered design; directs DesignFab consultancy (including the development of the μMoth)
  • Both aim to design technologies to meet the needs of conservation practitioners

AudioMoth

  • Low cost, open-source acoustic device for monitoring the environment
  • Hardware: consists of a small microphone, a micro SD card, and an ultra-low-power embedded microcontroller (enables programming of detection algorithms)
  • Configuration application: set time, sample rates, recording schedules
  • Versatility: can record very low and very high frequencies
  • Distribution: group purchasing via GroupGets, so far have distributed ~10,000 devices around the world
  • See this talk by Peter Prince for more on how AudioMoth is being used

Inspiration

  • AudioMoth inspired by single board computers like Raspberry Pi and Arduino
  • Low-cost manufacturing and open-source code means anyone can rapidly develop prototype boards and add sensors
  • Have majorly contributed to new conservation tools by reducing development barriers

Issues

  • Field conditions present challenges regarding power and enclosures for protection
  • Single-board computers lack enclosures and consume considerable power, so they are hard to deploy for longer periods
  • DIY aspect makes them inaccessible to practitioners without technical training

Solution

  • AudioMoth built on the single-board computer model but followed principles of user-centered design and the collaborative economy to make it practical for conservation
  • User-centered design: aims for iterative development of a usable system, achieved by involving end-users throughout the process
  • Collaborative economy: focused on advantages of fabrication using crowdfunding, deployments using systems science, crowdsourcing analysis, and open-source design  
    • Open source reduces costs of manufacturing but also enables organizations and developers to adapt the design for specific use cases

μMoth

  • μMoth also benefitted from extensive user testing and development work done for AudioMoth
  • Developed by Ruby in partnership with Dr. Robin Freeman at ZSL and Alasdair Davies at the Arribada Initiative
  • Smaller version of AudioMoth designed for animal-borne monitoring (initially focused on avian)
  • The challenge
    • Make it small
      • Restricted by the size of the micro SD card; had to settle on 32x24 mm
    • Maintain same functionality as AudioMoth
      • No firmware changes, no increase in power consumption or loss of audio quality
  • Approach
    • Worked closely with researchers at ZSL to understand needs and challenges regarding deployment, maintenance, set up, etc.
      • Identified a need for animal-borne monitoring; GPS was an important addition
      • Adaptation makes it possible to use GPS modules and to use lithium-ion batteries
    • Continuing to work with researchers after each deployment to refine the design

Next steps

  • Feedback through a support forum online
  • Constant improvement and development based on feedback, including:
    • Waterproof case with acoustic vent
    • Adding external microphones for the device (the next version will have a 3.5 mm jack connector)

Speaker: Dimitri Ponirakis

Background

  • Senior Noise Analyst at the Center for Conservation Bioacoustics, Cornell Lab of Ornithology
  • Interdisciplinary team of 30+ individuals focused on collecting and interpreting sounds in nature with innovative conservation technologies in order to inform conservation on multiple scales
  • Species agnostic, studying multiple taxa in marine and terrestrial environments

Passive Acoustic Monitoring (PAM)

  • Allows you to explore places you may not otherwise be able to (e.g. deep ocean or dense forests) and answer key questions about what’s out there, where it is, when it’s active, etc.
  • Many different types of PAM
    • Stationary vs. mobile
    • Archival vs. real-time
    • Single sensor vs. array
  • Typical data volumes
    • Terrestrial: 48 kHz at 16-bit resolution generates ~8GB of data per channel per day
    • Marine: 200 kHz at 24-bit resolution generates ~50 GB of data per channel per day
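These daily volumes follow directly from sample rate × bytes per sample × seconds per day. A quick back-of-the-envelope check (uncompressed PCM, one channel, decimal gigabytes):

```python
def daily_volume_gb(sample_rate_hz, bit_depth, channels=1):
    """Uncompressed PCM data volume per day, in gigabytes (10^9 bytes)."""
    bytes_per_sec = sample_rate_hz * (bit_depth // 8) * channels
    return bytes_per_sec * 86400 / 1e9

print(round(daily_volume_gb(48_000, 16), 1))   # terrestrial: ~8.3 GB/day
print(round(daily_volume_gb(200_000, 24), 1))  # marine: ~51.8 GB/day
```

The same arithmetic explains the hardware specs below: for example, roughly 50 GB/day at marine sampling rates fills multi-terabyte drives within months.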

Hardware

  • ROCKHOPPER – marine recording unit
    • Can record at depths up to 3,500 meters
    • 24-bit resolution, sampling rates up to 394 kHz
    • 6 months of continuous sampling at 197 kHz and 24-bit resolution
    • FLAC files stored onto two 4 terabyte solid-state drives
  • SWIFT – terrestrial recording unit
    • 16-bit resolution, sampling rates up to 96 kHz
    • 23 days of continuous sampling at 48 kHz and 16-bit resolution
    • WAV files stored onto an SD card

Challenges

  • Handling data and extracting information
  • Translating information into conservation actions

Software – analysis and visualization

  • Raven – Java-based system for acquisition, visualization, and analysis of sounds
    • User-friendly
    • Regular workshops and online training materials available
    • What you can do with it: spectrograms, time-series analysis, amplitude measurements, spectral analysis, run detectors, etc.
  • RavenX – Matlab toolbox
    • Rapid prototyping and algorithm development
    • Leverages high-performance computing
    • Can process different scales ranging from seconds to years of data
    • Example: Mapping noise from shipping and vocalizations of endangered right whales approaching Boston Harbor to understand impacts on whale communication
    • Long-term monitoring
      • Similar visualizations of long-term data to David’s info from the A2O (see notes above)
  • WhaleNET – using deep learning to detect vocalizations in real-time

    • Detections broadcast back to a base station and posted online so that incoming ships know when to slow down, reducing the probability of fatalities from ship strikes
  • BirdNET – being developed in collaboration with Google and Chemnitz University of Technology
    • Currently over 1,000 species built into the neural network model (ResNet)
    • Real-time detection with a cabled microphone
    • Shows species being detected and confidence levels of species ID
    • BirdNET app available for Android lets you ID species in real-time and contribute to cataloging bird species presence around the world
    • That data enables researchers to look at conservation questions like movement ecology (e.g. tracking species movements across seasons), biodiversity monitoring (e.g. heat maps of vocalizations), and behavioral and evolutionary biology (using features that the neural nets pick out)
    • Overlapping vocalizations make it hard for neural nets to differentiate species
      • A potential solution is determining directionality by using a surround-sound microphone system
      • Kaggle contest coming soon to tackle this challenge!
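As a rough illustration of the directionality idea (not how BirdNET itself handles overlap): with two or more microphones, the time-difference of arrival (TDOA) of a call can be estimated from the peak of the channels' cross-correlation, and that delay constrains the bearing to the source. A minimal two-channel sketch with synthetic signals:

```python
import numpy as np

def tdoa_samples(ch_a, ch_b):
    """Estimate the delay (in samples) of ch_b relative to ch_a
    from the peak of their full cross-correlation."""
    corr = np.correlate(ch_a, ch_b, mode="full")
    # With numpy's lag convention, the peak index maps to the delay as:
    return (len(ch_b) - 1) - np.argmax(corr)

sr = 48000
rng = np.random.default_rng(1)
call = rng.normal(size=2048)          # stand-in for a vocalization
delay = 37                            # ch_b hears the call 37 samples later
ch_a = np.concatenate([call, np.zeros(256)])
ch_b = np.concatenate([np.zeros(delay), call, np.zeros(256 - delay)])
print(tdoa_samples(ch_a, ch_b))       # recovers the 37-sample delay
```

Given a known microphone spacing d and speed of sound c, the estimated delay Δt constrains the bearing θ via sin θ ≈ c·Δt/d, which is one standard way a multi-microphone array could separate overlapping callers by direction.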

Issues, challenges, and opportunities

  • Big data – storage of terabytes to petabytes of data requires high-performance computing (HPC)
  • Vocalizations and sound analysis
    • Need vocalization source levels for determining detection ranges and deployment design (important for population density estimation)
    • Need to develop good propagation modeling software and methodologies for capturing source levels
    • Not enough examples for detector training sets (need to be able to share what we have)
    • Before/after event measurements are extremely useful, so we need long-term monitoring efforts (like in Australia)
  • Standardization
    • Need agreed measurement methods and units and common file naming structures
    • Need to be able to better characterize and calibrate units
  • Sharing (for greater impact)
    • Need common platforms that are findable, accessible, interoperable, and reusable
    • Need to leverage citizen science and crowdsourcing  
    • Share to other conservation platforms like eBird, xeno-canto, the Macaulay Library, and WILDLABS

Extended Discussion and Further Reading

Links referenced in the live chat:

Next Steps

  • Jump over to this thread to continue the conversation
  • If you have ideas about speakers, specific questions or case studies you'd like covered during these meetups, or requests for future meetup topics, we want to hear them. Join us in the series discussion on WILDLABS and help us shape these events so they're useful for you.