Global Feed

There's always something new happening on WILDLABS. Keep up with the latest from across the community through the Global view, or toggle to My Feed to see curated content from groups you've joined. 

Header image: Laura Kloepper, Ph.D.

discussion

Indigenous communities and AI for Conservation

Hello! I am looking for recommendations for people from indigenous communities who are either using AI or exploring the potential of AI to solve conservation problems...

4 1

Thank you for this advice!

If you need a speaker for Variety hour, I would be happy to talk about the work we are doing in the Conservation Evidence Group to use LLMs for finding and reviewing evidence of conservation actions. 

See full post
discussion

Introduction to CT Textbook

So this is an idea that I've had for a while, and I have some bandwidth for it now. I want to make a purely online (free for all users) conservation technology textbook (...

5 3

Hi Andrew,

Whatever became of your book? Also have you seen jupyterbook.org and mystmd.org? Both are free and open source software for publishing articles and books.

Best,

Vance

See full post
discussion

Using drones and camtraps to find sloths in the canopy

Recently, I started volunteering for Sloth Conservation Foundation and learned that it is extremely difficult to find sloths in the canopy because: 1) they hardly move,...

27 4

Yes, if the canopy is sparse enough, TIR can show you what you cannot see in RGB. We tested with large mammals like rhinos and elephants that we could not see at all in RGB under a semi-sparse canopy, but they were very clearly visible in TIR; it was actually quite surprising how easily we could detect them. It's likely similar for mid-sized mammals that live in the canopy, and the drier seasons should make them much easier to detect, although we did not test small mammals for visibility through the seasons. Other research has, and there are a number of studies on primates now.

I did quite a bit of flying above the canopy and did not have many problems; it's just a matter of always flying a bit higher than the canopy. The drones themselves have built-in crash-avoidance mechanisms for safety, although they get confused by a very branchy understory and often miss smaller branches. If you look at the specifications of your particular UAV, you will see they do not perform well with certain understories, so there is a chance of crashing. The same goes for telephone wires and other infrastructure you have to be careful about.

Also, it's good practice to always keep the drone in line of sight, which is actually a requirement for flight operations in many countries, although you may be able to get around it by operating from a tower or an open area.

Some studies have used AI classifiers and interesting frameworks for handling full or partial detections, where it is sometimes unknown whether a detection is the animal of interest. I would carefully plan any fieldwork around the seasons and make sure all of your paperwork is approved well before the dry season; that is going to be your best chance to detect them.

See full post
discussion

ChatGPT for conservation

Hi, I've been wondering what this community's thoughts are on ChatGPT? I was just having a play with it and asked: "could you write me a script in python that loads photos and...

47 11

In my experience, ChatGPT-4 performs significantly better than version 3.5, especially in terms of contextual understanding. However, like any AI model, inaccuracies cannot be completely eliminated. I've also seen a video showing that Gemini appears to excel at literature reviews, though I haven't personally tested it yet. Here's the link to the video: https://www.youtube.com/watch?v=sPiOP_CB54A.

While GPT-3.5 is good for some activities, GPT-4 and GPT-4 Turbo are much better. Anthropic's Claude is also very good, on a par with GPT-4 for many tasks. As someone else has mentioned, the key is in the prompt you use, though ChatGPT is continually being extended to allow more contextual information to be included, for example external files that have been uploaded previously. Code execution and image generation are also possible with the paid version of ChatGPT, and the latest models include data up to the end of 2023 (I think). You can also call the OpenAI or other APIs programmatically to include these in your workflows for assisting with a variety of tasks.
Regarding end results - as always, we're responsible for whatever outputs are ultimately published/shared etc. 
For Conservation Evidence, you could try making your own GPT (ChatGPT assistant) that can be published/shared, grounded in your own evidence base and prompt; it should provide good responses (I should think). But don't use 3.5 for that, IMO.

Undoubtedly, things will quickly evolve from "straight" standard models (ChatGPT, Bard, Claude, etc.) to more specialized Retrieval-Augmented Generation (RAG), where facts from authoritative sources, plus rules, are supplied as context for the LLM to summarize in its response. You can direct ChatGPT or Bard: "Your response must be based on the reference sections provided", up to a few thousand tokens. A huge amount of work is going into properly indexing reference materials in order to supply context to the models. Folks like FAO and CGIAR are indexing all their agricultural knowledge to feed the standard models with location, crop, livestock, etc. specialty "knowledge" to provide farmers automated advice via mobile phones. I can totally see the same for such mundane things as "how do I ... using ArcMap or QGIS?", purely based on the vast amount of documentation and tutorials. Google, ChatGPT, etc. already do a really good job; this just focuses the response on a body of knowledge known in advance to be relevant.
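A minimal sketch of that grounding pattern (function and variable names here are hypothetical, not from any particular library): retrieve a few reference passages, then build a prompt instructing the model to answer only from them.

```python
# Hypothetical RAG prompt assembly: supply retrieved reference passages as
# context and constrain the model's answer to them.

def build_rag_prompt(question, passages, max_chars=4000):
    """Assemble a grounded prompt from retrieved reference passages."""
    context = ""
    for i, passage in enumerate(passages, 1):
        snippet = f"[{i}] {passage}\n"
        if len(context) + len(snippet) > max_chars:
            break  # stay within the model's context budget
        context += snippet
    return (
        "Your response must be based only on the reference sections provided.\n"
        f"References:\n{context}"
        f"Question: {question}\nAnswer:"
    )

# Illustrative passages, as if returned by a retriever over an evidence base:
passages = [
    "Hedgerow planting increased farmland bird abundance in most studies.",
    "Beetle banks provided overwintering habitat for predatory invertebrates.",
]
prompt = build_rag_prompt("Do hedgerows help farmland birds?", passages)
print(prompt)
```

The prompt string would then go to whichever LLM API you use; the retrieval step (vector search over indexed reference material) is where most of the indexing effort described above goes.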

I would highly recommend folks do some searching on "LLM RAG"; that's what's taking off across the board right now.

Then there's stuff I like to call "un-SQL" (unstructured query language): tools that turn free-form queries into SQL queries, with supporting visualization code.

see:
https://mlnotes.substack.com/p/no-more-text2sql-its-now-rag2sql
http://censusgpt.com

etc.

As far as writing and evaluating proposals, I saw a paper on how summarization of public review forms is being developed in several cities.
see: http://streetleveladvisors.com/?p=181562


And that's just the standard LLMs; super-specialized LLMs based on Facebook's Llama are being built purely on domain-specific bodies of dialog (medical, etc.). Lots of PhDs to be done.

I think what will be critical in all this are strong audit trails and certification mechanisms to gain trust, especially when it comes to deceptively simple terms like "best".

Chris

See full post
discussion

Advice on a Master's project

Hi all, I’m posting here to ask for some advice. Sorry in advance for the long post. I’m currently studying for an integrated masters in Electrical and...

24 0

Yes. The key output for synchronisation is the pulse per second (PPS) output which is synchronised very accurately to UTC. The TX from the GPS module is then useful for reading the time and positions. You generally don't need to be able to send commands to the module as most of the time the default settings are fine.
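To make the division of labour concrete (a sketch; the sentence below is an illustrative example, and real firmware would read from the module's UART): the PPS edge marks the exact start of a second, while the $GPRMC sentence that follows identifies which UTC second it was.

```python
# Parse the UTC time-of-day field from a $GPRMC NMEA sentence. The PPS pulse
# gives the precise second boundary; this tells you which second that was.

def parse_gprmc_time(sentence):
    """Return (hours, minutes, seconds) UTC from a $GPRMC sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GPRMC"):
        raise ValueError("not a GPRMC sentence")
    t = fields[1]  # hhmmss.sss
    return int(t[0:2]), int(t[2:4]), int(float(t[4:]))

# Illustrative sentence (values made up):
nmea = "$GPRMC,123519.00,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
print(parse_gprmc_time(nmea))  # (12, 35, 19)
```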

Hi Harry (and all)

Just wanted to share some potentially relevant papers that I've come across, in case you haven't found them already. Coming more from the ecology/conservation focused side of conservation tech, but potentially of use to see what's actually been deployed out there! 

Yip, D. A., Knight, E. C., Haave‐Audet, E., Wilson, S. J., Charchuk, C., Scott, C. D., ... & Bayne, E. M. (2020). Sound level measurements from audio recordings provide objective distance estimates for distance sampling wildlife populations. Remote Sensing in Ecology and Conservation, 6(3), 301-315. https://zslpublications.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/rse2.118.

Abadi, S. H., Wacker, D. W., Newton, J. G., & Flett, D. (2019). Acoustic localization of crows in pre-roost aggregations. The Journal of the Acoustical Society of America, 146(6), 4664-4671. https://asa.scitation.org/doi/full/10.1121/1.5138133.

Spillmann, B., van Noordwijk, M. A., Willems, E. P., Mitra Setia, T., Wipfli, U., & van Schaik, C. P. (2015). Validation of an acoustic location system to monitor Bornean orangutan (Pongo pygmaeus wurmbii) long calls. American Journal of Primatology, 77(7), 767-776. https://doi.org/10.1002/ajp.22398.

Kershenbaum, A., Owens, J. L., & Waller, S. (2019). Tracking cryptic animals using acoustic multilateration: A system for long-range wolf detection. The Journal of the Acoustical Society of America, 145(3), 1619-1628. https://doi.org/10.1121/1.5092973.

Stinco, P., Tesei, A., Dreo, R., & Micheli, M. (2021). Detection of envelope modulation and direction of arrival estimation of multiple noise sources with an acoustic vector sensor. The Journal of the Acoustical Society of America, 149(3), 1596-1608. https://doi.org/10.1121/10.0003628. 

Rhinehart, T. A., Chronister, L. M., Devlin, T., & Kitzes, J. (2020). Acoustic localization of terrestrial wildlife: Current practices and future opportunities. Ecology and Evolution, 10(13), 6794-6818. https://onlinelibrary.wiley.com/doi/pdf/10.1002/ece3.6216.

Hello!

Long time, no update. @StephODonnell suggested I post here with my thesis and some reflections.  

---------------------------------------------------------

TL;DR 

My thesis looked into the effects of environmental parameters like wind, temperature, and vegetation on acoustic classification and localisation of terrestrial wildlife, aiming to shed light on the implications for study design.

---------------------------------------------------------

Summary

My thesis centred on improving acoustic data acquisition via an analysis of the physics of sound propagation. The idea driving this was that there isn't enough attention paid to environmental effects on sound. The hope was that this could be used to improve the design of acoustic monitoring systems. COVID shifted the direction away from any practical work, but thankfully we managed to find our way through it by using data within the literature. 

The thesis is split into two main sections:

  • Improving SNR for sound classification

I explored environmental factors affecting SNR and their implications for the detection space of a signal. 

I've briefly had a look through updates in the field since my thesis, and there is a great paper here. This paper takes a similar approach but does so far more elegantly and completely - definitely worth exploring!

  • Error Analysis in Sound Localisation

I explored how environmental conditions differing from those assumed can influence the TDOA error on a microphone pair, and thus the position error. The main parameters looked at were temperature, humidity, wind speed & direction, and 2D model error.
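As a small worked example of the temperature dependence (my own sketch, using the common linear approximation c ≈ 331.3 + 0.606·T m/s for the speed of sound in air):

```python
# How a wrong temperature assumption propagates into range-difference error
# for one microphone pair (numbers illustrative).

def speed_of_sound(temp_c):
    """Approximate speed of sound in air (m/s) at temperature temp_c (deg C)."""
    return 331.3 + 0.606 * temp_c

def range_diff_error(tdoa_s, temp_true_c, temp_assumed_c):
    """Error (m) in the inferred range difference for a microphone pair."""
    return tdoa_s * (speed_of_sound(temp_assumed_c) - speed_of_sound(temp_true_c))

# A 10 ms TDOA, with the air 10 deg C warmer than assumed:
err = range_diff_error(0.010, temp_true_c=25.0, temp_assumed_c=15.0)
print(f"{err:.4f} m")  # about -0.06 m per pair, before geometry amplifies it
```

Small per-pair errors like this are then amplified by the array geometry, which is why they are worth treating alongside humidity and wind.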

Conclusion

I ended with some recommendations for system design such as adding additional sensors for more intelligent monitoring systems, or how to maximise the study area by maximising your SNR.  

I also discussed future work. The dream output would have been taking the analysis in the thesis and creating an online tool to be used to optimise sensor placement. Practitioners could use it to quickly input their study features to determine the likely important parameters for their deployment location, and how they can improve data quality. This would involve taking the analysis in the thesis and packaging it into an app - I'm thinking R Shiny or similar.

---------------------------------------------------------

Thoughts

In the end, I felt that it took a long time to figure out a direction for the project and how to actively contribute to the space. It was (obviously) difficult as I didn't come into this with prior knowledge or a structured plan, and so I was a little disappointed with the outcome. It would have been great to explore some of the things I put down as "future work", but I guess that's part of the process.

The project was a great intersection of technology and environment and it definitely helped shape the next few years for me. Since finishing I have taken a couple of detours into the workforce. First to a marine robotics company, and then measuring forest carbon with LiDAR. I've now just started a PhD using ocean modelling to map biodiversity in the ocean with an AUV.  So despite the challenges in trying to design a project within my interests, it has been pretty foundational for me going forward!

Thanks to everyone that offered help and advice. Likewise, I'm very happy to answer any questions from other students/anyone, and  I'm really looking forward to being back in the wildlife tech space!

Harry

Thesis available here.

See full post
discussion

Affordable acoustic monitors for "whispering" bats?

Hi everyone, New here and new to bat acoustic monitoring. I'll be conducting a study where I'd like to acoustically monitor bats, including "whispering" (relatively quiet) bats...

6 1
See full post
discussion

AI & Gamified Citizen Science

Hi everyone. I have been developing an idea for a gamified citizen science platform. It will leverage machine learning, gamified principles, GIS and collective citizen science to...

2 0

Check out FathomVerse, a new game by MBARI folks for involving citizen scientists in improving algorithms to ID deep sea critters!

See full post
discussion

Acoustically Transparent Epoxy

Hello all, I'm developing an animal-borne passive acoustic monitoring system and plan to pot the internal electronics in the housing with epoxy to waterproof the system. We're...

6 1

Same issues here. A MEMS is a great idea to pot, but you really need a piezoelectric element for this to work, not a MEMS based on capacitance (btw, they're all capacitive except for one now-discontinued part). It was originally made by Vesper, but the company was bought out last year and that MEMS is no longer made.

This is because you're no longer really building a typical microphone; this would be a contact-type hydrophone. For waterproofing, you can actually get a waterproof MEMS. As long as you're not submerging it for an extended period, it should do the job. Be sure to keep the cable short between the PCB and the mic, as in my experience a long cable picks up noise.

To generally answer your question on the "best" epoxy for sound transparency: in general, the harder the material, the lower the acoustic impedance. I use Epotec 301 resin with a hardness of 85. Your shape will also influence the resonance frequencies, meaning the flat frequency response will be distorted and you'll probably get distorted audio.

You generally don't want to pot MEMS microphones, since they're designed to pick up air pressure changes and any material in front of the microphone introduces another transition layer that pressure waves need to propagate through. Potting a MEMS microphone is also risky: if any material gets into the port, you could damage the microphone or drastically reduce its performance. If you want to seal something with epoxy, take a look at contact microphones. Higher frequencies will be attenuated, but depending on the application it could work.

There are companies, however, that design fabrics that are waterproof/resistant but have a relatively low acoustic impedance. SAATI has a variety of samples that you can request and GORE makes Acoustic Vents that could work. You can design a mechanical housing around your MEMS microphone with small perforations that are covered by one of these materials. I did this for one of my latest projects and it holds up just fine in heavy rain conditions. 

Hi Jesse,

For a material to be acoustically transparent (in air), the speed of sound in the material times its density must match that of air.  Realistically, any solid material will have a greater density than air, and a higher speed of sound to boot, so I'm afraid there's no way to match it to air.  Sorry.
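That matching condition is the characteristic acoustic impedance Z = ρc; for normal incidence the fraction of sound intensity transmitted across a boundary is 4·Z₁·Z₂/(Z₁+Z₂)². A quick sketch with rough textbook material values (not measurements) shows why air-side potting is hopeless but water-side, hydrophone-style coupling is not:

```python
# Compare intensity transmission across air->epoxy and water->epoxy
# boundaries. Material values are rough textbook figures, not measurements.

def impedance(density_kg_m3, speed_m_s):
    """Characteristic acoustic impedance Z = density * speed of sound."""
    return density_kg_m3 * speed_m_s

def transmitted_fraction(z1, z2):
    """Fraction of incident intensity transmitted at normal incidence."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

z_air = impedance(1.2, 343)        # ~410 rayl
z_water = impedance(1000, 1480)    # ~1.5e6 rayl
z_epoxy = impedance(1150, 2600)    # ~3.0e6 rayl (order-of-magnitude guess)

print(f"air -> epoxy:   {transmitted_fraction(z_air, z_epoxy):.5f}")   # <0.1%
print(f"water -> epoxy: {transmitted_fraction(z_water, z_epoxy):.2f}") # ~0.89
```

The five-orders-of-magnitude impedance gap between air and any solid is the quantitative version of the point above: there's no way to match a solid to air, but matching to water is quite feasible.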

See full post
discussion

CollarID: multimodal wearable sensor system for wild and domesticated dogs

Hi Everyone! I (and my team) are new to the WildLabs network, so we'd like to post an early-stage project we've been working on to get some feedback! Summary: The...

2 5

Hi Patrick, 

This is so cool, thanks for sharing! It's also a perfect example of what we were hoping to capture in the R&D section of the inventory - I've created a new entry for #CollarID so it's discoverable and so we can track how it evolves across any mentions in different posts/discussions that come up on WILDLABS. This thread appears on the listing, and I'll make you three the contacts for it too. But please do go in and update any of the info there as well! 

Steph

See full post
discussion

Drop-deployed HydroMoth

Hi all, I'm looking to deploy a HydroMoth, on a drop-deployed frame, from a stationary USV, alongside a suite of marine chemical sensors, to add biodiversity collection to our...

4 1

Hi Matthew,

Thanks for your advice, this is really helpful!

I'm planning to use it in a seagrass meadow survey for a series of ~20 drops/sites to around 30 m, recording for around 10 minutes each time, in Cornwall, UK.

At this stage I reckon we won't exceed 30 m, but based on your advice, I don't think this is the best setup for the surveys we want to try.

We will try the Aquarian H1a, attached to the Zoom H1e unit, through a PVC case. This is what Aquarian recommended to me when I contacted them too.

Thanks for the advice, to be honest the software component is what I was most interested in when it came to the AudioMoth- is there any other open source software you would recommend for this?

Best wishes,

Sol
 

Hey Sol, 

No problem at all. Depending on your configuration, the AudioMoth software would have to run on a PCB with an ESP32 chip, which is the unit on the AudioMoth/HydroMoth, so you would have to build a PCB centred around this chip. You could mimic the functionality of the AudioMoth software on another chip, for example on a Raspberry Pi with Python's pyaudio library. The problem you would have is that the H1a requires phantom power, so it's not plug and play. I'm not too familiar with the H1e, but maybe you can control the microphone through the recorder, with recordings triggered by the RPi (not that this is the most efficient MCU for this application, but it is user friendly). A simpler solution might be to just record continuously and play a sound, or take notes of when your 10-minute deployment starts. I think it should last you >6 hours with a set of lithium Energizer batteries. You may want to think about putting a penetrator on the PVC housing for a push button or switch to start recording when you deploy; they make a few waterproof options.

Just something else that occurred to me: if you're dropping these systems, you'll want to make sure the system isn't wobbling in the seagrass, as that will probably be all you hear on the recordings, especially if you plan to deploy shallower. For my studies in Curacao we aim to be 5 lbs negative, but this all depends on your current and surface action. You might also want to think about the time of day you're recording biodiversity in general. I'd suggest recording the site for a bit (a couple of days or a week) prior to your study to see what you should account for (e.g. tide flow/current/anthropogenic disturbance) and to determine the diel patterning of the vocalizations you're aiming to collect if subsampling at 10 minutes.

Cheers, 

Matt

Hi Sol,

If the maximum depth is 30 m, it would be worth experimenting with HydroMoth in this application, especially if the deployment time is short. As Matt says, the air-filled case means it is not possible to accurately calibrate the signal strength due to the directionality of the response. For some applications this doesn't matter; for others it may.

Another option for longer/deeper deployments would be an Aquarian H2D hydrophone which will plug directly into AudioMoth Dev or AudioMoth 1.2 (with the 3.5mm jack added). You can then use any appropriately sized battery pack.

If you also connect a magnetic switch, as per the GPS board, you can stop and start recording from outside the housing with the standard firmware.

Alex

See full post
discussion

Your HydroMoth experience!

Hi everyone, we just got our first dedicated #hydromoth in the post box. Anyone else about to start their bioacoustic journey? I would love to share our experiences, settings and...

7 1

Vinegar is also a great solution! Let it sit overnight and then just scrub it off. As a warning, if you don't clean it, your sensitivity does decrease. If you keep it out there for a month, you might see the amplitude of your calls decrease over the month, or you might detect fewer calls.

Hey! I would recommend a few things:

1) set up at least two in the same site kind of back to back or side to side if you have that many. Directionality can influence the number of calls you get and it's just good to know your error rate. 

2) Experiment with breaks and recording duration. You won't collect anything if the write time is not long enough to write to your SD card, and you'll get empty files.

3) Clean your device every time you take it out or see visible biofouling. Also, add silicone grease to your O-ring every time. Take the O-ring out with a pick and clean the plastic seal, looking for any sand/mud/debris. We've had a few flooding incidents, but this is probably because we open them all the time.

4) The lower the sampling rate, the more data you can collect, so keep it as low as your frequency of interest allows without clipping your calls. Fish are lower than pretty much everything (2-3 kHz).

I hope this helps! 

See full post
discussion

AI-enabled image query system

Online citizen science platforms like iNaturalist and Macaulay Library contain a wealth of images but are hard to search using text. We are looking for ideas so we can develop the...

2
See full post
discussion

WILDLABS downtime and performance issues due to AI bot attack

Hi everyone, Some of you will have noticed that WILDLABS was inaccessible or frustratingly slow on Friday (April 26th, 2024). Aside from explaining this downtime, what happened is...

1 10

I noticed the site being annoyingly slow some time last week. Thank you for clearing that up, for finding the cause and solving the issue.

I'm not claiming deep knowledge of AI, but as a member of this community, I'd be happy to give you my insights.

For starters: I am not categorically against bots scraping 'my' content, whether for AI training purposes, search engines, or other purposes. In principle, I find it a good thing that this forum is open to non-member users, and to me that extends to non-member use. Obviously, there are some exceptions here. For example when locations of individuals of endangered species are discussed, that should be behind closed doors.

Continuing down this line of reasoning, apparently it matters to me how 'my' content is being used. So, if someone wants to make an AI to aid nature conservation, I say let them have it. There is the practical issue of scraping activities blocking or hindering the site, but there may be practical solutions for this: say, special opening hours for such things, or having the site engine prepare the data and make it available as a data dump somewhere.

Since purpose matters, organizations or individuals wanting to scrape the site should be vetted and given access or not. This is far more easily said than done. However, every step in that direction would be worthwhile, because most technology publicly discussed here has good uses for nature conservation but equally bad uses for nature destruction. For example, it's good to acoustically monitor bird sounds to monitor species, but that also comes in handy when you are in the exotic bird trafficking business.

One could argue that since we allow public access, we should not care either about why bots are scraping the site. I would not go that far. After all, individual people browsing the site with nefarious purposes in mind is something else than a bot systematically scraping the entire site (or sections thereof) for bad purposes. It's a matter of scale.


See full post
discussion

Hydromoth settings

Hi Everyone, what is your #HydroMoth setup for freshwater ecoacoustic monitoring? What are your settings for underwater recording with your AudioMoth? I would love to discuss...

8 0

Hi Ian,

I have hours of an unidentified creature recorded during overnight recording sessions with multiple hydrophones. We think it is a platypus, but there is nothing to compare against that isn't from captive animals. I am waiting for the HydroMoth to become available again so I can do longer-term monitoring.

Hi everyone, I just got my first hydromoth and want to test it for aquatic soundscapes, with interest in Tomistoma, otters, boat traffic and maybe fishes too! But before that I may test it at zoos.

What advice, tips, or suggestions do you have for a first-time user? Thank you!

You won't get any audio if you don't allow enough time for the hydromoth/audiomoth to write, so when you do a continuous recording you need to experiment a little. I'm sure there is a formula to calculate this, but I haven't figured it out. I typically do 5-minute recordings with 10 seconds of write/break time. I think this system expects you to subsample, so keep that in mind rather than recording continuously.

I do 8kHz sampling and get about 7 days of data and then the voltage gets too low and you start getting SD card write errors and missing files. 
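For anyone planning a similar duty cycle, the arithmetic for the 5-minute-record / 10-second-break schedule mentioned above works out like this (a sketch, ignoring per-file overhead):

```python
# Recordings per day and total audio hours under a record/sleep duty cycle.

def duty_cycle_summary(record_s=300, break_s=10):
    cycle_s = record_s + break_s
    n_recordings = 86400 // cycle_s
    audio_hours = n_recordings * record_s / 3600
    return n_recordings, round(audio_hours, 1)

print(duty_cycle_summary())  # (278, 23.2): ~278 files, ~23 h of audio per day
```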

In terms of analysis, I've had trouble understanding the directionality of the hydromoth and incorporating this into my studies. I always set up two at the same site to check the variability in my call detections and include this into my error analysis. 

See full post
discussion

WILDLABS AWARDS 2024 - Underwater Passive Acoustic Monitoring (UPAM) for threatened Andean water frogs

In our project awarded with the "2024 WILDLABS Awards", we will develop the first Underwater Passive Acoustic Monitoring (UPAM) program to assess the conservation status and for...

5 15

This is so cool @Mauricio_Akmentins - congrats and look forward to seeing your project evolve!

Congratulations! My first hydromoth arrived just yesterday and I'm so excited! Looking forward to updates from your project!!!

See full post
article

Introducing The Inventory!

The Inventory is your one-stop shop for conservation technology tools, organisations, and R&D projects. Start contributing to it now!

5 14
This is fantastic, congrats to the WildLabs team! Look forward to diving in.
Hi @JakeBurton, thanks for your great work on the Inventory! Would it be possible to see or filter new entries or reviews? Greetings from an Austrian forest, Robin
See full post
careers

Hiring Chief Engineer at Conservation X Labs

Technology to End the Sixth Mass Extinction. Salary: $132 - $160k; Location: Seattle WA; 7+ years of experience in hardware product development and manufacturing; View post for full job description 

2
See full post
discussion

Attaching a directional microphone to a Wildlife Acoustics ultrasonic recorder?

Background: I am still new to acoustics research and I am hoping to get some advice on integrating a directional microphone with an ultrasonic recorder....

3 0

Hi Luke, sounds like an interesting project! One thing to note is the ultrasonic Wildlife Acoustics unit you're looking at is already fairly directional. Take a look at the horizontal directionality plot towards the bottom:

You can see that for the frequencies relevant to slow lorises' ultrasonic calls (40-60 kHz), there is a 25-30 dB difference between 0 and 180 degrees horizontally. It's not perfect, but it is close to some directional mics, and if it works well enough for your project it would save a lot of time and testing!
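For intuition on what that figure means (a sketch; the standard conversion from decibels to a linear amplitude ratio is 10^(dB/20)):

```python
# Convert a front-to-back level difference in dB to a linear amplitude ratio.

def db_to_amplitude_ratio(db):
    return 10 ** (db / 20)

for db in (25, 30):
    print(f"{db} dB -> {db_to_amplitude_ratio(db):.1f}x amplitude")
```

So a call arriving from directly behind the unit is roughly 18-32 times weaker in amplitude than the same call from the front.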

If you do choose to integrate an external directional microphone, be careful with microphone placement to avoid potential ultrasonic reflections from any hard flat surface like a tree trunk, water surface, or the instrument housing itself. Here's an example of some echo calls from reflective surfaces from bat vocalizations: 

It would be helpful to hear how you plan on obtaining behavioral information (and what kind) to correlate with vocalizations? Observations, cameras, biologgers, etc.? This could inform responses a bit more.

Hi Jesse,

Thank you so much for your reply and for the fantastic knowledge and resources! I was unfamiliar with the plots, so thank you for providing some interpretation; I will have to work on understanding them better. This may change things (I was going off experience from fieldwork with the last iteration of this WA recorder, which recorded omnidirectionally), and I may choose to pilot the recorder without an external microphone this summer.

Regarding my plan for collecting behavioral data, I plan to follow 15 wild individuals in a reserve in Thailand (mostly dry evergreen and dry dipterocarp forest with some human modified areas). I intend to use instantaneous focal sampling to observe lorises in two shifts between 18:00-06:00h. During these focal follows I will record all behaviors at 5-min intervals and use all-occurrences sampling for social and feeding behaviors, using an established slow loris ethogram. Simultaneously, I plan to record vocalizations, with the help of a research assistant and field guide. So we will be carrying the recorder with us during behavioral data collection. I intend to match up the timestamped loris vocalizations with the behavioral data to understand the call's function.

If you have the resources, I would suggest testing the sensitivity and directionality of the system at relevant frequencies both with and without an external mic, and let the results dictate which will be best for your case study.

Another thing to think about, since you are manually taking the recordings, is whether a WA unit is really necessary; you're paying for the technology of a remote system without needing it. Other, cheaper handheld recorders (such as Zoom recorders) could free up money for a higher-quality directional microphone. Of note, though: common Zoom recorders like the H4n only sample up to 96 kHz, so the upper frequency limit (48 kHz) is getting very close to the frequencies you're likely wanting to measure.

See full post
discussion

AI for wolf ID

We're seeking training data for AI for wolf ID - we at T4C manage 3 Wildbook platforms: Wild North, Whiskerbook and the African Carnivore Wildbook (ACW).  ...

0
See full post