Harnessing large language models for coding, teaching and inclusion to empower research in ecology and evolution
9 May 2024 12:51pm
Check out this paper that reviews the current state of AI in conservation.
Indigenous communities and AI for Conservation
8 May 2024 12:32pm
8 May 2024 7:04pm
Oh yeah that would be awesome! Let me email you to follow up. I assume you're working with Alec Christie then? He was sharing your team's work in our chatgpt discussion:
9 May 2024 10:26am
Yes, exactly! Alec and I are working together on this.
Remote Sensing & GIS Group Leadership
8 May 2024 4:25pm
Introduction to CT Textbook
22 November 2022 10:43am
9 December 2022 11:45am
yes this is an excellent point. I love what the WILDLABS group did with the state of conservation tech report. Thanks for the example.
16 January 2023 2:51pm
This would be awesome Andrew! Happy to help with some organization of this/identifying potential chapter authors.
I'm sure you know about this already, but there is also this book from Wich & Piel: https://www.amazon.com/Conservation-Technology-Serge-Wich/dp/0198850255
8 May 2024 4:19pm
Hi Andrew,
Whatever became of your book? Also have you seen jupyterbook.org and mystmd.org? Both are free and open source software for publishing articles and books.
Best,
Vance
Survey on European biodiversity monitoring communities
7 May 2024 3:39pm
Biodiversa+ is running a survey to map the European biodiversity monitoring landscape, identify opportunities for collaboration, and strengthen coordination for improved monitoring.
Voices of Sustainability: Perspectives from Africa - Wholesome Sustainability Explained: What is E-PIE
7 May 2024 3:06am
Using drones and camtraps to find sloths in the canopy
18 July 2023 7:39pm
3 May 2024 6:48pm
Thank you for the tip, Eve! In fact, in the area where the foundation works, there clearly are dry seasons, the past few years much drier than normal, when trees lose a lot of their leaves.
6 May 2024 4:29pm
Yes, if the canopy is sparse enough, you can see through it with TIR what you cannot see in the RGB. We had tested with large mammals like rhinos and elephants that we could not see at all with the RGB under a semi-sparse canopy but that were very clearly visible in TIR. It was actually quite surprising how easily we could detect the mammals under the canopy. It's likely that mid-sized mammals living in the canopy will similarly be much easier to detect in those drier seasons, although we did not test small mammals for visibility through the seasons. Other research has, and there are a number of studies on primates now.
I did quite a bit of flying above the canopy and did not have many problems. It's just a matter of always flying a bit higher than the canopy. There are built-in crash avoidance mechanisms in the drones themselves for safety so they do not crash, although they do get confused by a very branchy understory. They often miss smaller branches. If you look in the specifications of the particular UAV, you will see they do not perform well with certain understories, so there is a chance of crashing. The same goes for telephone wires and other infrastructure that you have to be careful about.
Also, it's good practice to always be able to see the drone, line-of-sight, which is actually a requirement for flight operations in many countries, although you may be able to get around it by being in a tower or an open area.
Some studies have used AI classifiers and interesting frameworks to discuss full or partial detections, where sometimes it is unknown whether it is the animal of interest. I would carefully plan any fieldwork around the seasons and make sure to get your paperwork approved well before the months of the dry season. It's going to be your best chance to detect them.
7 May 2024 1:49am
Thank you for elaborating, @evebohnett! And for the heads up!
ChatGPT for conservation
16 January 2023 10:04am
2 May 2024 9:39pm
In my experience, ChatGPT-4 performs significantly better than version 3.5, especially in terms of contextual understanding. However, like any AI model, inaccuracies cannot be completely eliminated. I've also seen a video showing that Gemini appears to excel at literature reviews, though I haven't personally tested it yet. Here's the link to the video: https://www.youtube.com/watch?v=sPiOP_CB54A.
4 May 2024 6:44am
While GPT-3.5 is good for some activities, GPT-4 and GPT-4 Turbo are much better. Anthropic's Claude is also very good, on a par with GPT-4 for many tasks. As someone else has mentioned, the key is in the prompt you use, though ChatGPT is continually being extended to allow more contextual information to be included, for example external files that have been uploaded previously. Code execution and image generation are also possible with the paid version of ChatGPT, and the latest models include data up to the end of 2023 (I think). You can also call the OpenAI or other APIs programmatically to include these in your workflows for assisting with a variety of tasks.
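On the programmatic side, a request to the OpenAI chat completions endpoint can be built with just the standard library. This is a sketch, not a definitive implementation: the model name is an assumption, `OPENAI_API_KEY` must be set in your environment for a real call, and the payload shape should be checked against the current API reference:

```python
# Build (but don't automatically send) a request to the OpenAI chat API.
# Standard library only; swap in any model name your account can access.
import json
import os
import urllib.request

def build_request(prompt: str, model: str = "gpt-4") -> urllib.request.Request:
    """Assemble a chat-completions POST request with the prompt as a user message."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

req = build_request("Summarise the main threats to pangolins in two sentences.")
# urllib.request.urlopen(req) would send it; only do that with a valid API key.
print(json.loads(req.data)["model"])
```

From here it's easy to loop over a spreadsheet of questions or pipe the responses into the rest of a workflow.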
Regarding end results - as always, we're responsible for whatever outputs are ultimately published/shared etc.
For Conservation Evidence - you could try making your own GPT (chatGPT assistant) that can be published/shared using your own evidence base and prompt that should be well grounded and provide good responses (I should think). But don't use 3.5 for that, IMO.
4 May 2024 8:28pm
Undoubtedly, things will quickly evolve from just "straight" ChatGPT-n, BARD, Claude, etc. "standard" models to more specialized Retrieval Augmented Generation (RAG), where facts from authoritative sources and rules are supplied as context for the LLM to summarize in its response. You can direct ChatGPT and BARD: "Your response must be based on the reference sections provided", up to a few K of tokens. A huge amount of work is going into properly indexing reference materials in order to supply context to the reference models. Folks like FAO and CGIAR are indexing all their agricultural knowledge to feed the standard models with location, crop, livestock, etc. specialty "knowledge" to provide farmers automated advice via mobile phones. I can totally see the same for such mundane things as "how do I ... using ArcMap or QGIS?", based purely on the vast amount of documentation and tutorials. Google, ChatGPT, etc. already do a really good job; this just focuses the response entirely on the body of knowledge known in advance to be relevant.
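The RAG pattern described above can be sketched in a few lines. This is an illustrative toy, not how production systems work: the corpus snippets are made up, and retrieval here is naive keyword overlap rather than the embedding search real RAG pipelines use:

```python
# Toy Retrieval Augmented Generation: pick the most relevant reference
# passages, then prepend them as required context for the LLM prompt.

def score(query: str, passage: str) -> int:
    """Naive relevance: count query words appearing in the passage."""
    words = {w.strip("?.,!") for w in query.lower().split()}
    return sum(1 for w in words if w and w in passage.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring passages (stand-in for a vector index)."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model by supplying retrieved passages as its only sources."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return ("Your response must be based on the reference sections provided.\n"
            f"References:\n{context}\n\nQuestion: {query}")

corpus = [  # hypothetical knowledge-base snippets
    "Maize in this region is planted after the first rains in March.",
    "Goat vaccination schedules differ by province.",
    "QGIS supports raster reprojection via the Warp tool.",
]
print(build_prompt("When should I plant maize?", corpus))
```

The whole trick is that last string: the model answers from the supplied references instead of its general training data.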
I would highly recommend folks do some searching on "LLM RAG" - that's what's going nuts right now across the board.
Then there's stuff I like to call "un-SQL" ... unstructured query language ... that takes free-form queries and turns them into SQL queries, with supporting visualization code.
see:
"https://mlnotes.substack.com/p/no-more-text2sql-its-now-rag2sql"
"http://censusgpt.com"
etc.
As far as writing and evaluating proposals, I saw a paper on how summarization of public review forms is being developed in several cities.
see: "http://streetleveladvisors.com/?p=181562"
And that's just the standard LLMs; super-specialized LLMs based on Facebook's Llama are being built purely on domain-specific bodies of dialog - medical, etc. LOTS of PhDs to be done.
I think what will be critical in all this are strong audit trails and certification mechanisms to gain trust, especially when it comes to deceptively simple terms like "best".
Chris
Advice on a Master's project
4 August 2020 2:07pm
10 March 2021 8:03pm
Yes. The key output for synchronisation is the pulse per second (PPS) output which is synchronised very accurately to UTC. The TX from the GPS module is then useful for reading the time and positions. You generally don't need to be able to send commands to the module as most of the time the default settings are fine.
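To make the TX side concrete, here's a minimal parser for the $GPRMC sentence a GPS module emits (field layout per NMEA 0183; checksum validation omitted for brevity). You'd pair each parsed timestamp with the preceding PPS edge to know exactly which second the pulse marked:

```python
# Minimal $GPRMC parser: extracts UTC time and position from one NMEA
# sentence. A sketch only - real code should also verify the *XX checksum.

def parse_gprmc(sentence: str):
    """Return (hh, mm, ss, lat, lon) from a $GPRMC sentence, or None if no fix."""
    fields = sentence.split(",")
    if not fields[0].endswith("GPRMC") or fields[2] != "A":  # "A" = valid fix
        return None
    t = fields[1]                                   # hhmmss(.sss) UTC
    hh, mm, ss = int(t[0:2]), int(t[2:4]), float(t[4:])
    lat = int(fields[3][:2]) + float(fields[3][2:]) / 60.0   # ddmm.mmm
    if fields[4] == "S":
        lat = -lat
    lon = int(fields[5][:3]) + float(fields[5][3:]) / 60.0   # dddmm.mmm
    if fields[6] == "W":
        lon = -lon
    return hh, mm, ss, lat, lon

print(parse_gprmc("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"))
```

Since the default module settings are usually fine, reading this one sentence type plus the PPS line is often all the integration you need.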
18 March 2021 5:26pm
Hi Harry (and all)
Just wanted to share some potentially relevant papers that I've come across, in case you haven't found them already. Coming more from the ecology/conservation focused side of conservation tech, but potentially of use to see what's actually been deployed out there!
Yip, D. A., Knight, E. C., Haave-Audet, E., Wilson, S. J., Charchuk, C., Scott, C. D., ... & Bayne, E. M. (2020). Sound level measurements from audio recordings provide objective distance estimates for distance sampling wildlife populations. Remote Sensing in Ecology and Conservation, 6(3), 301-315. https://zslpublications.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/rse2.118.
Abadi, S. H., Wacker, D. W., Newton, J. G., & Flett, D. (2019). Acoustic localization of crows in pre-roost aggregations. The Journal of the Acoustical Society of America, 146(6), 4664-4671. https://asa.scitation.org/doi/full/10.1121/1.5138133.
Spillmann, B., van Noordwijk, M. A., Willems, E. P., Mitra Setia, T., Wipfli, U., & van Schaik, C. P. (2015). Validation of an acoustic location system to monitor Bornean orangutan (Pongo pygmaeus wurmbii) long calls. American Journal of Primatology, 77(7), 767-776. https://doi.org/10.1002/ajp.22398.
Kershenbaum, A., Owens, J. L., & Waller, S. (2019). Tracking cryptic animals using acoustic multilateration: A system for long-range wolf detection. The Journal of the Acoustical Society of America, 145(3), 1619-1628. https://doi.org/10.1121/1.5092973.
Stinco, P., Tesei, A., Dreo, R., & Micheli, M. (2021). Detection of envelope modulation and direction of arrival estimation of multiple noise sources with an acoustic vector sensor. The Journal of the Acoustical Society of America, 149(3), 1596-1608. https://doi.org/10.1121/10.0003628.
Rhinehart, T. A., Chronister, L. M., Devlin, T., & Kitzes, J. (2020). Acoustic localization of terrestrial wildlife: Current practices and future opportunities. Ecology and Evolution, 10(13), 6794-6818. https://onlinelibrary.wiley.com/doi/pdf/10.1002/ece3.6216.
4 May 2024 3:33pm
Hello!
Long time, no update. @StephODonnell suggested I post here with my thesis and some reflections.
---------------------------------------------------------
TL;DR
My thesis looked into the effects of environmental parameters like wind, temperature, and vegetation on acoustic classification and localisation of terrestrial wildlife, aiming to shed light on the implications for study design.
---------------------------------------------------------
Summary
My thesis centred on improving acoustic data acquisition via an analysis of the physics of sound propagation. The idea driving this was that there isn't enough attention paid to environmental effects on sound. The hope was that this could be used to improve the design of acoustic monitoring systems. COVID shifted the direction away from any practical work, but thankfully we managed to find our way through it by using data within the literature.
The thesis is split into two main sections:
- Improving SNR for sound classification:
I explored environmental factors affecting SNR and their implications for the detection space of a signal.
I've briefly had a look through updates in the field since my thesis, and there is a great paper here. This paper takes a similar approach but does so far more elegantly and completely - definitely worth exploring!
- Error Analysis in Sound Localisation:
I explored how differing environmental conditions from those assumed can influence the TDOA error on a microphone pair, and thus position error. The main parameters looked at were temperature, humidity, wind speed & direction and 2D model error.
Conclusion
I ended with some recommendations for system design such as adding additional sensors for more intelligent monitoring systems, or how to maximise the study area by maximising your SNR.
I also discussed future work. The dream output would have been taking the analysis in the thesis and creating an online tool to be used to optimise sensor placement. Practitioners could use it to quickly input their study features to determine the likely important parameters for their deployment location, and how they can improve data quality. This would involve taking the analysis in the thesis and packaging it into an app - I'm thinking R Shiny or similar.
---------------------------------------------------------
Thoughts
In the end, I felt that it took a long time to figure out a direction for the project, and how to actively contribute to the space. It was (obviously) difficult as I didn't come into this with prior knowledge or a structured plan, and so I was a little disappointed with the outcome. It would have been great to explore some of the things I put down as "future work" - but I guess that's part of the process.
The project was a great intersection of technology and environment and it definitely helped shape the next few years for me. Since finishing I have taken a couple of detours into the workforce. First to a marine robotics company, and then measuring forest carbon with LiDAR. I've now just started a PhD using ocean modelling to map biodiversity in the ocean with an AUV. So despite the challenges in trying to design a project within my interests, it has been pretty foundational for me going forward!
Thanks to everyone that offered help and advice. Likewise, I'm very happy to answer any questions from other students/anyone, and I'm really looking forward to being back in the wildlife tech space!
Harry
Thesis available here.
300 funding and finance opportunities in free database
4 May 2024 3:12pm
Affordable acoustic monitors for "whispering" bats?
30 April 2024 8:31pm
30 April 2024 11:28pm
Also some other bat experts I'd recommend reaching out to, if you haven't already (for this and any bat acoustic questions) - Adrià López-Baucells, Nils Bouillard, Kate Jones, Stuart Newson, plus obviously anyone at the big orgs like Bat Conservation Int'l, etc.
3 May 2024 3:28pm
Hey @ccosma if you are interested in multiple cheap sensors, Phil Atkin and I are making initial batches of the pippyG bat detector. They can record for 4-7 days at the moment, but could be modified to fit your needs.
3 May 2024 9:04pm
I think I've landed on the Wildlife Acoustics Song Meter Mini Bat 2 for now, but I'm definitely interested to see how this cheaper tech progresses
AI & Gamified Citizen Science
3 May 2024 7:24am
3 May 2024 5:09pm
Check out FathomVerse, a new game by MBARI folks for involving citizen scientists in improving algorithms to ID deep sea critters!
3 May 2024 8:28pm
This is so cool! I am 1000% going to see if they want to come talk about it at Variety Hour!
Acoustically Transparent Epoxy
26 April 2024 3:26pm
1 May 2024 5:35pm
Same issues here. A MEMS is a great idea to pot, but you really need a piezoelectric element for this to work, not a capacitance-based MEMS (btw, they're all capacitance except for one, now discontinued - it was originally made by Vesper, but the company was bought out last year and the MEMS is no longer made).
This is because you're no longer really making a typical microphone; this would be a contact-type hydrophone. For waterproofing, you can actually get a waterproof MEMS. As long as you're not submerging it for an extended period, it should do the job. Be sure to keep the cable short between the PCB and the mic, as in my experience you'll get noise otherwise.
For your general question on the "best" epoxy for sound transparency: in general, the harder the material, the lower the acoustic impedance. I use Epotec 301 resin with a hardness of 85. Your shape will also influence the resonance frequencies, meaning the flat frequency response will be distorted and you'll probably get distorted audio.
3 May 2024 1:25am
You generally don't want to pot MEMS microphones since they're designed to pick up on air pressure changes and adding any material in front of the microphone just introduces another transition layer where pressure waves need to propagate through. Also, potting the MEMS microphone can be tricky since if you get any material in the port, you could damage the microphone or drastically reduce its performance. If you want to seal something with epoxy, take a look at contact microphones. Higher frequencies will be attenuated but depending on the application, it could work.
There are companies, however, that design fabrics that are waterproof/resistant but have a relatively low acoustic impedance. SAATI has a variety of samples that you can request and GORE makes Acoustic Vents that could work. You can design a mechanical housing around your MEMS microphone with small perforations that are covered by one of these materials. I did this for one of my latest projects and it holds up just fine in heavy rain conditions.
3 May 2024 5:34pm
Hi Jesse,
For a material to be acoustically transparent (in air), the speed of sound in the material times its density must match that of air. Realistically, any solid material will have a greater density than air, and a higher speed of sound to boot, so I'm afraid there's no way to match it to air. Sorry.
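The impedance-matching point above can be put in rough numbers. The material properties below are approximate textbook values (assumed for illustration), but they show why so little airborne sound energy enters any solid:

```python
# Acoustic impedance Z = density * speed of sound. At a boundary, the
# pressure reflection coefficient is R = (Z2 - Z1) / (Z2 + Z1), and the
# fraction of sound ENERGY transmitted across is 1 - R^2.
# Densities/speeds below are rough textbook values, for illustration only.

def impedance(density_kg_m3: float, speed_m_s: float) -> float:
    """Characteristic acoustic impedance in rayls (kg / m^2 / s)."""
    return density_kg_m3 * speed_m_s

def transmitted_fraction(z1: float, z2: float) -> float:
    """Energy fraction crossing a boundary between impedances z1 and z2."""
    r = (z2 - z1) / (z2 + z1)   # pressure reflection coefficient
    return 1.0 - r * r

z_air = impedance(1.2, 343)       # ~410 rayl
z_epoxy = impedance(1100, 2500)   # ~2.8 million rayl (generic epoxy)
print(f"air -> epoxy transmits {transmitted_fraction(z_air, z_epoxy):.2%} of the energy")
```

The mismatch is four orders of magnitude, so well under a tenth of a percent of the energy gets through - which is the quantitative version of "there's no way to match it to air."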
Travel grants for insect monitoring and AI
3 May 2024 5:20pm
CollarID: multimodal wearable sensor system for wild and domesticated dogs
3 May 2024 1:42am
3 May 2024 10:14am
Hi Patrick,
This is so cool, thanks for sharing! It's also a perfect example of what we were hoping to capture in the R&D section of the inventory - I've created a new entry for #CollarID so it's discoverable and so we can track how it evolves across any mentions in different posts/discussions that come up on WILDLABS. This thread appears on the listing, and I'll make you three the contacts for it too. But please do go in and update any of the info there as well!
Steph
3 May 2024 2:01pm
Hi Steph,
We appreciate the support! Thanks for the tag and your help managing the community!
Patrick
Drop-deployed HydroMoth
2 April 2024 10:20am
15 April 2024 6:53am
Hi Matthew,
Thanks for your advice, this is really helpful!
I'm planning to use it in a seagrass meadow survey for a series of ~20 drops/sites to around 30 m, recording for around 10 minutes each time, in Cornwall, UK.
At this stage I reckon we won't exceed 30 m, but based on your advice, this doesn't sound like the best setup for the surveys we want to try.
We will try the Aquarian H1a, attached to the Zoom H1e unit, through a PVC case. This is what Aquarian recommended to me when I contacted them too.
Thanks for the advice. To be honest, the software component is what interested me most about the AudioMoth - is there any other open-source software you would recommend for this?
Best wishes,
Sol
21 April 2024 7:10pm
Hey Sol,
No problem at all. Depending on your configuration, the AudioMoth software would have to run on a PCB with the same chip as the AudioMoth/HydroMoth, so you would have to build a PCB centred around that chip. You could mimic the functionality of the AudioMoth software on another chip, like a Raspberry Pi with Python's pyaudio library, for example. The problem you would have is that the H1a requires phantom power, so it's not plug and play. I'm not too familiar with the H1e, but maybe you can control the microphone through the recorder, programmed via activations by the RPi (not that this is the most efficient MCU for this application, but it is user friendly). A simpler solution might be to just record continuously and play a sound, or take notes of when your 10-min deployment starts. I think it should last you >6 hours with a set of lithium Energizer batteries. You may want to think about putting a penetrator on the PVC housing for a push button or switch to start when you deploy. They make a few waterproof options.
Just something else that occurred to me: if you're dropping these systems, you'll want to ensure that the system isn't wobbling in the seagrass, as that will probably be all you hear on the recordings, especially if you plan to deploy shallower. For my studies in Curacao, we aim to be 5 lbs negative, but this all depends on your current and surface action. You might also want to think about the time of day you're recording and biodiversity in general. I might suggest recording the site for a bit (a couple of days or a week) prior to your study to see what you should account for (e.g. tide flow/current/anthropogenic disturbance) and to determine diel patterning of the vocalizations you are aiming to collect if subsampling at 10 minutes.
Cheers,
Matt
3 May 2024 12:55pm
Hi Sol,
If the maximum depth is 30 m, it would be worth experimenting with HydroMoth in this application, especially if the deployment time is short. As Matt says, the air-filled case means it is not possible to accurately calibrate the signal strength due to the directionality of the response. For some applications, this doesn't matter. For others, it may.
Another option for longer/deeper deployments would be an Aquarian H2D hydrophone which will plug directly into AudioMoth Dev or AudioMoth 1.2 (with the 3.5mm jack added). You can then use any appropriately sized battery pack.
If you also connect a magnetic switch, as per the GPS board, you can stop and start recording from outside the housing with the standard firmware.
Alex
Your HydroMoth experience!
29 July 2022 1:38pm
1 May 2024 5:45pm
Vinegar is also a great solution! Let it sit overnight and then just scrub it off. As a warning, if you don't clean it, your sensitivity does decrease. If you keep it out there for a month, you might actually see the amplitude of your calls decrease over the month, or you might detect fewer calls.
1 May 2024 5:51pm
Hey! I would recommend a few things:
1) Set up at least two at the same site, kind of back to back or side to side, if you have that many. Directionality can influence the number of calls you get, and it's just good to know your error rate.
2) Experiment with breaks and recording duration. You won't collect anything if the write time is not long enough to write to your SD card, and you'll get empty files.
3) Clean your device every time you take it out or see visible biofouling. Also, add silicone grease to your O-ring every time. Take it out with an O-ring pick and clean the plastic seal, looking for any sand/mud/debris. We've had a few flooding incidents, but this is probably because we open them all the time.
4) The lower the sampling rate, the longer you can record for the same storage, so keep it as low as you can without clipping your calls (it needs to be at least twice the highest frequency of interest). Fish are lower than pretty much everything (2 kHz-3 kHz).
I hope this helps!
2 May 2024 6:45pm
Oh wow, thank you so much!!!
I will keep those four pieces of advice in mind!
AI-enabled image query system
2 May 2024 2:16am
WILDLABS downtime and performance issues due to AI bot attack
29 April 2024 3:35pm
1 May 2024 6:05pm
I noticed the site being annoyingly slow some time last week. Thank you for clearing that up, for finding the cause and solving the issue.
I'm not claiming deep knowledge of AI, but as a member of this community, I'd be happy to give you my insights.
For starters: I am not categorically against bots scraping 'my' content, whether for AI training purposes, search engines, or other purposes. In principle, I find it a good thing that this forum is open to non-member users, and to me that extends to non-member use. Obviously, there are some exceptions here. For example when locations of individuals of endangered species are discussed, that should be behind closed doors.
Continuing down this line of reasoning, apparently it matters to me how 'my' content is being used. So, if someone wants to make an AI to aid nature conservation, I say, let them have it. There is the practical side of scraping activities that may be blocking or hindering the site, but there may be practical solutions for this. I don't know, say, have special opening hours for such things, or have the site engine prepare the data and make it available as a data dump somewhere.
Since purpose matters, organizations or individuals wanting to scrape the site should be vetted and given access or not. This is far more easily said than done. However, every step in that direction would be worthwhile, because most technology publicly discussed here has good uses for nature conservation but equally bad uses for nature destruction. For example, it's good to acoustically monitor bird sounds to monitor species, but that also comes in handy when you are in the exotic bird trafficking business.
One could argue that, since we allow public access, we should not care why bots are scraping the site either. I would not go that far. After all, individual people browsing the site with nefarious purposes in mind is something different from a bot systematically scraping the entire site (or sections thereof) for bad purposes. It's a matter of scale.
Hydromoth settings
9 May 2022 5:03pm
11 August 2023 8:10pm
Hi Ian,
I have hours of an unidentified creature recorded during overnight recording sessions with multiple hydrophones. We think it is a platypus, but there is nothing to compare against that isn't from captive sounds. I am waiting on the HydroMoth to become available again so I can do longer-term monitoring.
1 May 2024 5:26pm
Hi everyone, I just got my first HydroMoth and want to test it on aquatic soundscapes, with interest in Tomistoma, otters, boat traffic, and maybe fish too! But before that I may test it at a zoo.
What advice, tips, or suggestions do you have for a first-time user? Thank you!
1 May 2024 5:42pm
You won't get any audio if you don't allow enough time for the HydroMoth/AudioMoth to write. So when you do continuous recording you need to experiment a little. I'm sure there is a formula to calculate this, but I haven't figured it out. I typically do 5-min recordings with 10 seconds of write/break time. I think this system expects you to subsample, so keep that in mind instead of continuous recording.
I do 8 kHz sampling and get about 7 days of data; then the voltage gets too low and you start getting SD card write errors and missing files.
In terms of analysis, I've had trouble understanding the directionality of the hydromoth and incorporating this into my studies. I always set up two at the same site to check the variability in my call detections and include this into my error analysis.
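There is a rough back-of-envelope for the write-time question above: an uncompressed mono 16-bit WAV grows at sample rate × 2 bytes per second, and the break must be long enough for the SD card to flush that file. The card write speed used below is an assumed placeholder for a slow card under real conditions; measure your own:

```python
# Estimate recording file size and the minimum break needed to flush it.
# Assumes uncompressed mono 16-bit WAV; the card speed is a placeholder.

def wav_bytes(sample_rate_hz: int, seconds: float, bytes_per_sample: int = 2) -> float:
    """Approximate size of an uncompressed mono WAV recording, in bytes."""
    return sample_rate_hz * bytes_per_sample * seconds

def min_write_break_s(sample_rate_hz: int, rec_seconds: float,
                      card_write_mb_s: float = 0.5) -> float:
    """Seconds the SD card needs to flush one recording (card speed assumed)."""
    return wav_bytes(sample_rate_hz, rec_seconds) / (card_write_mb_s * 1e6)

# A 5-minute recording at 8 kHz, as in the settings described above:
size_mb = wav_bytes(8000, 300) / 1e6
print(f"{size_mb:.1f} MB per file, needs >= {min_write_break_s(8000, 300):.1f} s to flush")
```

At these settings that works out to about 4.8 MB per file and roughly 10 s of flush time at the assumed slow card speed, which is in the ballpark of the 10-second break described above.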
WILDLABS AWARDS 2024 - Underwater Passive Acoustic Monitoring (UPAM) for threatened Andean water frogs
30 March 2024 3:54pm
5 April 2024 12:13pm
Congratulations, very exciting! Keep us updated!
7 April 2024 6:09pm
This is so cool @Mauricio_Akmentins - congrats and look forward to seeing your project evolve!
1 May 2024 5:17pm
Congratulations! My first HydroMoth just arrived yesterday and I'm so excited! Looking forward to updates from your project!!!
Elephant Collective Behaviour Project - Principal Investigator
1 May 2024 1:59pm
The Inventory User Guide
1 May 2024 12:46pm
Introducing The Inventory!
1 May 2024 12:46pm
2 May 2024 3:08pm
3 May 2024 5:33pm
17 May 2024 7:29am
Hiring Chief Engineer at Conservation X Labs
1 May 2024 12:19pm
Attaching a directional microphone to a Wildlife Acoustics ultrasonic recorder?
29 April 2024 4:47pm
30 April 2024 4:28pm
Hi Luke, sounds like an interesting project! One thing to note is the ultrasonic Wildlife Acoustics unit you're looking at is already fairly directional. Take a look at the horizontal directionality plot towards the bottom:
You can see that for the frequencies relevant to slow lorises' ultrasonic calls (40-60 kHz), there is a 25-30 dB difference between 0 and 180 horizontal degrees. It's not perfect, but it is close to some directional mics, and if it works well enough for your project it would save a lot of time and testing!
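For intuition, that front-to-back difference can be converted to plain ratios with the standard decibel formulas (a generic conversion, nothing specific to this recorder):

```python
# Convert a level difference in dB to amplitude and power ratios.
# dB is 20*log10 for amplitude (pressure) and 10*log10 for power.

def db_to_amplitude_ratio(db: float) -> float:
    """Amplitude (pressure) ratio corresponding to a level difference in dB."""
    return 10 ** (db / 20)

def db_to_power_ratio(db: float) -> float:
    """Power (intensity) ratio corresponding to a level difference in dB."""
    return 10 ** (db / 10)

for db in (25, 30):
    print(f"{db} dB front-to-back = {db_to_amplitude_ratio(db):.0f}x amplitude, "
          f"{db_to_power_ratio(db):.0f}x power")
```

So a call arriving from behind is attenuated to roughly 1/18th-1/32nd of the on-axis amplitude, which is why the unit already behaves a lot like a directional mic.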
If you do choose to integrate an external directional microphone, be careful with microphone placement to avoid potential ultrasonic reflections from any hard flat surface like a tree trunk, water surface, or the instrument housing itself. Here's an example of some echo calls from reflective surfaces from bat vocalizations:
It would be helpful to hear how you plan on obtaining behavioral information (and what kind) to correlate with vocalizations? Observations, cameras, biologgers, etc.? This could inform responses a bit more.
30 April 2024 6:19pm
Hi Jesse,
Thank you so much for your reply and for the fantastic knowledge and resources! I was unfamiliar with the plots, so thank you for providing some interpretation - I will have to work on understanding them better. This may change things (I was going off of experience from field work with the last iteration of this WA recorder, which had omnidirectional recording) and I may choose to pilot the recorder without an external microphone this summer.
Regarding my plan for collecting behavioral data, I plan to follow 15 wild individuals in a reserve in Thailand (mostly dry evergreen and dry dipterocarp forest with some human modified areas). I intend to use instantaneous focal sampling to observe lorises in two shifts between 18:00-06:00h. During these focal follows I will record all behaviors at 5-min intervals and use all-occurrences sampling for social and feeding behaviors, using an established slow loris ethogram. Simultaneously, I plan to record vocalizations, with the help of a research assistant and field guide. So we will be carrying the recorder with us during behavioral data collection. I intend to match up the timestamped loris vocalizations with the behavioral data to understand the call's function.
30 April 2024 7:00pm
If you have the resources, I would suggest testing the sensitivity and directionality of the system at relevant frequencies both with and without an external mic, and let the results dictate which will be best for your case study.
Another thing to think about, since you are manually taking the recordings, is whether a WA unit is really necessary. You're paying for the technology of a remote system without needing it. Other, cheaper handheld recorders (such as Zoom recorders) could free up $$ for a higher-quality directional microphone. Of note, though, common Zoom recorders like the H4n only sample up to 96 kHz, for which the upper frequency limit (48 kHz) is getting very close to the frequencies you're likely wanting to measure.
InConversation: Season 1 *Final Episode*
30 April 2024 11:38am
Delving into #tech4wildlife Innovation across East Africa with Sandra Maryanne & Catherine Njore
30 April 2024 11:37am
Job Opening: Associate Curator of International Conservation at North Carolina Zoo
29 April 2024 9:17pm
AI for wolf ID
29 April 2024 7:09pm
8 May 2024 5:09pm
Thank you for this advice!
If you need a speaker for Variety hour, I would be happy to talk about the work we are doing in the Conservation Evidence Group to use LLMs for finding and reviewing evidence of conservation actions.