Open Source Solutions / Feed

This group is a place to share low-cost, open-source devices for conservation; describe how they are being used, including what needs they address and how they fit into the wider conservation tech market; identify the obstacles to advancing the capacity of these technologies; and discuss the future of these solutions - particularly their sustainability and how best to collaborate moving forward.


CollarID: multimodal wearable sensor system for wild and domesticated dogs

Hi Everyone! My team and I are new to the WildLabs network, so we'd like to post an early-stage project we've been working on to get some feedback! Summary: The...


Hi Patrick, 

This is so cool, thanks for sharing! It's also a perfect example of what we were hoping to capture in the R&D section of the inventory - I've created a new entry for #CollarID so it's discoverable and so we can track how it evolves across any mentions in different posts/discussions that come up on WILDLABS. This thread appears on the listing, and I'll make you three the contacts for it too. But please do go in and update any of the info there as well! 


See full post

WILDLABS AWARDS 2024 - Underwater Passive Acoustic Monitoring (UPAM) for threatened Andean water frogs

In our project awarded with the "2024 WILDLABS Awards", we will develop the first Underwater Passive Acoustic Monitoring (UPAM) program to assess the conservation status and for...


This is so cool @Mauricio_Akmentins - congrats and look forward to seeing your project evolve!

Congratulations! My first HydroMoth just arrived yesterday and I'm so excited! Looking forward to updates from your project!

See full post

Introducing The Inventory!

The Inventory is your one-stop shop for conservation technology tools, organisations, and R&D projects. Start contributing to it now!

This is fantastic, congrats to the WildLabs team! Look forward to diving in.
Hi @JakeBurton, thanks for your great work on the Inventory! Would it be possible to see or filter new entries or reviews? Greetings from the Austrian forest, Robin
See full post

Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation (v1.0)

Welcome to Pytorch-Wildlife v1.0! At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite, share, and...


Hi everyone! @zhongqimiao was kind enough to join Variety Hour last month to talk more about Pytorch-Wildlife, so the recording might be of interest to folks in this thread. Catch up here: 

Hi @zhongqimiao ,

Have you faced an issue like this while using MegaDetector?

The conflict is caused by:
pytorchwildlife depends on torch==1.10.1
pytorchwildlife depends on torch==1.10.1
pytorchwildlife depends on torch==1.10.1


If yes, how did you solve it? Or do you have any ideas?

torch 1.10.1 doesn't seem to exist
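One possible cause, offered as an assumption rather than a confirmed diagnosis: torch 1.10.x wheels were only published for Python 3.6-3.9, so on a newer interpreter pip cannot satisfy the torch==1.10.1 pin and prints a conflict like the one above, even though the version exists on PyPI. A quick sketch to check your interpreter (the helper name is hypothetical):

```python
import sys

def torch_1_10_wheels_available(major: int, minor: int) -> bool:
    """torch 1.10.x wheels were published for Python 3.6-3.9 only;
    on 3.10+ pip cannot resolve torch==1.10.1 and reports a conflict."""
    return (3, 6) <= (major, minor) <= (3, 9)

if not torch_1_10_wheels_available(sys.version_info.major, sys.version_info.minor):
    print("Consider a Python 3.9 environment (e.g. via conda) before installing pytorchwildlife.")
```

If that's the issue, creating a fresh environment pinned to an older Python usually resolves the conflict.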

See full post

WILDLABS AWARDS 2024 - No-code custom AI for camera trap species classification

We're excited to introduce our project that will enable conservationists to easily train models (no code!) that they can use to identify species in their camera trap images. As we...


Happy to explain for sure. By Timelapse I mean images taken every 15 minutes, and sometimes the same seals (anywhere from 1 to 70 individuals) were in the image for many consecutive images. 

Got it. We should definitely be able to handle those images. That said, if you're just looking for counts, then I'd recommend running MegaDetector, an object detection model that outputs a bounding box around each animal.
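For the counting workflow, here is a small sketch of tallying animals per image from MegaDetector's batch-output JSON. It assumes the standard output schema (an "images" list whose entries carry "detections" with "category" and "conf", where category "1" is animal); the confidence threshold is an illustrative choice:

```python
# Count animal detections per image in a MegaDetector batch-output dict.
def count_animals(md_output: dict, conf_threshold: float = 0.2) -> dict:
    counts = {}
    for image in md_output["images"]:
        # Category "1" is "animal" in MegaDetector's detection categories.
        n = sum(1 for d in image.get("detections", [])
                if d["category"] == "1" and d["conf"] >= conf_threshold)
        counts[image["file"]] = n
    return counts

# Tiny hand-made example in the assumed schema (file names are hypothetical).
example = {"images": [{"file": "seal_cam/img_0001.jpg",
                       "detections": [
                           {"category": "1", "conf": 0.93, "bbox": [0.1, 0.2, 0.3, 0.4]},
                           {"category": "1", "conf": 0.08, "bbox": [0.5, 0.5, 0.2, 0.2]}]}]}
print(count_animals(example))  # the low-confidence detection is filtered out
```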

Hi, this is pretty interesting to me. I plan to fly a drone over wild areas to look for invasive species incursions. Feral hogs are especially bad, and in the Everglades there is a big invasion of huge snakes. In various areas there are large herds of wild horses that will eat themselves out of habitat, just to name a few examples. The data would probably also be useful for spotting invasive weeds; that's not my focus, but the government of Canada is thinking about it.

Does your research focus on photos, or can you analyze LiDAR? I don't really know what emitters are available to fly over an area, or which beam type would be best for each animal type. I know that some drones carry a LiDAR besides a camera, for example. Maybe a thermal camera would be best for flying at night.

See full post


WILDLABS AWARDS 2024 - MothBox

We are incredibly thankful to WILDLABS and Arm for selecting the MothBox for the 2024 WILDLABS Awards. The MothBox is an automated light trap that attracts and...


Already an update from @hikinghack

Yeah, we got it about as bare-bones as possible for this level of photo resolution and duration in the field. The main costs right now are:

Pi - $80
PiJuice - $75
Battery - $85
64 MP camera - $60

which lands us at $300 already. But we might be able to eliminate the PiJuice and have fewer moving parts, and cut 1/4 of our costs! Compared to something like a single Logitech Brio camera that sells for $200 and only gets us about 16 MP, we've made this thing as cheap as we could figure out! :)
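The numbers quoted above can be sanity-checked in a few lines (part names and prices are as given in the post):

```python
# Bill-of-materials check for the figures quoted above.
bom = {"Pi": 80, "PiJuice": 75, "Battery": 85, "64MP camera": 60}
total = sum(bom.values())
savings_fraction = bom["PiJuice"] / total
print(total)             # 300
print(savings_fraction)  # 0.25 -> dropping the PiJuice cuts 1/4 of the cost
```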

See full post

WILDLABS AWARDS 2024 - TimeLord: A low-cost, low-power and low-difficulty timer board to control battery-powered devices

Hi everyone, @Alasdair from Arribada Initiative and I are so pleased to announce our TimeLord project as one of the lucky winners of this year's WILDLABS Awards. What is TimeLord...


Thanks @Freaklabs, I think you'll really enjoy getting involved with this too as we're looking for input from makers in the community to get the most from the approach and to capture features and usability ideas from a large number of people.

I have a new modular drop-off tag built using @Rob_Appleby's original SensorDrop board that I think would also be great for this project, to see if we can drop different compartments or run several different timed events with the one TimeLord board.

Most importantly, we have to make it play a MIDI version of the Doctor Who theme song when you arm the device. That has to be the #1 feature if you ask me!


See full post

The Variety Hour: 2024 Lineup

You’re invited to the WILDLABS Variety Hour, a monthly event that connects you to conservation tech's most exciting projects, research, and ideas. We can't wait to bring you a whole new season of speakers and...

See full post

Passionate engineer offering funding and tech solutions pro bono.

My name is Krasi Georgiev and I run an initiative focused on providing funding and tech solutions for stories with a real-world impact. The main reason is that I am passionate...


Hi Krasi! Greetings from Brazil!

That's a cool journey you've started! Congratulations. I felt like theSearchLife resonates with the work I'm involved in around here. In a nutshell, I live at the heart of the largest remnant of Atlantic Forest on the planet - one of the most biodiverse biomes in existence. The subregion where I live is named after and bathed by the "Rio Sagrado" (Sacred River), a magnificent water body with rich cultural significance to the region (it once served as a safe zone for fleeing slaves). The river and the entire bioregion are currently under threat from a truly devastating railroad project which, to say the least, is planned to cut through over 100 water springs!

In the face of that, the local community (myself included) has been mobilizing to raise awareness of the issue and hopefully stop this madness (fueled by strong international forces). One way we've been fighting is by seeking recognition of the sacred river as an entity with legal rights, which can manifest itself in court against such threats. To illustrate what this would look like, I've been developing an AI (LLM) powered avatar for the river, which could serve as its human-relatable voice. An existing prototype of the avatar is available here. It has been fine-tuned on over 20 scientific papers on the Sacred River watershed.

Right now, others and I are mobilizing to gather the conditions and resources to develop the next version of the avatar, which would include remote sensing capacities so the avatar is directly connected to the river and can potentially write full scientific reports on its physical properties (i.e. water quality) and the surrounding biodiversity. In fact, three other members of the WildLabs community and I have just applied to the WildLabs Grant program to accomplish that. Hopefully the results are positive.

Finally, it's worth mentioning that our mobilization around providing an expression medium for the river has been multimodal, including the creation of a short film based on theatrical mobilizations we did during a fest dedicated to the river and its surrounding more-than-human communities. You can check that out here:


Let's chat if any of that catches your interest!


Hi Danilo, you seem very passionate about this initiative, which is a good start.
It is an interesting coincidence that I am starting another project, for the coral reefs in the Philippines, which also requires water analytics, so I can probably work on both projects at the same time.

Let's have a call and discuss; I'll send you a PM with my contact details.

There is a tech glitch and I don't get email notifications from here.

See full post

Monitoring setup in the forest based on WiFi at 2.4 GHz

I am planning to set up a network using 2.4 GHz wireless. Can I get data on signal distortion in a forest area? Is there any special...


Hi Dilip,

I do not have data about signal distortion in a forest area with the signal you intend to use.

However, in a savannah environment, with a tower on the highest point of the park, the LoRa signal (~900 MHz) is less attenuated than the WiFi signal (2.4 GHz). This follows from physics: frequency determines wavelength, and the lower the frequency (hence the longer the wavelength), the less the signal is obstructed.

So, without interfering with your design, I would say that in a forest configuration WiFi will need more access points deployed and may be more costly; and in your context, even when using LoRa, you will need more gateways than I have in a savannah.

To estimate the number of gateways, you may want to use terrain visibility analysis.

For the camera deployment, you will need to comply with the sampling methods defined in your research. However, if it is for surveillance, you may want to rely on terrain visibility analysis as well.

Best regards.
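The frequency dependence can be made concrete with free-space path loss (FSPL), which already favors lower frequencies before any foliage loss is added; real forest attenuation comes on top of this, so treat these as optimistic lower bounds:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz):
    FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44"""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# At 1 km, 2.4 GHz WiFi loses ~8.5 dB more than 900 MHz LoRa
# before any foliage attenuation is even counted.
print(fspl_db(1.0, 900.0))   # ~91.5 dB
print(fspl_db(1.0, 2400.0))  # ~100.0 dB
```

The ~8.5 dB gap is exactly 20·log10(2400/900) and holds at every distance, which is why the lower band needs fewer gateways for the same coverage.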

I've got quite a lot of experience with wireless in forested areas and over long(ish) ranges.

Using a wifi mesh is totally possible, and it will work.  You will likely not get great range between units.  You will likely need to have your mesh be fairly adaptable as conditions change.

Wireless and forests interact in somewhat unpredictable ways, it turns out. Generally, wireless is attenuated by water in the line-of-sight between stations. From the WiFi perspective, a tree is just a lot of water up in the air. Denser forest = more water = worse communications. LoRa @ 900 MHz is less prone to this issue than WiFi @ 2.4 GHz, and way less prone than WiFi @ 5 GHz. But LoRa is also fairly low data rate. Streaming video via LoRa is possible with a lot of work, but video streaming is not at all what LoRa was built to do, and it does it quite poorly at best.

The real issue I see here is to do with power levels.  CCTV, audio streaming, etc are high data rate activities.  You may need quite a lot of power to run these systems effectively both for the initial data collection and then for the communications.

If you are planning to run mains power to each of these units, you may be better off running an ethernet cable as well.  Alternatively, you can run "power line" networking, which has remarkably good bandwidth and gets you back down to a single twisted pair for power and communications.

If you are planning to run off batteries and/or solar, you may need a somewhat large power system to support your application.
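To put rough numbers on that, here is a back-of-the-envelope battery sizing sketch; every figure (camera draw, radio draw, usable-capacity derating) is an assumption for illustration, not a measured value:

```python
# Rough daily energy budget for a continuously streaming camera node.
# All figures below are illustrative assumptions, not measurements.
camera_w = 4.0        # camera + encoder average draw (W)
radio_w = 2.5         # WiFi radio average draw while streaming (W)
hours_per_day = 24
derate = 0.7          # usable fraction of nominal battery capacity

daily_wh = (camera_w + radio_w) * hours_per_day
battery_wh_for_2_days = daily_wh * 2 / derate
print(daily_wh)                      # 156.0 Wh/day
print(round(battery_wh_for_2_days))  # ~446 Wh of battery for 2 days autonomy
```

Even with these modest assumptions, two days of autonomy needs a battery in the several-hundred-Wh class, which is why mains or a substantial solar array comes up so quickly for streaming applications.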


I would recommend going with Ubiquiti 2.4 GHz devices, which have performed relatively well in the dense foliage of the California redwood forests. It took a lot of tweaking to find paths through the dense tree cover, as mentioned in the previous posts.


See full post

How are Outdoor Fire Detection Systems Adapted for Small Forest Areas, Considering the Predominance of Indoor Fire Detectors?

How are fire detection mechanisms tailored for outdoor environments, particularly in small forest areas, given that most fire and smoke detectors are designed for indoor use?


Fire detection is a sort of broad idea.  Usually people detect the products of fire, and most often this is smoke.

Many home fire detectors in the US are ionization detectors: a small radioactive source ionizes the air in a chamber, and smoke entering the chamber reduces the ion current, which triggers the alarm.

For outdoor fire detection, PM2.5 can be a very good smoke proxy, and outdoor PM2.5 sensing is pretty accessible.

This one is very popular in my area. 
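As a sketch of how a PM2.5 reading becomes a fire alert, one simple approach is a threshold with persistence, so a single dust or exhaust spike doesn't trigger it; the threshold and persistence values here are illustrative assumptions, not calibrated numbers:

```python
# Flag a possible fire when PM2.5 stays above a threshold for several
# consecutive samples. The 100 ug/m3 threshold and 3-sample persistence
# are illustrative assumptions, not calibrated values.
from collections import deque

def smoke_alarm(readings_ugm3, threshold=100.0, persistence=3):
    window = deque(maxlen=persistence)
    for r in readings_ugm3:
        window.append(r > threshold)
        if len(window) == persistence and all(window):
            return True  # sustained high PM2.5 -> likely smoke
    return False

print(smoke_alarm([8, 9, 150, 12, 10]))         # False: one spike (dust, exhaust)
print(smoke_alarm([8, 9, 150, 180, 220, 240]))  # True: sustained rise
```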


See full post

Open-source kinetic energy harvesting collar - Kinefox

Hello everyone,I ran across an article today (at the bottom) that talks about an open-source, kinetic energy harvesting collar ("Kinefox"). It sounds pretty neat...anyways,...


This is super cool! 

I was wondering whether development will extend to marine or aquatic animals, perhaps harvesting energy with something like a water wheel (though that might burden the hydrodynamics). Thank you for sharing!



See full post

Recycled & DIY Remote Monitoring Buoy

Hello everybody, My name is Brett Smith, and I wanted to share an open source remote monitoring buoy we have been working on in Seychelles as part of our company named "...


Hello fellow Brett. Cool project. You mentioned a water-seal testing process. Is there documentation on that?

I don't have anything written up, but I can tell you what parts we used and how we tested.

It's pretty straightforward; we used this M10 Enclosure Vent from Blue Robotics:


Along with this nipple adapter:

Then you can use any cheap hand-held brake bleeder pump to connect to your enclosure. You can pull a small vacuum and make sure the pressure holds.
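The pass/fail logic for such a vacuum-hold test can be sketched like this; the vacuum level and allowed decay fraction are illustrative assumptions, not Blue Robotics specifications:

```python
# Decide whether an enclosure passes a vacuum-hold test: pull a partial
# vacuum, wait a fixed time, and check how much of it was lost. The 10%
# allowed decay is an illustrative assumption, not a vendor spec.
def holds_vacuum(start_mbar: float, end_mbar: float,
                 max_decay_fraction: float = 0.10) -> bool:
    """start/end are gauge pressures below ambient, e.g. -200 mbar."""
    decay = (end_mbar - start_mbar) / abs(start_mbar)  # fraction of vacuum lost
    return decay <= max_decay_fraction

print(holds_vacuum(-200.0, -195.0))  # True: lost 2.5% over the hold period
print(holds_vacuum(-200.0, -150.0))  # False: lost 25%, enclosure leaks
```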

Here's a tutorial video from Blue Robotics:


Let me know if you have any questions or if I can help out.

See full post

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...


Hi Lucy

As others have mentioned, camera trap temperature readouts are inaccurate, and you have the additional problem that the camera's temperature can rise 10 °C if the sun shines on it.

I would also agree with the suggestion of getting the moon phase data off the internet.


Do you need to do this for just one project? And do you use the same camera make/model for every deployment, or at least a finite number of makes/models? If the number of camera makes/models you need to worry about is finite, even if it's large, I wouldn't try to solve this for the general case; I would just hard-code the pixel ranges where the temperature/moon information appears in each camera model, so you can crop out the relevant pixels without any fancy processing. From there it won't be trivial, exactly, but you won't need AI.

You may need separate pixel ranges for night/day images for each camera; I've seen cameras that capture video with different aspect ratios at night/day (or, more specifically, different aspect ratios for with-flash and no-flash images).  If you need to determine whether an image is grayscale/color (i.e., flash/no-flash), I have a simple heuristic function for this that works pretty well.

Assuming you can manually define the relevant pixel ranges, which should just take a few minutes if it's less than a few dozen camera models, I would extract the first frame of each video to an image, then crop out the temperature/moon pixels.

Once you've cropped out the temperature/moon information, for the temperature, I would recommend using PyTesseract (an OCR library) to read the characters.  For the moon information... I would either have a small library of images for all the possible moon phases for each model, and match new images against those, or maybe - depending on the exact style they use - you could just, e.g., count the total number of white/dark pixels in that cropped moon image, and have a table that maps "percentage of white pixels" to a moon phase.  For all the cameras I've seen with a moon phase icon, this would work fine, and would be less work than a template matching approach.
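The white-pixel-fraction idea can be sketched like this; the brightness threshold and the phase bins are illustrative assumptions for a hypothetical camera model, and the crop itself is assumed to have happened already:

```python
# Map a cropped moon-phase icon to a phase label by the fraction of
# bright pixels. The 200-gray threshold and the bin edges are
# illustrative assumptions for one hypothetical camera model.
def moon_phase(icon_pixels, bright=200):
    """icon_pixels: 2D list of grayscale values from the cropped icon."""
    values = [v for row in icon_pixels for v in row]
    frac = sum(v >= bright for v in values) / len(values)
    if frac < 0.10:
        return "new"
    if frac < 0.40:
        return "crescent"
    if frac < 0.60:
        return "quarter"
    if frac < 0.90:
        return "gibbous"
    return "full"

icon = [[255, 255, 30, 30],
        [255, 255, 30, 30]]  # left half lit -> 50% bright pixels
print(moon_phase(icon))  # quarter
```

In practice you would calibrate the bin edges once per camera model by cropping a handful of icons with known phases.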

FYI I recently wrote a function to do datetime extraction from camera trap images (it would work for video frames too), but there I was trying to handle the general case where I couldn't hard-code a pixel range.  That task was both easier and harder than what you're doing here: harder because I was trying to make it work for future, unknown cameras, but easier because datetimes are relatively predictable strings, so you know when you find one, compared to, e.g., moon phase icons.

In fact maybe - as others have suggested - extracting the moon phase from pixels is unnecessary if you can extract datetimes (either from pixels or from metadata, if your metadata is reliable).

camtrapR has a function that does what you want. I have not used it myself, but it seems straightforward to use, and it can run across directories of images:

See full post