Group

Data management and processing tools / Feed

Conservation tech work doesn't stop after data is collected in the field. Equally important to success is navigating data management and processing tools. For the many community members who deal with enormous datasets, this group will be an invaluable resource to trade advice, discuss workflows and tools, and share what works for you.

discussion

Image analysis with volunteers

Hello! I'm working with volunteers on a pilot project using camera traps and PAMs to monitor a mixed species waterbird colony on an Army Corps of Engineers constructed island....

2 0

I have a little experience with Timelapse and would say it is definitely worth the invested time.

The developer Saul Greenberg has made a ton of documentation on its use and is also very approachable in person, if you have any issues.

I can only highly recommend it.

See full post
discussion

Jupyter Notebook: Aquatic Computer Vision

Dive Into Underwater Computer Vision Exploration: OceanLabs Seychelles is excited to share a Jupyter notebook tailored for those intrigued by the...

3 0

This is quite interesting. Would love to see if we could improve this code using custom models and alternative ways of processing the video stream. 

This definitely seems like the community to do it. I was looking at the thread about wolf detection and it seems like people here are no strangers to image classification. A little overwhelming to be quite honest 😂

While it would be incredible to have a powerful model capable of auto-classifying everything right away and storing all the detected creatures and correlated sensor data straight into a database, I wonder whether, in remote cases where power (and therefore CPU bandwidth), data storage, and network connectivity are at a premium, it would be more valuable to just highlight moments of interest for lab analysis later. Or, if you do have a cellular connection, you could download just those moments of interest rather than hours and hours of footage.
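A minimal sketch of that "flag moments of interest" idea, using nothing more than OpenCV frame differencing - the thresholds and function name are illustrative, not taken from the OceanLabs notebook:

```python
import cv2

def flag_moments_of_interest(video_path, diff_threshold=25, min_changed_fraction=0.02):
    """Return (frame_index, timestamp_s) pairs where the scene changed noticeably.

    A cheap motion proxy for edge devices: grayscale frame differencing.
    Both thresholds are illustrative and would need tuning per site/camera.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    prev_gray, moments, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
        if prev_gray is not None:
            changed = (cv2.absdiff(gray, prev_gray) > diff_threshold).mean()
            if changed > min_changed_fraction:
                moments.append((idx, idx / fps))
        prev_gray = gray
        idx += 1
    cap.release()
    return moments
```

Only the flagged timestamps (or short clips around them) would then need to be stored or sent over a cellular link.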

I'm working on a similar AI challenge at the moment, and hoping to translate my workflow to wolves in the future if needed.

We are all a little overstretched, but if there are no pressing deadlines, it should be possible to explore building an efficient model for object detection and looking at suitable hardware for running these models on the edge.

See full post
discussion

Need advice - image management and tagging 

Hello Wildlabs, our botany team is using drones to survey vertical cliffs for rare and endangered plants. It's going well and we have been able to locate and map many new...

6 0

I have no familiarity with Lightroom, but the problem you describe sounds like a pretty typical data storage and lookup issue. This is the kind of problem that many software engineers deal with on a daily basis, and in almost every circumstance this class of problem is solved with a database.

In fact, a useful way to frame it is that the Lightroom database is not providing the feature set you need.

It seems likely that you are not looking for a software development project, and setting up your own DB would certainly require some effort. But if this is a serious issue for your work, if you hope to scale up, or if you want to bring many other participants into your project, it might make sense to have an information system that better fits your needs.

There are many different databases out there optimized for different sorts of things.  For this I might suggest taking a look at MongoDB with GridFS for a couple of reasons.

  1. It looks like your metadata is in JSON format. Many DBs are JSON-compatible, but Mongo is JSON-native: it is especially good at storing and retrieving JSON data, and its JSON search capabilities are excellent and easy to use. It looks like you could export your data directly from Lightroom into Mongo, so it might be pretty easy actually.
  2. Mongo with the GridFS package is an excellent repository for arbitrarily large image files.
  3. It is straightforward to make a Mongo database accessible via a website.
  4. It is open source (in a manner of speaking) and you can run it for free.

Disclaimer: I used to work for MongoDB.  I don't anymore and I have no vested interest at all, but they make a great product that would really crush this whole class of problem.
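For what that might look like in practice, here is a minimal sketch using pymongo and GridFS - the database, collection, and field names are assumptions for illustration, not anything Lightroom exports by default:

```python
import json

import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
db = client["cliff_surveys"]                       # database name is illustrative
fs = gridfs.GridFS(db)                             # GridFS stores arbitrarily large image files

def ingest(image_path, metadata_json_path):
    """Store one drone image plus its exported JSON metadata, linked together."""
    with open(image_path, "rb") as img:
        image_id = fs.put(img, filename=image_path)
    with open(metadata_json_path) as meta:
        record = json.load(meta)
    record["image_file_id"] = image_id             # link metadata document to the stored image
    db.photos.insert_one(record)

# Later, e.g. find every photo tagged with a given keyword (field name is an assumption):
# for doc in db.photos.find({"keywords": "target-species"}):
#     image_bytes = fs.get(doc["image_file_id"]).read()
```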

See full post
discussion

How are Outdoor Fire Detection Systems Adapted for Small Forest Areas, Considering the Predominance of Indoor Fire Detectors?

How are fire detection mechanisms tailored for outdoor environments, particularly in small forest areas, given that most fire and smoke detectors are designed for indoor use?

1 0

Fire detection is a fairly broad idea. Usually people detect the products of fire, and most often this is smoke.

Many home fire detectors in the US are ionization detectors: a small radioactive source ionizes the air inside the sensor, and smoke particles entering the chamber reduce the resulting current.

For outdoor fire detection, PM2.5 can be a very good smoke proxy, and outdoor PM2.5 sensing is pretty accessible.

This one is very popular in my area. 
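As a rough illustration of using PM2.5 as a smoke proxy, here is a minimal polling sketch - `read_pm25()` is a placeholder for whatever sensor or API you actually use, and the thresholds are illustrative rather than validated values:

```python
import time
from collections import deque

def read_pm25():
    """Placeholder: return the current PM2.5 reading (µg/m³) from your sensor or API."""
    raise NotImplementedError

def watch_for_smoke(sample_interval_s=60, baseline_window=30, spike_factor=3.0, absolute_floor=35.0):
    """Flag likely smoke when PM2.5 spikes well above its recent baseline.

    Real deployments would need site-specific tuning and sanity checks
    against fog, dust, and sensor drift.
    """
    history = deque(maxlen=baseline_window)
    while True:
        value = read_pm25()
        if history:
            baseline = sum(history) / len(history)
            if value > absolute_floor and value > spike_factor * baseline:
                print(f"Possible smoke: PM2.5={value:.0f} µg/m³ (recent baseline {baseline:.0f})")
        history.append(value)
        time.sleep(sample_interval_s)
```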

See full post
discussion

Wildlife Conservation for "Dummies"

Hello WILDLABS community, for individuals newly venturing into the realm of Wildlife Conservation, especially Software Developers, Computer Vision researchers, or...

3 4

Maybe this is obvious, but maybe it's so obvious that you could easily forget to include this in your list of recommendations: encourage them to hang out here on WILDLABS!  I say that in all seriousness: if you get some great responses here and compile them into a list, it would be easy to forget the fact that you came to WILDLABS to get those responses.

I get questions like this frequently, and my recommended entry points are always (1) attend the WILDLABS Variety Hour series, (2) lurk on WILDLABS.net, and (3) if they express a specific interest in AI, lurk on the AI for Conservation Slack.

I usually also recommend that folks visit the Work on Climate Slack and - if they live in a major city - attend one of the in-person Work on Climate events. You'll see relatively little conservation talk there, but conservation tech is just a small subset of sustainability tech. For someone new to the field who cares about environmental sustainability - even someone more interested in conservation than in other aspects of sustainability - the sheer number of opportunities in non-conservation climate tech may help them get their hands dirty more quickly than conservation specifically, especially if they're looking to make a full-time career transition. But of course, I'd rather have everyone working on conservation!

Some good overview papers I'd recommend include: 

I'd also encourage you to follow the #tech4wildlife hashtags on social media! 

I'm also here for this. This is my first comment... I've been lurking for a while.

I have 20 years of professional knowledge in design, with the bulk of that being software design. I also have a keen interest in wildlife. I've never really combined the two, and I'm starting to feel like that is a waste. I have a lot to contribute. The loss of biodiversity is terrifying me. So I'm making a plan: in 2024 I'm going to combine both.

However, if I'm honest with you, I struggle with where to start. There is such a vast amount of information out there that I find myself jumping all over the place. A lot of it is highly scientific, which is great, but I do not have a science background.

As suggested by the post title, a “Wildlife Conservation for Dummies” would be exactly what I am looking for, because in this case I'm happy to admit I am a complete dummy.

See full post
discussion

Opinions or experience with Firetail movement analysis software?

Hey everyone, does anyone here have any experience or opinions on Firetail for processing/analyzing movement data? I have always used R with all of my movement data but I have been...

7 0

Hi Travis! 

I'm a developer in the Firetail team and also worked with R a lot during my PhD. 

The goals of both projects are quite different. Using Firetail definitely does not mean you can no longer use R or vice versa. Firetail's focus is on the interactive, visual exploration and annotation of your data. It is meant to be used by scientists, conservationists or stakeholders analysing their projects. 

It may be used to pinpoint regions/time windows and visualize data suitable for downstream analysis in R, or to generate reports regularly. Firetail won't replace algorithm X using a distinct set of parameters as required by reviewer R, but it will help you understand your data and tell its story.

The basic workflows of Firetail are meant to be intuitive and we seek to support a wide range of data out of the box (plus, 1:1 customer service when you run into problems). 
We also implement additional workflows based on ideas that we receive from you all and seek to integrate interfaces to whatever upstream/downstream tools you require for your daily work.

Feel free to contact me ([email protected]) for specific questions or just use this thread :)

Best,
Tobias

Hi Tobias!

This is great to hear. This seems to be exactly what I am looking for as I approach my accelerometry data, where I want to identify certain behaviors through thresholds and then manually verify them. This sounds like a great complement to what I've done in R with the data so far. Thanks for the info! I will most definitely give this a try!
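As a rough illustration of that threshold-then-verify workflow (independent of Firetail), here is a minimal pandas sketch that flags high-activity windows from raw accelerometer axes for manual review - the column names, window sizes, and threshold are assumptions:

```python
import pandas as pd

def flag_active_windows(csv_path, window="10s", odba_threshold=0.5):
    """Return time windows whose mean ODBA exceeds a threshold, for manual review.

    Assumes a time-sorted CSV with columns 'timestamp', 'ax', 'ay', 'az' (in g);
    the threshold and window sizes depend on species and sampling rate.
    """
    df = pd.read_csv(csv_path, parse_dates=["timestamp"]).set_index("timestamp")
    axes = df[["ax", "ay", "az"]]
    static = axes.rolling("2s").mean()              # gravity/static component
    df["odba"] = (axes - static).abs().sum(axis=1)  # dynamic body acceleration
    per_window = df["odba"].resample(window).mean()
    return per_window[per_window > odba_threshold]
```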

I may take you up on the offer of emailing you with a couple of quick questions once I start (I appreciate that!).

Best,

Travis

See full post
discussion

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...

7 0

Hi Lucy

As others have mentioned, camera trap temperature readouts are inaccurate, and you have the additional problem that the camera's temperature can rise 10°C if the sun shines on it.

I would also agree with the suggestion of getting the moon phase data off the internet.

Do you need to do this for just one project?  And do you use the same camera make/model for every deployment?  Or at least a finite number of camera makes/models?  If the number of camera makes/models you need to worry about is finite, even if it's large, I wouldn't try to solve this for the general case; I would just hard-code the pixel ranges where the temperature/moon information appears in each camera model, so you can crop out the relevant pixels without any fancy processing.  From there it won't be trivial, exactly, but you won't need AI.

You may need separate pixel ranges for night/day images for each camera; I've seen cameras that capture video with different aspect ratios at night/day (or, more specifically, different aspect ratios for with-flash and no-flash images).  If you need to determine whether an image is grayscale/color (i.e., flash/no-flash), I have a simple heuristic function for this that works pretty well.
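The exact heuristic isn't shown here, but a common version simply checks how far apart the colour channels are - a minimal sketch with an illustrative tolerance:

```python
import cv2
import numpy as np

def looks_grayscale(image_path, tolerance=2.0):
    """Rough check for flash/IR (grayscale) vs daylight (colour) frames.

    A true grayscale image saved as RGB has three essentially identical
    channels; 'tolerance' is an illustrative cutoff in intensity units.
    """
    img = cv2.imread(image_path).astype(np.float32)  # loaded as BGR
    b, g, r = cv2.split(img)
    mean_diff = (np.abs(b - g).mean() + np.abs(b - r).mean()) / 2.0
    return mean_diff < tolerance
```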

Assuming you can manually define the relevant pixel ranges, which should just take a few minutes if it's less than a few dozen camera models, I would extract the first frame of each video to an image, then crop out the temperature/moon pixels.

Once you've cropped out the temperature/moon information, for the temperature, I would recommend using PyTesseract (an OCR library) to read the characters.  For the moon information... I would either have a small library of images for all the possible moon phases for each model, and match new images against those, or maybe - depending on the exact style they use - you could just, e.g., count the total number of white/dark pixels in that cropped moon image, and have a table that maps "percentage of white pixels" to a moon phase.  For all the cameras I've seen with a moon phase icon, this would work fine, and would be less work than a template matching approach.
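A minimal sketch of that crop-and-OCR approach with OpenCV and pytesseract - the camera name, pixel ranges, and thresholds below are placeholders you would fill in once per camera make/model:

```python
import cv2
import pytesseract

# Per-camera crop boxes (y0, y1, x0, x1) for the temperature readout and moon icon.
# These coordinates are placeholders - record them once per make/model.
TEMP_CROP = {"BrandX-ModelY": (0, 40, 500, 620)}
MOON_CROP = {"BrandX-ModelY": (0, 40, 620, 660)}

def first_frame(video_path):
    """Extract the first frame of a video as a BGR array (or None on failure)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def read_temperature(frame, camera_model):
    """Crop the temperature overlay and OCR it with Tesseract."""
    y0, y1, x0, x1 = TEMP_CROP[camera_model]
    gray = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(
        gray, config="--psm 7 -c tessedit_char_whitelist=0123456789-.CF"
    )
    return text.strip()

def moon_white_fraction(frame, camera_model, white_threshold=200):
    """Fraction of bright pixels in the moon icon; map it to a phase via a lookup table."""
    y0, y1, x0, x1 = MOON_CROP[camera_model]
    icon = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    return (icon > white_threshold).mean()
```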

FYI I recently wrote a function to do datetime extraction from camera trap images (it would work for video frames too), but there I was trying to handle the general case where I couldn't hard-code a pixel range.  That task was both easier and harder than what you're doing here: harder because I was trying to make it work for future, unknown cameras, but easier because datetimes are relatively predictable strings, so you know when you find one, compared to, e.g., moon phase icons.

In fact maybe - as others have suggested - extracting the moon phase from pixels is unnecessary if you can extract datetimes (either from pixels or from metadata, if your metadata is reliable).

camtrapR has a function that does what you want. I have not used it myself, but it seems straightforward to use and it can run across directories of images:

https://jniedballa.github.io/camtrapR/reference/OCRdataFields.html

See full post
discussion

DeepFaune: a software for AI-based identification of mammals in camera-trap pictures and videos

Hello everyone, just wanted to advertise here the DeepFaune initiative that I lead with Vincent Miele. We're building AI-based species recognition models for camera-trap...

6 4

Hello to all, I'm new to this group. This is very exciting technology. Can it work for ID of individual animals? We are interested in AI for identifying individual jaguars (spots) and Andean bears (face characteristics). Any recommendations? Contacts? Thanks!

German

That's a very interesting question and use case (I'm not from DeepFaune). I'm playing with this at the moment and intend to integrate it into my other security software, which can capture and send video alerts. I should have this working within a few weeks, I think.

The structure of that software is two-stage: the first stage identifies that there is an animal and its bounding box, and then there is a classification stage. I intend to merge the two stages so that it behaves like a YOLO model, with the output being bounding boxes as well as what type of animal each one is.

However, my security software can cascade models. So if you were able to train a single-stage classifier that identifies your particular bears, you could cascade all of these models in my software to generate an alert with a video saying which bear it was.
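For illustration, a minimal sketch of that two-stage cascade merged into a single YOLO-style output - `detect_animals` and `classify_species` are placeholders for whatever detector and classifier you plug in (e.g. the DeepFaune stages), not their actual APIs:

```python
def detect_animals(frame):
    """Stage 1 placeholder: return a list of (x0, y0, x1, y1, confidence) animal boxes (pixel ints)."""
    raise NotImplementedError

def classify_species(crop):
    """Stage 2 placeholder: return (label, confidence) for a cropped animal."""
    raise NotImplementedError

def detect_and_classify(frame, det_threshold=0.5):
    """Merge both stages so the output looks like a single detector:
    one (box, label, combined confidence) tuple per animal."""
    results = []
    for (x0, y0, x1, y1, det_conf) in detect_animals(frame):
        if det_conf < det_threshold:
            continue
        label, cls_conf = classify_species(frame[y0:y1, x0:x1])
        results.append(((x0, y0, x1, y1), label, det_conf * cls_conf))
    return results
```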

Hi @GermanFore ,

I work with the BearID Project on individual identification of brown bears from faces. More recently we worked on face detection across all bear species and ran some tests with identifying Andean bears. You can find details in the paper I linked below. We plan to do more work with Andean bears in 2024.

I would love to connect with you. I'll send you a message with my email address.

Regards,

Ed

See full post
careers

ZSL Research Fellow (x3 roles)

Zoological Society of London
The Institute of Zoology (IoZ), the research division of the Zoological Society of London (ZSL), is seeking to fill three new permanent positions by recruiting outstanding early-career researchers as Research Fellows (...

0
See full post
Link

Pre-register for the Basics of R for Ecologists

Enrollment for Luke Negoita's Basics of R (for ecologists) course is opening in a few days, but this will be the LAST TIME (for a while) that he'll be admitting new students into the course. If you've needed to learn R for ecology, this course has everything you need (his words!)

0
Link

Marine Flyways - Seabird Tracking Database

To celebrate #WMBD, BirdLife is excited to share the newly identified Marine Flyways! Seabird tracking data for 48 long-distance migratory species, shared by over 60 researchers, have revealed SIX Marine Flyways. They've created an awesome animation to go along with it!

0