AI for Conservation / Feed

Artificial intelligence is increasingly being used in the field to analyse information collected by wildlife conservationists, from camera trap and satellite images to audio recordings. AI can learn to identify which of thousands of photos contain rare species, or to pinpoint an animal call in hours of field recordings, hugely reducing the manual labour required to collect vital conservation data.

discussion

BirdWeather | PUC

Hi Everyone, I just found out about this site/network! I wanted to introduce myself - I'm the CEO of a little company called Scribe Labs. We're the small team behind...

I love the live-stream pin feature!

Hi Tim, I just discovered your great little device and am about to use it for the first time this weekend. Would love to be directly in touch, since we are testing it out as an option to recommend to our clients :) Love that it includes Australian birds! Cheers Debbie

Hi @timbirdweather I've now got them up and running and am wondering how I can provide feedback on species ID to improve the accuracy over time. It would be really powerful to have a confirmation capability when looking at the soundscape options, to confirm which of the potential species it actually is, or that it is neither, to help develop the algorithms.

Also, is it possible to connect the PUC to a mobile hotspot to gather data for a device that isn't close to wifi? And have it so that it can detect either wifi or a hotspot when in range? Thanks!

discussion

AI for Conservation!

Hi everybody! I just graduated in artificial intelligence (master's) after a bachelor's in computer engineering. I'm absolutely fascinated by nature and wildlife and I'm trying to...

discussion

Labelled Terrestrial Acoustic Datasets

Hello all, I'm working with a team to develop an on-animal acoustic monitoring collar. To save power and memory, it will have an on-board machine learning detector and classifier...

Thanks for sharing, Kim.

We're using <1 mA while processing, equating to ~9 Ah to run for a year. The battery is a Tadiran TL-5920 C 3.6 V lithium cell providing 8.6 Ah, plus we will add a small (optional) solar panel. We also plan to implement a threshold system, in which the device sleeps until the noise level crosses a certain threshold and then wakes up.

The low-power MCU we are using is the Ambiq Apollo4 (https://ambiq.com/apollo4/), which has a built-in low-power listening capability.
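For anyone checking the power budget, here is a quick back-of-the-envelope sketch in Python (the 1 mA figure is the average draw quoted above; real duty-cycling between sleep and processing would shift it):

# Battery-life arithmetic for the figures quoted above.
HOURS_PER_YEAR = 24 * 365                    # 8760 h

avg_current_ma = 1.0                         # ~1 mA average draw
charge_per_year_ah = avg_current_ma * HOURS_PER_YEAR / 1000
print(f"Charge needed per year: {charge_per_year_ah:.2f} Ah")   # ~8.76 Ah

battery_ah = 8.6                             # Tadiran TL-5920 C-cell rating
years = battery_ah / charge_per_year_ah
print(f"Runtime on one cell: {years:.2f} years")                # just under a year

This is why the optional solar panel matters: the cell alone covers just under a year at a 1 mA average.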

<1 mA certainly sounds like a breakthrough for this kind of device. I hope you are able to report back with some real-world performance information about your project @jcturn3 . Sounds very promising. Will the device run directly off the optional solar cell, or will you include a capacitor, since you cannot recharge the lithium thionyl chloride cell? I had trouble obtaining the Tadiran TL-5920 cells in Australia (they would send me old SL-2770s though), so I took a gamble on a couple of brands of Chinese cells (EVE and FANSO), which seemed to perform the same job without a hitch. Maybe in the USA you can get Israeli cells more easily than Chinese ones?

Message me if you think some feeding sounds, snoring, grooming and heart sounds of koalas would be any use for your model training.

Really interesting project, and an interesting chipset you found. With up to around 2 MB of SRAM, that's quite a lot of memory for an ultra-low-power SoC, I think.

It might also be worth thinking, while doing your research, about whether there are any other requirements people could have for such a platform, with a view towards more mass usage later. Thanks for sharing.

discussion

Successfully integrated deepfaune into video alerting system

Hi all, I've successfully integrated DeepFaune into my full-featured video alerting security system StalkedByTheState. The yellow box around the image represents the zone of...

As I understand it, DeepFaune's first pass is an object detector based on MegaDetector; @schamaille could explain it exactly. In short, though, its output is standard YOLO-like in terms of properties. From this I use standard OpenCV code to snip out the individual matches and pass them to the second stage, which is a classifier.

My code needs a bit of cleaning up before I can release it, and it needs to be made more robust for some situations. Also, I'm waiting to hear whether I got anywhere with the WILDLABS awards, as that would affect my plans going forward. That could be anything up till the end of next month, though at a wild guess I'd say next week, at the UN WWD or at the WILDLABS get-together :) Anyone else have any theories?

Also, my code is a little more complex because I abstract the interface to a network-based API.

Finally, I don't want to take the wind out of my own sails: I would like to launch my integration together with the release of the Orin-based version of my StalkedByTheState software, the usage of which I'm trying to promote. Releasing earlier takes some of the oomph out of it.

But maybe we can have a video call sometime and we can have a chat about this?

In the final DeepFaune paper, it's mentioned that the team developed their own detection model based on YOLOv8s, using the cropping information provided by MegaDetectorV5a.

Therefore, for the initial phase, I'm also utilizing the YOLO interface (from Ultralytics) to load the deepfaune-yolov8s_960.pt model and perform the prediction procedure. The results list contains one or more bounding boxes with class ID (animal, person, vehicle) and probability values.

For each object detection, I crop and resize the original image to the area of the bounding box, execute the preprocessImage transformation, and use the predictOnBatch method (both from the Classifier class, which loads deepfaune-vit_large_patch14_dinov2.lvd142m.pt in the background) to obtain species-level classification scores for each individual bounding box.

This approach could prove valuable to other users seeking to integrate two-step DeepFaune detection and classification into their pipelines or APIs.
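For anyone wanting to reproduce it, the flow described above looks roughly like the sketch below. This is a hedged illustration, not the exact code: the Classifier class (with preprocessImage and predictOnBatch) and the two model files come from the DeepFaune repository, the input file name is hypothetical, and exact tensor shapes may differ between repo versions.

from PIL import Image
from ultralytics import YOLO
from classifTools import Classifier   # ships with the DeepFaune repository

detector = YOLO("deepfaune-yolov8s_960.pt")   # step 1: animal/person/vehicle boxes
classifier = Classifier()   # step 2: loads deepfaune-vit_large_patch14_dinov2.lvd142m.pt

image = Image.open("camera_frame.jpg")        # hypothetical input image
results = detector.predict(image, imgsz=960)

for box in results[0].boxes:
    x1, y1, x2, y2 = (int(v) for v in box.xyxy[0].tolist())
    crop = image.crop((x1, y1, x2, y2))       # snip out the individual match
    tensor = classifier.preprocessImage(crop) # resize/normalise for the ViT
    scores = classifier.predictOnBatch(tensor.unsqueeze(0))  # species-level scores
    print(int(box.cls), float(box.conf), scores)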

Absolutely! I pretty much do the same thing; the resizing step, I think, relates to what I still have to do, as some large images caused my code to crash.

I want to take it one step further, and that's one of the reasons I want to talk to Microsoft: I'd like to encourage abstracting the object detection behind the network API approach I developed, as that would mean any new model anyone develops would simply work out of the box, with no additional work, with my video alerting software. To that end I need to have a chat to see if they agree on the added value; if so, they could add this wrapper around their code, and all of those models would be available to alert on and to use in simple Python scripts in other people's pipelines.

Anyway. That's the plan.

discussion

Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation (v1.0)

Welcome to Pytorch-Wildlife v1.0! At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite, share, and...

Hello @hjayanto , You are precisely the kind of collaborator we are looking to work with closely to enhance the user-friendliness of Pytorch-Wildlife in our upcoming updates. Please feel free to send us any feedback, either through GitHub issues or here! We aim to make Pytorch-Wildlife more accessible to individuals with limited to no engineering experience. Currently, we have a Hugging Face demo UI (https://huggingface.co/spaces/AndresHdzC/pytorch-wildlife) to showcase the existing functionalities in Pytorch-Wildlife. Please let us know if you encounter any issues while using the demo. We are also preparing a tutorial for those interested in Pytorch-Wildlife and will keep you updated on this!
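In the meantime, a minimal detection example gives a feel for the API. This is a sketch along the lines of the v1.0 README (the random array stands in for a real image; please check the repository for the current interface, as names may change between releases):

import numpy as np
from PytorchWildlife.models import detection as pw_detection

detection_model = pw_detection.MegaDetectorV5()   # weights download on first use

img = np.random.randn(3, 1280, 1280)              # stand-in for a real image array
result = detection_model.single_image_detection(img)
print(result)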

discussion

Needing help from the community: Bioacoustics survey

I'm reaching out because I'm currently conducting a research project titled "UX-Driven Exploration of Unsupervised Deep-Learning in Marine Mammals Bioacoustic Conservation" for my...

Was great to chat with you, Sofia, and I would encourage others in the acoustics community to help provide input for Sofia's study!

Thank you so much for your encouraging words! I'm thrilled to hear that you enjoyed our conversation, and I truly appreciate your support in spreading the word about my survey within the Acoustics community. Input from individuals like yourself is incredibly valuable to my study, and I'm eager to gather as much insight as possible. If you know of anyone else who might be interested in participating, please feel free to share the survey link with them. Once again, thank you for your support—it means a lot to me!

Best regards,
Sofia

discussion

Tools for automating image augmentation 

Does anyone know of tools to automate image augmentation and manipulation? I wish to train ML image recognition models with images in which the target animal (and false targets)...

Hi @arky !

Thanks for your reply.

I am running into pytorch/torchvision incompatibility issues when trying to run your script.

Which versions are you using?

Best regards,

Lars

@Lars_Holst_Hansen Here is the information you requested. I also run YOLOv8 in multiple remote environments without any issues. Perhaps you'll need to use a virtual environment (venv et al.) or conda to remedy the incompatibility issues.

$ yolo checks
Ultralytics YOLOv8.1.4 🚀 Python-3.10.12 torch-1.13.1+cu117 CUDA:0 (Quadro T2000, 3904MiB)
Setup complete ✅ (16 CPUs, 62.5 GB RAM, 465.0/467.9 GB disk)

OS                  Linux-6.5.0-17-generic-x86_64-with-glibc2.35
Environment         Linux
Python              3.10.12
Install             pip
RAM                 62.54 GB
CPU                 Intel Core(TM) i7-10875H 2.30GHz
CUDA                11.7

matplotlib          ✅ 3.5.1>=3.3.0
numpy               ✅ 1.26.3>=1.22.2
opencv-python       ✅ 4.7.0.72>=4.6.0
pillow              ✅ 10.2.0>=7.1.2
pyyaml              ✅ 6.0.1>=5.3.1
requests            ✅ 2.31.0>=2.23.0
scipy               ✅ 1.11.4>=1.4.1
torch               ✅ 1.13.1>=1.8.0
torchvision         ✅ 0.14.1>=0.9.0
tqdm                ✅ 4.66.1>=4.64.0
psutil              ✅ 5.9.8
py-cpuinfo          ✅ 9.0.0
thop                ✅ 0.1.1-2209072238>=0.1.1
pandas              ✅ 1.5.3>=1.1.4
seaborn             ✅ 0.12.2>=0.11.0
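If the mismatch persists inside a fresh environment, a quick check is whether the installed torch/torchvision pair matches a released combination (torch 1.13 pairs with torchvision 0.14, as in the output above). A small sketch - the pairing table below is partial and an assumption to extend from the torchvision README:

import torch
import torchvision

# Partial torch -> torchvision pairing table (extend from the torchvision README).
KNOWN_PAIRS = {"2.1": "0.16", "2.0": "0.15", "1.13": "0.14"}

torch_minor = ".".join(torch.__version__.split("+")[0].split(".")[:2])
tv_minor = ".".join(torchvision.__version__.split("+")[0].split(".")[:2])

if KNOWN_PAIRS.get(torch_minor) != tv_minor:
    print(f"Possible mismatch: torch {torch.__version__} / torchvision {torchvision.__version__}")
else:
    print("torch/torchvision pairing looks consistent")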
discussion

Mass Detection of Wildlife Snares Using Airborne Synthetic Radar

For the last year my colleagues Prof. Mike Inggs (Radar - Electrical Engineering, University of Cape Town) and...

Operating at 2 GHz, the radar penetrates vegetation, so it can see through canopy but not through tree trunks. However, snares are typically set in groups, so one could maximise the chance of locating all of them by flying a circular/spiral path after detecting a potential snare.
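To make the spiral search concrete, here is an illustrative waypoint generator (the radii, spacing and step are placeholder parameters for illustration, not figures from our system):

import math

def spiral_waypoints(cx, cy, max_radius_m=150.0, spacing_m=20.0, step_deg=20.0):
    """Archimedean spiral around (cx, cy); radius grows by spacing_m per turn."""
    waypoints, angle = [], 0.0
    while True:
        radius = spacing_m * angle / (2 * math.pi)
        if radius > max_radius_m:
            break
        waypoints.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle)))
        angle += math.radians(step_deg)
    return waypoints

# e.g. fly outwards from the detected snare position to a 150 m radius
print(len(spiral_waypoints(0.0, 0.0)), "waypoints")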

Hi David,

I assume this will only work with wire (metal) snares? We often see snares made of nylon rope (used for lucerne bales) in the field, which I assume would be missed by the radar?

Cheers,

Chavoux

Hi David, would love to collaborate with you on this topic. A few years ago Dr. Nick van Doormaal did his PhD on snaring with us, and we ran a number of experiments on the detection of snares in a real-world scenario using trained anti-poaching teams. I think it would be quite simple to replicate the study and then look at the efficacy of remote sensing vs human detection. Let me know if you are interested in chatting further!

discussion

ChatGPT for conservation

Hi, I've been wondering what this community's thoughts are on ChatGPT? I was just having a play with it and asked: "could you write me a script in python that loads photos and...

The greatest issue with ChatGPT is GIGO (Garbage in, garbage out). It doesn't matter how good the machine learning algorithm is, if it gets fed bad information (data) it will regurgitate bad information. One obvious problem is that it does not reference its information sources. So some of it might be established beyond any doubt, but then it includes something it made up out of thin air with an equally authoritative tone. Because at bottom, ChatGPT is still a dumb machine (or collection of machines) that has to be told what to do by its programmers. It can be useful, but for conservation issues that can have far-reaching implications, I will not trust it. It could be really useful with the addition of two measures (maybe one has already been implemented?):

  1. The option to show the references for all sources (for each statement that it makes; and if it makes its own logical deduction, show that explicitly).
  2. Either weighting or restricting its input to sources that have been checked (e.g. peer-reviewed articles), at least for its scientific output (maybe/hopefully Google is already doing this).

I think with the addition of these two functions it will really become useful to conservation. But we are not there yet. In the meantime it is similar to Wikipedia: maybe a good starting point for further research.

Just so you know, I uploaded both a photo without a cat and one with a cat, asked if there was a cat in the picture, and it got it correct both times.

Uploading pictures to wildlabs doesn't seem to work at this time, so I can't show you the response, but for the second picture, with the cat, it answered:

"Yes, there is a cat in this picture. It appears to be in the middle of the driveway."

You can already achieve both of them with your prompt.
Or, if you're not using ChatGPT specifically but another LLM that you can fine-tune, you can use RAG or fine-tuning to further train the model on the data you want it to extract information from.
With ChatGPT you can now create your own custom GPT.
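As a bare-bones illustration of the RAG idea: retrieve the most relevant vetted passage first, then hand only that passage to the model with its reference attached, so every statement is traceable. TF-IDF below is just a stand-in for a proper embedding index, and the two "sources" are invented for the example:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented mini-corpus standing in for a vetted, peer-reviewed library.
documents = {
    "Smith 2021": "Camera traps detected 14 leopard individuals in the study area.",
    "Jones 2023": "Acoustic sensors recorded gibbon calls at dawn in all plots.",
}

question = "What did camera traps show about leopards?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents.values())
scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]

ref, passage = list(documents.items())[int(np.argmax(scores))]

# Only the retrieved, citable passage reaches the LLM, so the answer
# can carry its reference rather than an unsourced claim.
prompt = (f"Answer using ONLY this source and cite it as [{ref}].\n"
          f"[{ref}] {passage}\n\nQuestion: {question}")
print(prompt)   # send this to ChatGPT or any other LLM of your choice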

Link

Apply Now: UW Data Science for Social Good Projects

Sign up for Data Science for Social Good 2024! This summer program is a great opportunity to get dedicated data science support on a conservation (tech) project or to get rich experience as a student in the field. More info in the link - student apps due 2/12, projects due 2/20.

discussion

Conservation Technology for Human-Wildlife Conflict in Non-Protected Areas: Advice on Generating Evidence

Hello, I am interested in human-dominated landscapes around protected areas. In my case study, the local community does not get compensation because they are unable to provide...

This is an area where my system would do very well.

Also, as you mention areas dominated by humans, there is a high likelihood that there will be enough power there to support this system, which provides very high performance and flexibility, but comes with a power cost and somewhat of a monetary cost.

Additionally, its lifeblood is generating alerts and making security and evidence gathering practical and manageable, with its flexible state management system.

Ping me offline if you would like to have a look at the system.

Hi Amit,

The most important thing is that the livestock owners contact you as soon as possible after finding the carcass. We commonly do two things if they contact us on the same day or just after the livestock was killed:

  1. Use CyberTracker (or similar software) on an Android smartphone to record all tracks, bite marks, feeding patterns and any other relevant signs of the reason for the loss, with pictures and GPS coordinates. [BTW, compensation is a big issue -- What do you do if the livestock was stolen? What do you do if a domestic animal killed the livestock? What if it died from disease or natural causes and was scavenged by carnivores afterwards?]
  2. In the case of most cats, they will hide the prey (or just mark it by covering it with grass or branches and urinating in the area). In this case you can put up a camera trap on the carcass to capture the animal when it returns to its kill (Reconyx is good if you can afford it - we use mostly Cuddeback with white flash). This will normally only work if the carcass is fresh (so that other predators cannot smell it and do not yet know where it is), so the camera only has to be up for 3-5 days max.

This is not really high-tech, but can be very useful to not only establish which predator was responsible (or if a predator was responsible), but also to record all the evidence for that.

discussion

AI volunteer work

Hello All, I have recently joined this group and, going through the current feeds and discussions, I already feel it's the right group I've been searching for, for some time. I'm a software...

discussion

Passionate engineer offering funding and tech solutions pro bono.

My name is Krasi Georgiev and I run an initiative focused on providing funding and tech solutions for stories with a real-world impact. The main reason is that I am passionate...

Hi Krasi! Greetings from Brazil!

That's a cool journey you've started! Congratulations. And I felt like theSearchLife resonates with the work I'm involved in round here. In a nutshell, I live at the heart of the largest remaining stretch of Atlantic Forest on the planet - one of the most biodiverse biomes that exist. The subregion where I live is named after and bathed by the "Rio Sagrado" (Sacred River), a magnificent water body with a very rich cultural significance to the region (it has served as a safe zone for fleeing slaves). Well, the river and the entire bioregion are currently under threat from a truly devastating railroad project which, to say the least, is planned to cut through over 100 water springs!

In the face of that, the local community (myself included) has been mobilizing to raise awareness of the issue and hopefully stop this madness (fueled by strong international forces). One of the ways we've been fighting this is by seeking recognition of the sacred river as an entity with legal rights, one that can manifest itself in court against such threats. And to illustrate what this would look like, I've been developing an AI (LLM) powered avatar for the river, which could maybe serve as its human-relatable voice. An existing prototype of the avatar is available here. It has been fine-tuned with over 20 scientific papers on the Sacred River watershed.

And right now myself and others are mobilizing to gather the conditions/resources to develop a next version of the avatar, which would include remote sensing capacities, so the avatar is directly connected to the river and can possibly write full scientific reports on its physical properties (e.g. water quality) and the surrounding biodiversity. In fact, myself and three other members of the WildLabs community have just applied to the WildLabs Grant program in order to accomplish that. Hopefully the results are positive.

Finally, it's worth mentioning that our mobilization around providing an expression medium for the river has been multimodal, including the creation of a short film based on theatrical mobilizations we did during a festival dedicated to the river and its surrounding more-than-human communities. You can check that out here:

https://vimeo.com/manage/videos/850179762

Let's chat if any of that catches your interest!

Cheers!

Hi Danilo, you seem very passionate about this initiative, which is a good start.
It is an interesting coincidence that I am starting another project for the coral reefs in the Philippines which also requires water analytics, so I can probably work on both projects at the same time.

Let's have a call and discuss; I will send you a PM with my contact details.

There is a tech glitch and I don't get email notifications from here.

discussion

Jupyter Notebook: Aquatic Computer Vision

Dive Into Underwater Computer Vision Exploration: OceanLabs Seychelles is excited to share a Jupyter notebook tailored for those intrigued by the...

This is quite interesting. Would love to see if we could improve this code using custom models and alternative ways of processing the video stream. 

This definitely seems like the community to do it. I was looking at the thread about wolf detection and it seems like people here are no strangers to image classification. A little overwhelming to be quite honest 😂

While it would be incredible to have a powerful model capable of auto-classifying everything right away and storing all the detected creatures and correlated sensor data straight into a database, I wonder whether, in remote cases where power (and therefore CPU bandwidth), data storage, and network connectivity are at a premium, it would be more valuable just to be able to highlight moments of interest for lab analysis later? Or, if you do have a cellular connection, you could download just those moments of interest and not hours and hours of footage?
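Something as cheap as background subtraction could do that first pass without running a classifier at all. A sketch with OpenCV (the file name and thresholds are placeholders to tune per site):

import cv2

cap = cv2.VideoCapture("reef_clip.mp4")   # hypothetical input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

interesting, frame_idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                        # moving pixels
    if cv2.countNonZero(mask) / mask.size > 0.02:         # >2% of frame changed
        interesting.append(frame_idx)                     # flag for later analysis
    frame_idx += 1
cap.release()

print(f"{len(interesting)} of {frame_idx} frames flagged as moments of interest")

Only the flagged frame indices (or short clips around them) would then need storing or uplinking.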

I'm working on a similar AI challenge at the moment, hoping to translate my workflow to wolves in future if needed.

We are all a little overstretched, but if there are no pressing deadlines, it should be possible to explore building an efficient object detection model and looking at suitable hardware for running these models on the edge.

article

New paper - An integrated passive acoustic monitoring and deep learning pipeline for black-and-white ruffed lemurs in Ranomafana National Park, Madagascar

We demonstrate the power of using passive acoustic monitoring & machine learning to survey species, using ruffed lemurs in southeastern Madagascar as an example.

What an awesome paper! Loved learning about such a promising research tool in PAM combined with CNNs, and that lemur vocalizations are termed "roar-shrieks" :)