discussion / AI for Conservation  / 26 January 2024

Tools for automating image augmentation 

Does anyone know of tools to automate image augmentation and manipulation? I wish to train ML image recognition models with images in which the target animal (and false targets) has been cropped out of originals and placed in other images with different backgrounds. If such tools could also simulate lighting conditions on the target, based on the lighting in the new background, it would be super nice.

I am probably wishing for too much but if it is available, it would surely save me some time constructing these images manually. 

Lars Holst Hansen
Aarhus University
Biologist and Research Technician working with ecosystem monitoring and research at Zackenberg Research Station in Greenland

Seems like I should be able to make something usable in python using rembg
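For what it's worth, once rembg has produced an RGBA cutout, pasting it onto a new background is just an alpha composite. A minimal numpy sketch of that step (with a synthetic cutout standing in for rembg's output, so nothing here is rembg-specific):

```python
import numpy as np

def paste_cutout(cutout_rgba, background_rgb, x, y):
    """Alpha-composite an RGBA cutout onto an RGB background at (x, y)."""
    h, w = cutout_rgba.shape[:2]
    alpha = cutout_rgba[:, :, 3:4].astype(np.float32) / 255.0  # (h, w, 1)
    region = background_rgb[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * cutout_rgba[:, :, :3] + (1.0 - alpha) * region
    out = background_rgb.copy()
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out

# Synthetic stand-ins: an opaque white patch on a transparent 10x10 cutout,
# pasted onto a black 64x64 background
cutout = np.zeros((10, 10, 4), np.uint8)
cutout[2:8, 2:8] = [255, 255, 255, 255]
background = np.zeros((64, 64, 3), np.uint8)

composited = paste_cutout(cutout, background, x=20, y=20)
```

Simulating the background's lighting on the target is the genuinely hard part; a plain composite like this preserves whatever lighting the cutout was captured under.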

Sometimes a little googling before posting a question here can work wonders ... ;)

There might be some challenges & risks with that approach.  ML models should ideally be trained using the same sort of data they will ultimately run against; presumably you're not intending your model to analyse 'photoshopped' images, but real unaltered images?

The problem is that the 'photoshopping' - whether it's masking & multi-image composition or any other effect - can introduce image artefacts that, even if imperceptible to humans, can be detectable by an ML model.  The model can unwittingly become trained to detect and rely on those artefacts, hurting its performance in particularly inexplicable ways when it's then used on real, unadulterated data.

That's not to say it's impossible to train models this way - the use of synthesised data is not uncommon in the field - but it has downsides.

Tangentially, the same is true of things like watermarks and overlays inserted by trail cameras - the time, weather, camera brand etc. - those should all be removed before use with ML models. Otherwise, you might think you're training a model to accurately detect wolves vs coyotes, but what you're actually training is a model which thinks canines captured on Brownings are wolves and those on Reconyx cameras are coyotes, or some such coincidental correlation that happens to be in your training data.
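Stripping those overlays is usually just a crop, since trail cameras burn the info bar into a fixed strip. A sketch assuming a bottom strip of 32 pixels (the height and position vary by camera model, so check your own footage):

```python
import numpy as np

def strip_info_bar(image, bar_height=32):
    """Remove a fixed-height info bar from the bottom of a camera-trap frame."""
    return image[:image.shape[0] - bar_height]

# Synthetic 480x640 frame with a white strip standing in for the
# timestamp/brand overlay burned into the bottom rows
frame = np.zeros((480, 640, 3), np.uint8)
frame[-32:] = 255

clean = strip_info_bar(frame)
```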

We are using the default data augmentation parameters in YOLOv8. It supports the basics: flips, distortion, rotation, and added noise.
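In ultralytics those defaults are train-time hyperparameters (e.g. `fliplr`, `degrees`, `hsv_v`, `mosaic`). As a quick illustration of what the basic operations themselves do to an image array, using numpy alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic 3-channel "image" to augment
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)

flipped = np.fliplr(img)      # horizontal flip (YOLOv8: fliplr)
rotated = np.rot90(img, k=1)  # 90-degree rotation (YOLOv8: degrees)

# Additive Gaussian noise, clipped back to the valid pixel range
noise = rng.normal(0, 10, img.shape)
noisy = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```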

It would be interesting to use YOLOv8 segmentation to extract the animal. Here is a code snippet that extracts the bears from images, with the background set to black.

# %% [markdown]
# # Animal Extraction with YOLOv8 Segmentation

# %%
from pathlib import Path

import cv2 as cv
import numpy as np

from ultralytics import YOLO

# %%
# Variables 
source = "https://upload.wikimedia.org/wikipedia/commons/e/e3/Grizzly_Bear_%28Ursus_arctos_ssp.%29.jpg"
model = "yolov8n-seg.pt"  # see other YOLOv8 segmentation models
cococlass = 21  # COCO class index for "bear"

# %%
# Load a yolov8 model 
model = YOLO(model) 

# %%
# Predict with the model
results = model(source=source, save=True)

# %%
# Source: Yolov8 Documentation
# https://docs.ultralytics.com/guides/isolating-segmentation-objects/
#  Iterate detection results (helpful for multiple images)
for r in results:
    img = np.copy(r.orig_img)
    img_name = Path(r.path).stem # source image base-name

    # Iterate each object contour (multiple detections)
    for ci, c in enumerate(r):
        # Get detection class id and name; skip anything but the target class
        cls_id = int(c.boxes.cls.tolist().pop())
        if cls_id != cococlass:
            continue
        label = c.names[cls_id]
        b_mask = np.zeros(img.shape[:2], np.uint8)

        # Create contour mask 
        contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2)
        _ = cv.drawContours(b_mask, [contour], -1, (255, 255, 255), cv.FILLED)

        # Isolate object with black background
        mask3ch = cv.cvtColor(b_mask, cv.COLOR_GRAY2BGR)
        isolated = cv.bitwise_and(mask3ch, img)

        #  Bounding box coordinates
        x1, y1, x2, y2 = c.boxes.xyxy.cpu().numpy().squeeze().astype(np.int32)

        # Crop image to object region
        iso_crop = isolated[y1:y2, x1:x2]

        # Save isolated object to file
        _ = cv.imwrite(f'{img_name}_{label}-{ci}.png', iso_crop)

# %%
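One tweak worth considering: a black background bakes the mask into the pixels, which makes later compositing onto new backgrounds harder. Saving the crop with the mask as an alpha channel keeps the cutout reusable. A sketch assuming BGR image and binary mask arrays like `img` and `b_mask` above (synthesized here so the snippet is self-contained):

```python
import numpy as np

def crop_with_alpha(img_bgr, mask, x1, y1, x2, y2):
    """Attach the binary mask as an alpha channel and crop to the box."""
    bgra = np.dstack([img_bgr, mask])  # 4 channels; mask 255 = keep pixel
    return bgra[y1:y2, x1:x2]

# Synthetic stand-ins for the arrays produced in the loop above
img = np.full((100, 100, 3), 128, np.uint8)
b_mask = np.zeros((100, 100), np.uint8)
b_mask[20:60, 30:70] = 255  # "object" region

bgra_crop = crop_with_alpha(img, b_mask, 30, 20, 70, 60)
# cv.imwrite(f'{img_name}_{label}-{ci}.png', bgra_crop) would then save
# a PNG with real transparency instead of a black background
```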