Great to know you are in the domain. To be honest, my analysis so far indicates that smoothing via convolution becomes a problem when taking a DSP approach to the spectrum: the raw spectrum is too jagged to match, but once I convolve it to smooth it, I just get a generic, noise-shaped spectrum. I also see variance between spectra sampled from the same source recording. I am using fs = 44100 Hz and initially looked at the full band (0 Hz up to the 22.05 kHz Nyquist limit), although I also tried restricting to 100 Hz - 9 kHz with little success.
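For anyone following along, here is a minimal numpy sketch of the step described above: take a magnitude spectrum, keep only the 100 Hz - 9 kHz band, and smooth it by convolving with a normalised window. The function name, the Hann kernel, and the 31-bin width are my own assumptions for illustration, not the poster's exact pipeline; the kernel width is exactly the knob that trades jaggedness against the "generic noise shape" problem.

```python
import numpy as np

FS = 44100  # sample rate from the thread


def band_limited_smoothed_spectrum(x, fs=FS, lo=100.0, hi=9000.0, smooth_bins=31):
    """Magnitude spectrum restricted to [lo, hi] Hz, smoothed by
    convolution with a unit-gain Hann kernel.

    smooth_bins is the trade-off knob: too wide and every spectrum
    collapses toward the same generic noise shape.
    (Hypothetical helper, not the poster's actual code.)
    """
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))   # windowed magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    keep = (freqs >= lo) & (freqs <= hi)              # 100 Hz - 9 kHz band
    kernel = np.hanning(smooth_bins)
    kernel /= kernel.sum()                            # unit gain, so levels are preserved
    smoothed = np.convolve(X[keep], kernel, mode="same")
    return freqs[keep], smoothed


# Toy usage: a 3 kHz tone buried in noise should still peak near 3 kHz
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
x = np.sin(2 * np.pi * 3000 * t) + 0.5 * rng.standard_normal(FS)
f, S = band_limited_smoothed_spectrum(x)
print(f[S.argmax()])  # peak frequency, close to 3000 Hz
```

With a 1 s buffer at 44100 Hz the bin spacing is 1 Hz, so `smooth_bins=31` averages over roughly a 31 Hz neighbourhood; scaling the kernel with the buffer length keeps the smoothing bandwidth constant across recordings.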
My design outline is: I need to detect when a flock of one particular avian species is present, know when it is absent, and distinguish the presence of other flocks of birds. I do not need to identify those other flocks, but they are sometimes similar in size and therefore possibly in call range. A sort of "We / Not-We" approach.
In other words, I am comparing the gestalt sound of the flock, not individual calls.
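One common way to make that "We / Not-We" gestalt comparison concrete is to reduce each recording to a long-term average spectrum and compare fingerprints with cosine similarity. The sketch below is an assumption-laden illustration, not the poster's method: the function names, the frame/hop sizes, the log compression, and the 0.95 threshold are all hypothetical and would need tuning against real flock recordings.

```python
import numpy as np


def gestalt_signature(x, fs=44100, frame=4096, hop=2048, lo=100.0, hi=9000.0):
    """Long-term average log-magnitude spectrum over the 100 Hz - 9 kHz band,
    normalised to unit length, used as a single 'gestalt' fingerprint.
    (Hypothetical helper for illustration.)
    """
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    keep = (freqs >= lo) & (freqs <= hi)
    win = np.hanning(frame)
    frames = [np.abs(np.fft.rfft(x[i:i + frame] * win))[keep]
              for i in range(0, len(x) - frame + 1, hop)]
    sig = np.log1p(np.mean(frames, axis=0))   # averaging tames per-frame variance
    return sig / np.linalg.norm(sig)


def is_we(candidate_sig, reference_sig, threshold=0.95):
    """'We / Not-We' decision: cosine similarity of unit-norm signatures
    against the reference flock. The threshold is a tunable assumption."""
    return float(candidate_sig @ reference_sig) >= threshold


# Toy usage: two draws of the same noise process should match each other,
# while a pure 3 kHz tone should not match either.
rng = np.random.default_rng(1)
ref = gestalt_signature(rng.standard_normal(44100))
same = gestalt_signature(rng.standard_normal(44100))
tone = gestalt_signature(np.sin(2 * np.pi * 3000 * np.arange(44100) / 44100))
print(is_we(same, ref), is_we(tone, ref))
```

Averaging many frames before comparing is also a direct answer to the variance-between-samples problem: per-frame spectral jitter shrinks roughly with the square root of the number of frames averaged.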
Plus: I am currently using a Raspberry Pi as the fog node, but I see the TinyML examples target an Arduino Nano; can I use my Arduino Uno instead? I am interested in the power savings, but I need a robust microphone rig, which I currently get via USB.
I will check out your tutorial, many thanks!