Show simple item record

dc.contributor  Universitat Ramon Llull. La Salle
dc.contributor  Naturalis Biodiversity Center
dc.contributor  Tilburg University
dc.contributor  Queen Mary University of London
dc.contributor  Centre National de la Recherche Scientifique (CNRS)
dc.contributor  Nantes Université
dc.contributor  University of Konstanz
dc.contributor  Max Planck Institute of Animal Behavior
dc.contributor  Landesbund für Vogel- und Naturschutz
dc.contributor  Naturkundemuseum Bayern/BIOTOPIA Lab
dc.contributor  AGH University of Science and Technology
dc.contributor  University of Salford
dc.contributor  University of Surrey
dc.contributor  University of Oxford
dc.contributor  Aarhus University
dc.contributor  Syracuse University
dc.contributor  Woods Hole Oceanographic Institution
dc.contributor.author  Nolasco, Ines
dc.contributor.author  Singh, Shubhr
dc.contributor.author  Morfi, Veronica
dc.contributor.author  Lostanlen, Vincent
dc.contributor.author  Strandburg-Peshkin, Ariana
dc.contributor.author  Vidaña Vila, Ester
dc.contributor.author  Gil, Lisa
dc.contributor.author  Pamula, Hanna
dc.contributor.author  Whitehead, Helen
dc.contributor.author  Kiskin, Ivan
dc.contributor.author  Jensen, Frants
dc.contributor.author  Morford, Joe
dc.contributor.author  Emmerson, Michael G.
dc.contributor.author  Versace, Elisabetta
dc.contributor.author  Grout, Emily
dc.contributor.author  Liu, Haohe
dc.contributor.author  Ghani, Burooj
dc.contributor.author  Stowell, Dan
dc.date.accessioned  2025-07-07T18:32:48Z
dc.date.available  2025-07-07T18:32:48Z
dc.date.created  2023-05
dc.date.issued  2023-11
dc.identifier.issn  1574-9541
dc.identifier.uri  http://hdl.handle.net/20.500.14342/5358
dc.description.abstract  Automatic detection and classification of animal sounds has many applications in biodiversity monitoring and animal behavior. In the past twenty years, the volume of digitised wildlife sound available has massively increased, and automatic classification through deep learning now shows strong results. However, bioacoustics is not a single task but a vast range of small-scale tasks (such as individual ID, call type, emotional indication) with wide variety in data characteristics, and most bioacoustic tasks do not come with strongly-labelled training data. The standard paradigm of supervised learning, focussed on a single large-scale dataset and/or a generic pre-trained algorithm, is insufficient. In this work we recast bioacoustic sound event detection within the AI framework of few-shot learning. We adapt this framework to sound event detection, such that a system can be given the annotated start/end times of as few as 5 events, and can then detect events in long-duration audio—even when the sound category was not known at the time of algorithm training. We introduce a collection of open datasets designed to strongly test a system's ability to perform few-shot sound event detection, and we present the results of a public contest to address the task. Our analysis shows that prototypical networks are a very commonly used strategy and they perform well when enhanced with adaptations for general characteristics of animal sounds. However, systems with high time-resolution capabilities perform the best in this challenge. We demonstrate that widely-varying sound event durations are an important factor in performance, as well as non-stationarity, i.e. gradual changes in conditions throughout the duration of a recording. For fine-grained bioacoustic recognition tasks without massive annotated training data, our analysis demonstrates that few-shot sound event detection is a powerful new method, strongly outperforming traditional signal-processing detection methods in the fully automated scenario.
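For illustration, below is a minimal Python sketch of the prototypical-network idea mentioned in the abstract: prototypes are built from the embeddings of a handful of annotated example events plus background segments, and each query frame is labelled by its nearest prototype. The embedding function, frame representation, threshold-free nearest-prototype rule and toy data are assumptions chosen for demonstration, not the authors' implementation.

# Hypothetical sketch of prototypical-network inference for few-shot
# sound event detection (not the system described in the paper).
import numpy as np

def embed(frames: np.ndarray) -> np.ndarray:
    """Placeholder embedding: in practice a trained neural encoder would
    map each audio frame (here a feature vector) to an embedding."""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((frames.shape[1], 16))
    return frames @ proj

def prototype(embeddings: np.ndarray) -> np.ndarray:
    """A class prototype is the mean of its support-set embeddings."""
    return embeddings.mean(axis=0)

def detect(query_frames, pos_support, neg_support, embed_fn=embed):
    """Label each query frame as event (True) or background (False) by its
    Euclidean distance to the positive vs. negative prototype."""
    pos_proto = prototype(embed_fn(pos_support))   # from ~5 annotated events
    neg_proto = prototype(embed_fn(neg_support))   # from background audio
    q = embed_fn(query_frames)
    d_pos = np.linalg.norm(q - pos_proto, axis=1)
    d_neg = np.linalg.norm(q - neg_proto, axis=1)
    return d_pos < d_neg

# Toy usage: 5 positive support frames, 20 background frames, 100 query frames.
rng = np.random.default_rng(1)
pos = rng.standard_normal((5, 40)) + 1.0
neg = rng.standard_normal((20, 40))
query = rng.standard_normal((100, 40))
print(detect(query, pos, neg).sum(), "frames flagged as the target event")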
dc.format.extent  18 p.
dc.language.iso  eng
dc.publisher  Elsevier
dc.relation.ispartof  Ecological Informatics
dc.rights  Attribution 4.0 International
dc.rights  © The author(s)
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/
dc.subject.other  Bioacoustics
dc.subject.other  Deep Learning
dc.subject.other  Event Detection
dc.subject.other  Few-shot Learning
dc.title  Learning to detect an animal sound from five examples
dc.type  info:eu-repo/semantics/article
dc.rights.accessLevel  info:eu-repo/semantics/openAccess
dc.embargo.terms  none
dc.subject.udc  004
dc.subject.udc  57
dc.identifier.doi  https://doi.org/10.1016/j.ecoinf.2023.102258
dc.description.version  info:eu-repo/semantics/publishedVersion


Files in this item

 

This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0/