Show simple item record

dc.contributor: Universitat Ramon Llull. La Salle
dc.contributor: University of Milano Bicocca
dc.contributor.author: Vidaña Vila, Ester
dc.contributor.author: Brambilla, Giovanni
dc.contributor.author: Alsina-Pagès, Rosa Ma
dc.date.accessioned: 2025-10-03T05:58:08Z
dc.date.available: 2025-10-03T05:58:08Z
dc.date.created: 2024-07-24
dc.date.issued: 2025-04-17
dc.identifier.issn: 2084-879X
dc.identifier.uri: http://hdl.handle.net/20.500.14342/5560
dc.description.abstract: Urban environments are characterized by a complex interplay of various sound sources, which significantly influence the overall soundscape quality. This study presents a methodology that combines the intermittency ratio (IR) metric for acoustic event detection with deep learning (DL) techniques for the classification of the sound sources associated with these events. The aim is to provide an automated tool for detecting and categorizing polyphonic acoustic events, thereby enhancing our ability to assess and manage environmental noise. Using a dataset collected in the city center of Barcelona, our results demonstrate the effectiveness of the IR metric in detecting events from diverse categories. Specifically, the IR captures the temporal variations of sound pressure level caused by significant noise events, enabling their detection but providing no information about the associated sound sources. To address this limitation, the DL-based classification system, which uses a MobileNet convolutional neural network, shows promise in identifying foreground sound sources. Our findings highlight the potential of DL techniques to automate the classification of sound sources, providing valuable insights into the acoustic environment. The proposed methodology of combining the two techniques above represents a step forward in automating acoustic event detection and classification in urban soundscapes and in providing important information for managing noise mitigation actions.
dc.format.extent: 15 p.
dc.language.iso: eng
dc.publisher: De Gruyter
dc.relation.ispartof: Noise Mapping, 2025, 12, 20240014
dc.rights: © The author(s)
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject.other: Intermittency ratio
dc.subject.other: Sound event detection
dc.subject.other: Convolutional neural network
dc.subject.other: Sound source identification
dc.subject.other: Urban noise
dc.title: Sound event detection by intermittency ratio criterium and source classification by deep learning techniques
dc.type: info:eu-repo/semantics/article
dc.rights.accessLevel: info:eu-repo/semantics/openAccess
dc.embargo.terms: cap
dc.subject.udc: 004
dc.subject.udc: 531/534
dc.subject.udc: 62
dc.identifier.doi: https://doi.org/10.1515/noise-2024-0014
dc.description.version: info:eu-repo/semantics/publishedVersion
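The abstract describes detecting events with the intermittency ratio (IR), which quantifies how much of the total acoustic energy is contributed by short, event-like level exceedances. As a rough illustration only, the sketch below computes an IR-style value from a series of 1-second A-weighted equivalent levels, assuming the common definition in the literature (events are the seconds whose level exceeds the overall Leq plus a margin, here a hypothetical 3 dB); the paper's exact parameters are not given here.

```python
import numpy as np

def intermittency_ratio(laeq_1s, c_db=3.0):
    """IR-style metric: percentage of total acoustic energy contributed
    by 1-second intervals whose level exceeds the overall Leq + c_db.
    `c_db` is an assumed margin, not taken from the paper."""
    laeq_1s = np.asarray(laeq_1s, dtype=float)
    energy = 10.0 ** (laeq_1s / 10.0)            # per-second linear energy
    leq_total = 10.0 * np.log10(energy.mean())   # overall equivalent level, dB
    threshold = leq_total + c_db                 # event detection threshold, dB
    event_energy = energy[laeq_1s > threshold].sum()
    return 100.0 * event_energy / energy.sum()
```

A steady level yields an IR near 0 (no second stands out above the overall Leq), while quiet background punctuated by loud pass-by events yields an IR near 100, which is why the metric flags events without saying anything about their source.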


Files in this item

 

This item appears in the following Collection(s)


© The author(s)
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0/