A two-stage approach to automatically detect and classify woodpecker (Fam. Picidae) sounds.
Publication date
2020-09
ISSN
1872-910X
Abstract
Inventorying and monitoring which bird species inhabit a specific area provides rich and reliable information regarding its conservation status and other meaningful biological parameters. Typically, this surveying process is carried out manually by ornithologists and birdwatchers who spend long periods of time in the areas of interest trying to identify which species occur. Such a methodology is based on the experts’ own knowledge, experience, visualization and hearing skills, which results in an expensive, subjective and error-prone process. The purpose of this paper is to present a computing-friendly system able to automatically detect and classify woodpecker acoustic signals from a real-world environment. More specifically, the proposed architecture features a two-stage Learning Classifier System that uses (1) Mel Frequency Cepstral Coefficients and Zero Crossing Rate to detect bird sounds over environmental noise, and (2) Linear Predictive Cepstral Coefficients, Perceptual Linear Predictive Coefficients and Mel Frequency Cepstral Coefficients to identify the bird species and sound type (i.e., vocal sounds such as advertising calls, excitement calls, call notes and drumming events) associated with that bird sound. Experiments conducted on a data set of the known woodpecker species of the family Picidae that live in the Iberian Peninsula resulted in an overall accuracy of 94.02%, which endorses the feasibility of this proposal and encourages practitioners to work in this direction.
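For illustration only, the sketch below shows how a two-stage pipeline of this kind could be assembled in Python with librosa and scikit-learn. The feature choices follow the abstract, but the paper's actual implementation is not reproduced here: a random forest stands in for the Learning Classifier System, plain LPC coefficients stand in for the LPCC/PLP features (which librosa does not provide), and every function and variable name is hypothetical.

```python
# Minimal two-stage detection/classification sketch (assumptions noted above).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def stage1_features(y, sr):
    """Detection features: MFCCs + zero crossing rate, averaged per clip."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, frames)
    zcr = librosa.feature.zero_crossing_rate(y)          # shape (1, frames)
    return np.concatenate([mfcc.mean(axis=1), zcr.mean(axis=1)])

def stage2_features(y, sr, order=12):
    """Classification features: MFCCs + LPC coefficients (a stand-in for the
    LPCC/PLP features mentioned in the abstract)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    lpc = librosa.lpc(y, order=order)                     # shape (order + 1,)
    return np.concatenate([mfcc.mean(axis=1), lpc])

# Stage 1: bird sound vs. environmental noise (binary decision).
detector = RandomForestClassifier(n_estimators=200, random_state=0)
# Stage 2: species and sound type (call, drumming, ...) for detected sounds.
classifier = RandomForestClassifier(n_estimators=200, random_state=0)

# Hypothetical training data: lists of (waveform, sample_rate) clips.
# detector.fit([stage1_features(y, sr) for y, sr in field_clips], is_bird_labels)
# classifier.fit([stage2_features(y, sr) for y, sr in bird_clips], species_labels)

def predict(y, sr):
    """Run detection first; only classify clips flagged as bird sounds."""
    if detector.predict([stage1_features(y, sr)])[0] == 1:
        return classifier.predict([stage2_features(y, sr)])[0]
    return None  # treated as background noise
```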
Document Type
Article
Document version
Accepted version
Language
English
Subject (CDU)
004 - Computer science and technology. Computing. Data processing
57 - Biological sciences in general
68 - Industries, crafts and trades for finished or assembled articles
Pages
35 p.
Publisher
Elsevier
Is part of
Applied Acoustics: Vol. 166, Sept. 2020
Rights
© Elsevier