Dr. Eduardo Silva Alvarado, researcher at the European University of the Atlantic (UNEATLANTICO), is participating in a study that proposes a new methodology for the classification of bird species in order to preserve biodiversity in East Africa.
The classification of bird species is of great importance in ornithology, as it plays a fundamental role in the assessment and monitoring of environmental dynamics, including habitat modifications, migratory behaviour, pollution levels and disease occurrence. Traditional methods of classification, such as visual identification, are slow and require a high level of expertise. In contrast, audio-based classification of bird species is a promising approach that can automate species identification.
Birds are excellent indicators of environmental quality, and their accurate classification can help to understand population trends, migration patterns and ecosystem health. It is also essential for developing efficient conservation plans to prevent the extinction of endangered species. Moreover, the loss of biodiversity caused by human activities has led to a global crisis that demands the monitoring and conservation of bird species. East Africa is home to numerous bird species that play a vital role in its ecosystems and hold cultural and economic importance in the region. However, monitoring and conserving them is a challenge due to the vastness of the region, its rugged terrain and the birds' complex vocalisations.
In this context, the study in which Dr. Silva participated aimed to establish an audio-based classification system for 264 East African bird species based on their vocalisations, using modified deep transfer learning. To do this, wireless acoustic sensor networks were combined with deep learning techniques. These networks capture audio recordings of the birds' vocalisations, which are then transformed into mel spectrogram images. These images represent frequency patterns over time and serve as input for classifying bird species.
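The transformation from raw audio to a mel spectrogram can be sketched as follows. This is a minimal NumPy-only illustration of the general technique (windowed STFT power spectrum, triangular mel filterbank, log compression), not the study's actual preprocessing pipeline; the parameter values (sample rate, FFT size, hop length, number of mel bands) are illustrative assumptions.

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    """Build triangular mel filters mapping FFT bins to mel bands (illustrative)."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    # Band edges equally spaced on the mel scale, converted back to FFT bins
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):        # rising slope of the triangle
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):       # falling slope of the triangle
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(signal, sr=22050, n_fft=1024, hop=512, n_mels=64):
    """Windowed STFT power spectrum -> mel filterbank -> log scale."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)   # per-frame power spectrum
    power = np.array(frames).T                           # (n_fft//2 + 1, n_frames)
    mel = mel_filterbank(n_mels, n_fft, sr) @ power      # (n_mels, n_frames)
    return 10.0 * np.log10(mel + 1e-10)                  # log-compress, dB-like image

# Example: a 1-second 2 kHz tone as a stand-in for a bird call
sr = 22050
t = np.arange(sr) / sr
S = mel_spectrogram(np.sin(2 * np.pi * 2000 * t), sr=sr)
print(S.shape)  # (64, 42): mel bands x time frames
```

The resulting 2-D array is what gets saved or fed to the image classifier: one axis is frequency on the perceptual mel scale, the other is time, so a bird's call becomes a visual pattern.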
Three deep learning model configurations were tested for bird species classification. The baseline model, EfficientNet-B7 (a convolutional neural network), achieved a moderate accuracy of 81.82%, correctly identifying the bird species in roughly four out of five cases; however, it did not reach adequate levels in recall, F1 score and macro average accuracy. Combining this model with a long short-term memory (LSTM) network, a type of recurrent neural network (RNN), significantly improved its overall performance, raising accuracy to 83.67% along with gains in recall, F1 score, macro average accuracy and Cohen's kappa score. The recurrent layer captured temporal dependencies in the audio data, resulting in better classification results.
Similarly, pairing EfficientNet-B7 with a gated recurrent unit (GRU) improved overall performance, achieving an accuracy of 84.03% and slightly outperforming the LSTM variant, with comparable gains in recall, F1 score, macro average accuracy and Cohen's kappa score.
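How a recurrent unit captures temporal dependencies across spectrogram time frames can be illustrated with the GRU update equations. The sketch below is a minimal NumPy implementation of a single GRU cell stepped over a sequence of mel-spectrogram frames; the weights are random and the data is a stand-in, so this shows only the mechanism, not the study's trained architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state n."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # Stacked weights for the three gates: z (update), r (reset), n (candidate)
        self.W = rng.uniform(-s, s, (3 * hidden_size, input_size))
        self.U = rng.uniform(-s, s, (3 * hidden_size, hidden_size))
        self.b = np.zeros(3 * hidden_size)
        self.hidden = hidden_size

    def step(self, x, h):
        H = self.hidden
        gx = self.W @ x + self.b
        gh = self.U @ h
        z = sigmoid(gx[:H] + gh[:H])              # update gate: keep old vs. new
        r = sigmoid(gx[H:2*H] + gh[H:2*H])        # reset gate: forget old context
        n = np.tanh(gx[2*H:] + r * gh[2*H:])      # candidate state from current frame
        return (1 - z) * n + z * h                # blend previous and candidate state

# Step the cell over 42 frames of 64 mel bands (random stand-in data)
n_mels, n_frames, hidden = 64, 42, 32
frames = np.random.default_rng(1).normal(size=(n_frames, n_mels))
cell = GRUCell(n_mels, hidden)
h = np.zeros(hidden)
for x in frames:
    h = cell.step(x, h)
print(h.shape)  # (32,): final state summarising the whole call
```

The final hidden state accumulates information from every frame, which is why attaching such a layer after the convolutional feature extractor helps with calls whose identity lies in how the sound evolves over time.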
This research has significant implications for ornithology and environmental monitoring. By automating bird species identification, researchers can collect data more efficiently and accurately, enabling a deeper understanding of bird populations and their interactions with the environment. In addition, this approach can contribute to the development of conservation strategies and the assessment of environmental impacts on bird species.
In conclusion, the combination of deep transfer learning, mel spectrogram imaging and recurrent neural networks offers a promising solution for bird species classification. By harnessing the power of artificial intelligence and machine learning, we can improve the understanding of avian biodiversity and contribute to the conservation of bird species worldwide.
To learn more about this study, click here.
To read more research, consult the UNEATLANTICO repository.