Exploring Marine Soundscapes in the Anthropocene: machine learning applications for the analysis of large passive acoustic datasets

When:
10:30, Thursday 11 May 2023 EDT (30 minutes)
Soundscapes – which can be defined by the spatial, temporal, and frequency attributes of ambient sound, together with the types of sources contributing to it – are changing at a rapid pace in the Anthropocene. With increasing anthropogenic intrusion into ocean ecosystems, noise-generating activities, both on their own and in combination with other stressors, can have negative impacts on marine life (Duarte et al., 2021). Human-induced changes to ocean soundscapes have been shown to have negative effects on marine invertebrates, fish, and mammals (Kunc & Schmidt, 2019), highlighting an urgent need to gather evidence on the impacts of anthropogenic noise on marine life. Passive acoustic monitoring (PAM) is becoming one of the most widely used tools for documenting marine biodiversity, monitoring species presence and seasonal movements, and characterizing anthropogenic acoustic contributions and their impacts on marine environments. Using PAM to answer ecological questions has several advantages: multiple species can be studied simultaneously and at multiple temporal scales (hourly, daily, monthly, and yearly trends); it allows monitoring of otherwise hardly accessible environments (e.g., the ocean floor); and it makes it possible to explore the relationships between organisms and biogeochemical processes, among others (Ross et al., 2023). However, detecting and classifying marine sound sources (e.g., whales and dolphins, fish, ships, seismic shots), and understanding how they relate to ecological processes, remains an open challenge for researchers working with marine PAM datasets.

This study presents a machine learning (ML) approach for characterizing marine sound sources and investigating ecological relationships using large PAM datasets. In this study, I applied the methods described by Sethi et al. (2020) to a dataset of marine PAM recordings collected in Placentia Bay (Newfoundland) during the summer of 2019. The audio was processed with a pre-trained general-purpose acoustic classification model, VGGish, which converts each audio file into a vector of 128 learned acoustic features. The features were then reduced from 128 to two dimensions with a dimensionality reduction technique (UMAP) and plotted in relation to a set of environmental variables (wave height, current direction and speed, surface temperature), the presence of humpback whale vocalizations, and anthropogenic noise levels.
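The pipeline described above (embed each audio clip as a 128-dimensional feature vector, then reduce the features to two dimensions for plotting) can be sketched as follows. This is a minimal, dependency-free illustration, not the study's implementation: a crude time-averaged log-power spectrum stands in for the pretrained VGGish embedding (whose model weights are not bundled here), and PCA stands in for UMAP (which requires the `umap-learn` package); the function names and parameters are hypothetical.

```python
import numpy as np

def extract_features(audio, n_fft=512, hop=256, n_bands=128):
    """Crude 128-dim embedding of one clip: framed FFT power spectrum,
    averaged over time and pooled into n_bands frequency bands.
    (Stand-in for VGGish, which learns its 128 features from data.)"""
    frames = [audio[i:i + n_fft] for i in range(0, len(audio) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames) * np.hanning(n_fft), axis=1)) ** 2
    logspec = np.log(spec.mean(axis=0) + 1e-10)      # time-averaged log power
    edges = np.linspace(0, len(logspec), n_bands + 1).astype(int)
    return np.array([logspec[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

def reduce_to_2d(features):
    """Project the (clips x 128) feature matrix to 2-D with PCA via SVD.
    The study used UMAP; PCA keeps this sketch dependency-free."""
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T

# Ten 1-second synthetic clips at 16 kHz stand in for the PAM recordings.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(16000) for _ in range(10)]
feats = np.vstack([extract_features(c) for c in clips])  # shape (10, 128)
coords = reduce_to_2d(feats)                             # shape (10, 2)
print(feats.shape, coords.shape)
```

In practice, each 2-D point (one per audio clip) would then be coloured by an environmental variable, anthropogenic noise level, or detected humpback whale presence to reveal structure in the soundscape.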

The results show how acoustic feature extraction, visualization, and analysis can help unpack the environmental information contained in PAM datasets at multiple scales and for multiple purposes, ranging from predicting environmental conditions to investigating marine mammal behavior.
