Soundscape conference @ Academia Sinica

We will host the first soundscape conference at ISGC 2019 on April 1st and 2nd. Speakers from Taiwan, Japan, the US, the Philippines, Malaysia, and Vietnam will contribute. Please join us if you are interested in soundscapes and their potential applications in ecological monitoring.

Keynote speakers:
Dr. Bryan Pijanowski (Purdue University)
Dr. Mao-Ning Tuanmu (Academia Sinica)

Date: Monday – Tuesday, 1 – 2 April 2019
Venue: 3F, Building for Humanities and Social Science, Academia Sinica

I will also give a talk titled “Exploring ecosystem dynamics by hypothesis-driven soundscape information retrieval”.

Soundscape information retrieval refers to techniques for extracting meaningful information about geophysical, biological, and anthropogenic activities from field recordings. Supervised source separation and audio recognition techniques have been widely employed in the past, but their performance depends on the size of the training database and the complexity of the test data. To counter this issue, unsupervised learning approaches, such as blind source separation and clustering tools, have recently been introduced to analyze the dynamics of marine and terrestrial soundscapes. However, the performance of unsupervised learning relies on a proper hypothesis about the input data, and it remains a challenge for ecological researchers to integrate such hypotheses into soundscape information retrieval. In this presentation, we will demonstrate the integration of the acoustic niche hypothesis, which predicts that soniferous species avoid acoustic competition by shifting their acoustic niches in time or frequency, into the analysis of soundscape dynamics in the shallow waters off western Taiwan. In the future, more advanced soundscape information retrieval techniques will be necessary to facilitate soundscape-based ecosystem monitoring, and domain knowledge from ecology and bioacoustics will be essential for their development.


Two presentations at the 10th International Conference on Ecological Informatics!

Information retrieval from marine soundscape by using machine learning-based source separation

Tzu-Hao Lin 1, Tomonari Akamatsu 2, Yu Tsao 3, Katsunori Fujikura 1

1 Department of Marine Biodiversity Research, Japan Agency for Marine-Earth Science and Technology, Japan
2 National Research Institute of Fisheries Science, Japan Fisheries Research and Education Agency, Japan
3 Research Center for Information Technology Innovation, Academia Sinica, Taiwan

In remote sensing of marine ecosystems, visual information retrieval is limited by the low visibility of the ocean environment. In recent years, the marine soundscape has been considered an acoustic sensing platform for the marine ecosystem. By listening to environmental sounds, biological sounds, and human-made noise, it is possible to acoustically identify various geophysical events, soniferous marine animals, and anthropogenic activities. However, sound detection and classification remain challenging due to the lack of an underwater audio recognition database and the simultaneous interference of multiple sound sources. To facilitate the analysis of marine soundscapes, we have employed information retrieval techniques based on non-negative matrix factorization (NMF) to separate sound sources with unique spectral-temporal patterns in an unsupervised manner. NMF is a self-learning algorithm that decomposes an input matrix into a spectral feature matrix and a temporal encoding matrix. We can therefore stack two or more layers of NMF to learn the spectral-temporal modulation of k sound sources without any learning database [1]. In this presentation, we will demonstrate the application of NMF to the separation of simultaneous sound sources appearing on a long-term spectrogram. In a shallow-water soundscape, the relative change of a fish chorus can be effectively quantified even during periods with strong mooring noise [2]. In a deep-sea soundscape, cetacean vocalizations, an unknown biological chorus, environmental sounds, and systematic noise can be efficiently separated [3]. In addition, the features learned during blind source separation can serve as prior information for supervised source separation. The self-adaptation mechanism during iterative learning helps find similar sound sources in other acoustic datasets containing unknown noise types.
Our results suggest that NMF-based source separation can facilitate the analysis of soundscape variability and the establishment of an audio recognition database. It will therefore be feasible to investigate the acoustic interactions among geophysical events, soniferous marine animals, and anthropogenic activities in long-duration underwater recordings.
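As a rough illustration of the decomposition described above, here is a minimal NMF with multiplicative updates in NumPy. The toy "long-term spectrogram", matrix sizes, and iteration count are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nmf(V, k, n_iter=200, eps=1e-9, seed=0):
    """Decompose V (freq x time) into W (freq x k spectral bases) and
    H (k x time temporal activations) using multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update spectral bases
    return W, H

# Toy spectrogram: a periodic chorus plus a continuous noise band,
# each occupying different frequency bins
t = np.arange(100)
chorus = np.outer([1.0, 0.0, 0.0, 1.0], (np.sin(2 * np.pi * t / 24) > 0).astype(float))
noise = np.outer([0.0, 1.0, 1.0, 0.0], np.ones(100))
V = chorus + noise
W, H = nmf(V, k=2)
error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the toy mixture is exactly the sum of two non-negative rank-one sources, the two learned bases recover them with a small reconstruction error.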

Improving acoustic monitoring of biodiversity using deep learning-based source separation algorithms

Mao-Ning Tuanmu1, Tzu-Hao Lin2, Joe Chun-Chia Huang1, Yu Tsao3, Chia-Yun Lee1

1Biodiversity Research Center, Academia Sinica, Taiwan
2Department of Marine Biodiversity Research, Japan Agency for Marine-Earth Science and Technology, Japan
3Research Center for Information Technology Innovation, Academia Sinica, Taiwan

Passive acoustic monitoring of the environment has been suggested as an effective tool for investigating the dynamics of biodiversity across spatial and temporal scales. Recent developments in automatic recorders have allowed environmental acoustic data to be collected unattended over long durations. However, a major challenge for acoustic monitoring is identifying the sounds of target taxa in recordings that usually contain undesired signals from non-target sources. In addition, high variation in the characteristics of target sounds, the co-occurrence of sounds from multiple target taxa, and a lack of reference data make it even more difficult to separate acoustic signals from different sources. To overcome this issue, we developed an unsupervised source separation algorithm based on multi-layer (deep) non-negative matrix factorization (NMF). Using reference echolocation calls of 13 bat species, we evaluated the performance of the multi-layer NMF in separating species-specific calls. Results showed that the multi-layer NMF, especially when pre-trained with reference calls, outperformed the conventional supervised single-layer NMF. We also evaluated the performance of the multi-layer NMF in identifying different types of bat calls in recordings collected in the field, and found it comparable to human observers. These results suggest that the proposed multi-layer NMF approach can effectively separate the acoustic signals of different taxa from long-duration field recordings in an unsupervised manner. The approach can thus improve the applicability of passive acoustic monitoring as a tool to investigate the responses of biodiversity to the changing environment.
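The stacking idea behind multi-layer NMF can be sketched as follows: each layer re-factorizes the previous layer's activation matrix, so low-level spectral bases are grouped into a smaller number of higher-level sources. The layer sizes and random toy data below are assumptions for illustration, not the study's configuration:

```python
import numpy as np

def nmf(V, k, n_iter=300, eps=1e-9, seed=0):
    """Single NMF layer: V ~ W @ H, fitted with multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def multilayer_nmf(V, layer_ks):
    """Stack NMF layers: each layer factorizes the previous layer's
    activations, yielding progressively coarser source groupings."""
    Ws, H = [], V
    for k in layer_ks:
        W, H = nmf(H, k)
        Ws.append(W)
    return Ws, H

# Hypothetical call spectrogram: 8 frequency bins x 60 frames
rng = np.random.default_rng(1)
V = rng.random((8, 60))
# First layer learns 6 spectral bases; second groups them into 2 sources
Ws, H = multilayer_nmf(V, layer_ks=[6, 2])
```

The product `Ws[0] @ Ws[1]` then gives the effective spectral signature of each of the two top-level sources, while `H` gives their activations over time.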

Monitoring of coral reef ecosystem: an integrated approach of marine soundscape and machine learning

Presented at the International Symposium on Grids & Clouds 2018

Tzu-Hao Lin1, Tomonari Akamatsu2, Frederic Sinniger3, Saki Harii3, Yu Tsao1

1Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
2National Research Institute of Fisheries Science, Japan Fisheries Research and Education Agency, Yokohama, Japan
3Tropical Biosphere Research Center, University of the Ryukyus, Okinawa, Japan

Coral reefs are among the most biologically diverse marine ecosystems; however, they are vulnerable to environmental changes and impacts. Information on the variability of the environment and biodiversity is therefore essential for the conservation management of coral reefs. In this study, a soundscape monitoring network covering shallow and mesophotic coral reefs was established in Okinawa, Japan. Three autonomous sound recorders have been deployed at water depths of 1.5 m, 20 m, and 40 m since May 2017. To investigate soundscape variability, we applied periodicity-coded non-negative matrix factorization to separate biological sounds from the other noise sources displayed on long-term spectrograms. The separation results indicate that the coral reef soundscape varied among locations. At 1.5 m depth, biological sounds were dominated by snapping shrimp sounds and transient fish calls; although the specific source remains unknown, noise was clearly driven by tidal activity. At 20 m and 40 m depths, biological sounds were dominated by nighttime fish choruses, and noise was primarily related to shipping activities. Furthermore, the clustering results indicate that the complexity of biological sounds was higher on mesophotic coral reefs than on shallow-water reefs. Our study demonstrates that integrating machine learning into soundscape analysis is an efficient way to interpret the variability of biological sounds and of environmental and anthropogenic noise. The conservation management of coral reefs, especially rarely studied ones such as mesophotic reefs, can thus be facilitated by long-term monitoring of the coral reef soundscape.
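The periodicity-coded step can be sketched in miniature: after NMF, each component's temporal activation is scored by how much of its power falls at a target period (e.g., a daily cycle), and components are grouped into "biological" versus "noise" accordingly. The threshold, period, and toy activations below are illustrative assumptions, not the published method's parameters:

```python
import numpy as np

def periodicity_score(h, period):
    """Fraction of an activation row's spectral power at a target period."""
    spec = np.abs(np.fft.rfft(h - h.mean())) ** 2
    freqs = np.fft.rfftfreq(len(h))
    target = np.argmin(np.abs(freqs - 1.0 / period))
    return spec[target] / (spec.sum() + 1e-12)

# Toy activations: one cyclic chorus (period 24 samples), one aperiodic noise
rng = np.random.default_rng(1)
H = np.vstack([
    1 + np.sin(2 * np.pi * np.arange(240) / 24),  # chorus with a 24-sample cycle
    rng.random(240),                              # broadband aperiodic noise
])
scores = [periodicity_score(h, 24) for h in H]
labels = ["biological" if s > 0.3 else "noise" for s in scores]
print(labels)  # → ['biological', 'noise']
```

The chorus concentrates nearly all its power at the 24-sample period, while the noise spreads its power across all frequencies, so a simple threshold separates the two groups.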

You can also check the slides of this talk.

PNC 2017 Annual Conference and Joint Meetings

2017/11/7-9 @ Tainan, Taiwan

Computing biodiversity change via a soundscape monitoring network

Tzu-Hao Lin, Yu Tsao
Research Center for Information Technology Innovation, Academia Sinica

Yu-Huang Wang
Taiwan Academy of Ecology

Han-Wei Yen
Academia Sinica Grid Computing Centre

Sheng-Shan Lu
Taiwan Forestry Research Institute

A monitoring network for biodiversity change is essential for wildlife conservation. In recent years, many soundscape monitoring projects have been carried out to investigate the diversity of vocalizing animals. However, acoustic-based biodiversity assessment remains challenging due to the lack of a sufficient recognition database and the difficulty of disentangling mixed sound sources. In 2014, the Asian Soundscape monitoring project was initiated in Taiwan. So far, there are 15 recording sites in Taiwan and three in Southeast Asia, with more than 20,000 hours of recordings archived in the Asian Soundscape. In this study, we employed visualization of long-duration recordings, blind source separation, and clustering techniques to investigate the spatio-temporal variations of forest biodiversity in the Triangle Mountain, Lienhuachih, and Taipingshan areas. With blind source separation, biological sounds, which show a prominent diurnal occurrence pattern, can be separated from environmental sounds without any recognition database. Clusters of biological sounds can then be effectively identified and used to measure daily changes in bioacoustic diversity. Our results show that bioacoustic diversity was higher in the evergreen broad-leaved forest, whereas seasonal variation in bioacoustic diversity was most evident in the high-elevation coniferous forest. This study demonstrates that a suitable integration of machine learning and ecoacoustics can facilitate the evaluation of biodiversity change. In addition to biological activities, environmental variability can also be measured from soundscape information. In the future, the Asian Soundscape will serve not only as an open database of soundscape recordings but also as a provider of tools for analyzing the interactions between biodiversity, the environment, and human activities.
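A daily bioacoustic-diversity measure of the kind described above could, for example, be a Shannon index computed over the cluster labels assigned to detected biological sounds. The cluster labels and counts below are hypothetical, chosen only to show the calculation:

```python
import numpy as np
from collections import Counter

def shannon_diversity(labels):
    """Shannon index over the cluster labels of detected biological sounds."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical one-day detections at two forest sites
broadleaf = ["frog", "cicada", "bird_a", "bird_b", "frog", "cicada", "bird_a"]
conifer = ["bird_a", "bird_a", "bird_a", "cicada"]
print(shannon_diversity(broadleaf) > shannon_diversity(conifer))  # → True
```

A site with more sound clusters, more evenly represented, scores higher; tracking this index day by day gives the "daily change in bioacoustic diversity" without needing species-level identification.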

If you are interested in this research, please check the full paper published in PNC 2017.

5th Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan

2016/11/28-12/2 @ Honolulu, USA

Acoustic response of Indo-Pacific humpback dolphins to the variability of marine soundscape

Tzu-Hao Lin, Yu Tsao
Research Center for Information Technology Innovation, Academia Sinica

Shih-Hau Fang
Department of Electrical Engineering, Yuan Ze University

Chih-Kai Yang, Lien-Siang Chou
Institute of Ecology and Evolutionary Biology, National Taiwan University

Marine mammals can adjust their vocal behaviors when they encounter anthropogenic noise, and acoustic divergence among populations has also been attributed to differences in ambient noise. Recent studies have shown that the marine soundscape is highly dynamic; however, it remains unclear how marine mammals alter their vocal behaviors under various acoustic environments. In this study, autonomous sound recorders were deployed in the waters off western Taiwan between 2012 and 2015. Soundscape scenes were classified in an unsupervised manner according to acoustic features measured in each 5-min interval. Non-negative matrix factorization was used to separate the scenes and to infer the temporal occurrence of each one. Echolocation clicks and whistles of Indo-Pacific humpback dolphins, the only marine mammal species occurring in the study area, were automatically detected and analyzed. Preliminary results indicate that soundscape scenes dominated by biological sounds are correlated with the acoustic detection rate of humpback dolphins. Moreover, dolphin whistles are more complex when the prey-associated scene is prominent in the local soundscape. In the future, soundscape information may be used to predict the occurrence and habitat use of marine mammals.


2017/1/23-24 @ National Sun Yat-sen University, Kaohsiung

Coastal and Nearshore Environments: Land-Ocean Interactions in the Changing Coastal Zones of Taiwan:
Scientific Basis and Societal Engagements