New article online: Improving biodiversity assessment via unsupervised separation of biological sounds from long-duration recordings

Our new article has been published in Scientific Reports! In this article, we introduce a novel machine learning tool, periodicity coded non-negative matrix factorization (PC-NMF). PC-NMF separates biological sounds from a noisy long-term spectrogram in an unsupervised manner, making it a useful tool for evaluating soundscape dynamics and facilitating soundscape-based biodiversity assessment.

You can download the MATLAB code of PC-NMF and the test data from the supplementary dataset of our article.

Improving biodiversity assessment via unsupervised separation of biological sounds from long-duration recordings

Scientific Reports 7, 4547 (2017) doi:10.1038/s41598-017-04790-7

Tzu-Hao Lin, Yu Tsao
Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan (R.O.C.)

Shih-Hau Fang
Department of Electrical Engineering, Yuan Ze University, Taoyuan, Taiwan (R.O.C.)

Investigating the dynamics of biodiversity via passive acoustic monitoring is a challenging task, owing to the difficulty of identifying different animal vocalizations. Several indices have been proposed to measure acoustic complexity and to predict biodiversity. Although these indices perform well under low-noise conditions, they may be biased when environmental and anthropogenic noises are involved. In this paper, we propose a periodicity coded non-negative matrix factorization (PC-NMF) for separating different sound sources from a spectrogram of long-term recordings. The PC-NMF first decomposes a spectrogram into two matrices: a spectral basis matrix and an encoding matrix. Next, on the basis of the periodicity of the encoding information, the spectral bases belonging to the same source are grouped together. Finally, distinct sources are reconstructed on the basis of the clustered basis matrix and the corresponding encoding information, and the noise components are then removed to facilitate more accurate monitoring of biological sounds. Our results show that the PC-NMF precisely enhances biological choruses, effectively suppressing environmental and anthropogenic noises in marine and terrestrial recordings without a need for training data. These results may improve the behavioural assessment of calling animals and facilitate the investigation of the interactions between different sound sources within an ecosystem.
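For readers who would like to experiment before downloading the supplementary MATLAB code, the following is a minimal Python sketch of the PC-NMF idea as described above. The use of scikit-learn's NMF, the FFT-based periodicity features, and the two-cluster k-means grouping are our own illustrative choices, not the authors' reference implementation.

```python
# Minimal sketch of the PC-NMF idea (not the authors' reference implementation).
# Assumes a non-negative long-term spectrogram V with shape (n_freq, n_time).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

def pc_nmf_sketch(V, n_bases=20, random_state=0):
    # 1) Decompose the spectrogram: V ~ W (spectral bases) @ H (encodings).
    model = NMF(n_components=n_bases, init="nndsvda", max_iter=500,
                random_state=random_state)
    W = model.fit_transform(V)          # (n_freq, n_bases)
    H = model.components_               # (n_bases, n_time)

    # 2) Describe each basis by the periodicity of its encoding
    #    (magnitude spectrum of each row of H, DC bin removed).
    P = np.abs(np.fft.rfft(H - H.mean(axis=1, keepdims=True), axis=1))[:, 1:]
    P /= P.sum(axis=1, keepdims=True) + 1e-12

    # 3) Group bases with similar periodicity into two clusters
    #    (e.g., a periodic biological chorus vs. aperiodic noise).
    labels = KMeans(n_clusters=2, n_init=10,
                    random_state=random_state).fit_predict(P)

    # 4) Reconstruct one spectrogram per cluster of bases.
    sources = [W[:, labels == k] @ H[labels == k, :] for k in (0, 1)]
    return sources, labels
```

Which cluster is biological can then be judged from the dominant periodicity of its encodings (for example, a clear 24-hour cycle in a long-duration recording).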


Listening to the Sounds of Nature Through the Four Seasons

"Listening: the Life and Stories of the Forest" lecture series

With autonomous recorders running 365 days a year, from morning to night, recording a 5-minute file every 30 minutes, we have the chance to get to know Nature Valley (自然谷) through its sounds: the wind in the grass, the songs of swallows and warblers, and every roar and cry in between. The recorders capture the acoustic fluctuations of the land across the four seasons, and the resulting spectrograms further let us see how the weather changes throughout the year.

Time: Thursday, June 22, 7:00-9:30 PM

Venue: Tsing Hua Salon, 1F, National Tsing Hua University Library (No. 101, Sec. 2, Guangfu Rd., Hsinchu City)

Registration: click here to register for the lecture

If you are interested in this lecture, you are also welcome to read the article 用「聲物」識生物 以自動錄音聆聽自然谷之聲 (Knowing wildlife by its sounds: listening to Nature Valley through autonomous recordings), in which we briefly share how autonomous recorders can be used to explore the rich biodiversity of Nature Valley. You can also visit our interactive web page to explore the valley's many sound-making creatures!

2017 Marine Science Annual Meeting (2017年海洋科學年會)

2017/5/4-5 @ National Sun Yat-sen University

Listening to the messages of the ocean: applying deep learning to analyze the dynamics of marine soundscapes

Tzu-Hao Lin, Yu Tsao
Research Center for Information Technology Innovation, Academia Sinica

Passive acoustic monitoring has been widely applied in studies of the marine environment and ecology. The environmental and animal sounds captured in long-term recordings have deepened our understanding of marine ecosystems, and many studies have further examined the impact of anthropogenic noise on marine life. However, most previous analyses of marine soundscapes have focused on the time-frequency characteristics of noise and have relied on rule-based detectors to search for the sounds of marine animals. Because marine soundscapes are strongly shaped by topography, climate, biological communities, and human activities, spectrogram analysis may fail to describe multiple sound sources that occur simultaneously, and detector performance changes as the noise varies. To effectively separate the various sound sources in a marine soundscape, this study applies non-negative matrix factorization (NMF) and its variants to long-term spectrograms, decomposing the input data into a feature matrix and an encoding matrix. Although a single-layer NMF can, after many iterations, roughly learn the spectral features of each source in the feature matrix and their temporal intensities in the encoding matrix, it still struggles to separate multiple overlapping sources. In this study, multiple layers are pre-trained separately and then stacked into a deep learning architecture, with the number of bases in the feature matrix gradually reduced from layer to layer. Using the error between the input data and the data reconstructed from the final layer, the model parameters of each layer are iteratively fine-tuned to achieve the best source separation. We analyzed marine soundscapes from sites with different ambient noise characteristics. The results show that, without any recognition templates or data labels, deep learning can effectively separate the major sound sources in the ocean: fish choruses, snapping shrimp pulses, shipping noise, and environmental sounds. The learned feature matrices can also serve as recognition templates for semi-supervised analysis of large volumes of online data. By separating sound sources with deep learning, we will be able to assess the complex structure of marine soundscapes more effectively, and thereby investigate changes in the marine environment and ecosystem as well as the impact of human development.
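As a rough illustration of the layer-wise idea described above, here is a minimal Python sketch of stacked NMF with progressively fewer bases per layer. The layer sizes are arbitrary, and the final joint fine-tuning by back-propagating the reconstruction error is omitted; this is a simplification of ours, not the deep architecture used in the study.

```python
# Sketch of stacked (multi-layer) NMF with progressively fewer bases per layer.
# Each layer is pre-trained on the encoding matrix of the previous layer;
# the joint fine-tuning described in the abstract is omitted for brevity.
import numpy as np
from sklearn.decomposition import NMF

def stacked_nmf(V, layer_sizes=(40, 20, 8), random_state=0):
    """V: non-negative long-term spectrogram, shape (n_freq, n_time)."""
    bases, X = [], V
    for k in layer_sizes:
        layer = NMF(n_components=k, init="nndsvda", max_iter=400,
                    random_state=random_state)
        W = layer.fit_transform(X)      # bases of this layer
        H = layer.components_           # encodings passed to the next layer
        bases.append(W)
        X = H
    # Map the deepest encodings back to the input domain for reconstruction.
    W_total = bases[0]
    for W in bases[1:]:
        W_total = W_total @ W
    V_hat = W_total @ X
    return bases, X, V_hat
```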

International Symposium on Grids & Clouds 2017

2017/3/5-10 @ Academia Sinica, Taipei, Taiwan

Listening to the ecosystem: the integration of machine learning and a long-term soundscape monitoring network

Tzu-Hao Lin, Yu Tsao
Research Center for Information Technology Innovation, Academia Sinica

Yu-Huang Wang
Taiwan Biodiversity Information Facility, Biodiversity Research Center, Academia Sinica

Han-Wei Yen
Academia Sinica Grid Computing

Information on the variability of the environment and biodiversity is essential for conservation management. In recent years, soundscape monitoring has been proposed as a new approach to assessing the dynamics of biodiversity. A soundscape is the collection of biological sounds, environmental sounds, and anthropogenic noise, which provides essential information about the natural environment, the behavior of calling animals, and human activities. Recent developments in recording networks facilitate field surveys in remote forests and deep marine environments. However, analyzing big acoustic data remains challenging because of the lack of a sufficient database for recognizing various animal vocalizations. Therefore, we have developed three tools for analyzing and visualizing soundscape data: (1) a long-term spectrogram viewer, (2) a biological chorus detector, and (3) a soundscape event classifier. The long-term spectrogram viewer helps users visualize weeks or months of recordings and evaluate the dynamics of the soundscape. The biological chorus detector can automatically recognize biological choruses without any sound template. With the soundscape event classifier, we can separate biological choruses from non-biological noise in a long-term spectrogram and identify various biological events in an unsupervised manner. We have applied these tools to terrestrial and marine recordings collected in Taiwan to investigate the variability of the environment and biodiversity. In the future, we will integrate these tools with the Asian Soundscape monitoring network. Through open soundscape data, we hope to provide ecological researchers and citizens with an interactive platform for studying the dynamics of ecosystems and the interactions among the acoustic environment, biodiversity, and human activities.
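To illustrate the kind of data structure the long-term spectrogram viewer operates on, the sketch below compresses audio into one median spectrum per 5-minute block. The block length, FFT size, and use of scipy are our assumptions for illustration, not the tool's actual implementation.

```python
# Sketch: build a long-term spectrogram (one median spectrum per 5-min block).
import numpy as np
from scipy import signal

def long_term_spectrogram(x, fs, block_sec=300, nfft=1024):
    """x: 1-D audio signal, fs: sampling rate (Hz)."""
    samples_per_block = int(block_sec * fs)
    columns = []
    for start in range(0, len(x) - samples_per_block + 1, samples_per_block):
        block = x[start:start + samples_per_block]
        f, _, Sxx = signal.spectrogram(block, fs=fs, nperseg=nfft,
                                       noverlap=nfft // 2)
        columns.append(np.median(Sxx, axis=1))   # median suppresses transients
    # Rows: frequency bins; columns: 5-min blocks spanning weeks or months.
    return f, np.column_stack(columns)
```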

5th Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan

2016/11/28-12/2 @ Honolulu, USA

Acoustic response of Indo-Pacific humpback dolphins to the variability of marine soundscape

Tzu-Hao Lin, Yu Tsao
Research Center for Information Technology Innovation, Academia Sinica

Shih-Hau Fang
Department of Electrical Engineering, Yuan Ze University

Chih-Kai Yang, Lien-Siang Chou
Institute of Ecology and Evolutionary Biology, National Taiwan University

Marine mammals can adjust their vocal behaviors when they encounter anthropogenic noise, and the acoustic divergence among different populations has also been attributed to the effect of ambient noise. Recent studies have discovered that the marine soundscape is highly dynamic; however, it remains unclear how marine mammals alter their vocal behaviors under various acoustic environments. In this study, autonomous sound recorders were deployed in the waters off western Taiwan between 2012 and 2015. Soundscape scenes were classified in an unsupervised manner according to acoustic features measured in each 5-min interval. Non-negative matrix factorization was used to separate the different scenes and to recover the temporal occurrence of each soundscape scene. Echolocation clicks and whistles of Indo-Pacific humpback dolphins, the only marine mammal species occurring in the study area, were automatically detected and analyzed. The preliminary results indicate that the soundscape scenes dominated by biological sounds are correlated with the acoustic detection rate of humpback dolphins. In addition, the dolphin whistles are more complex when the prey-associated scene is prominent in the local soundscape. In the future, the soundscape information may be used to predict the occurrence and habitat use of marine mammals.
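As a rough illustration of how scene activations can be related to dolphin detections, the sketch below correlates each NMF encoding row with a per-interval detection count. The variable names and the choice of Spearman correlation are our own assumptions, not the analysis used in the study.

```python
# Sketch: relate soundscape-scene activations to dolphin detection rates.
# H: NMF encoding matrix (n_scenes, n_intervals), one column per 5-min interval.
# detections: number of detected dolphin sounds in each 5-min interval.
import numpy as np
from scipy.stats import spearmanr

def scene_detection_correlation(H, detections):
    results = []
    for k, activation in enumerate(H):
        rho, p = spearmanr(activation, detections)
        results.append((k, rho, p))
    return results  # one (scene index, rho, p-value) triple per soundscape scene
```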

2017 Symposium on Animal Behavior and Ecology (2017年動物行為生態研討會)

2017/1/23-24 @ National Sun Yat-sen University, Kaohsiung

Applying machine learning to investigate the relationship between marine soundscape dynamics and the vocal activity of Indo-Pacific humpback dolphins

Tzu-Hao Lin, Yu Tsao
Research Center for Information Technology Innovation, Academia Sinica

Shih-Hau Fang
Department of Electrical Engineering, Yuan Ze University

Cetacean vocal behavior is highly variable: different populations may change their whistle characteristics under different ambient sound conditions, and may also alter their call structure when encountering anthropogenic noise. The marine soundscape, composed of environmental sounds, animal sounds, and anthropogenic noise, is itself highly variable. Although many studies have examined cetacean vocalizations in relation to a single sound source, it remains unclear how cetaceans change their behavior in a dynamic marine soundscape where multiple sources overlap. In this study, underwater recorders were used to collect long-term recordings in the waters off Miaoli during 2014. We first applied automatic detectors to find the underwater sounds of Indo-Pacific humpback dolphins, and then used non-negative matrix factorization to learn the features of the major sound sources in the marine soundscape. The unsupervised learner effectively decomposes the long-term spectrogram and visualizes the relative changes of the major sources, including croaker choruses, snapping shrimp sounds, and environmental and anthropogenic noise. After analyzing the soundscape and dolphin sounds with generalized additive models, we found that the detection rate and complexity of humpback dolphin sounds correlate differently with each sound source. These results show that, once machine learning has separated the various sources in the soundscape, the interactions between animals and each sound source can be examined effectively. In the future, the information embedded in the soundscape may also serve as ecological remote-sensing data for predicting animal activity.
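To make the modelling step concrete, below is a minimal Python sketch of fitting a generalized additive model of dolphin detections against the separated source activations. The pygam library, the spline terms, and the variable names are our own illustrative assumptions, not the analysis code used in the study.

```python
# Sketch: generalized additive model of dolphin detections vs. source activations.
# Requires the pygam package (pip install pygam); illustrative analysis only.
from pygam import PoissonGAM, s

# X: activations of the separated sources per 5-min interval, with columns
#    such as [croaker chorus, snapping shrimp, ambient/anthropogenic noise].
# y: number of dolphin sound detections in each interval.
def fit_detection_gam(X, y):
    gam = PoissonGAM(s(0) + s(1) + s(2)).fit(X, y)
    return gam  # gam.summary() reports the smooth term fitted for each source
```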

Ecoacoustics 2016

2016/6/5-8 @ University of Michigan

Investigation on the dynamics of soundscape by using unsupervised detection and classification algorithms

Tzu-Hao Lin, Lien-Siang Chou
Institute of Ecology and Evolutionary Biology, National Taiwan University, Republic of China (Taiwan)

Yu-Huang Wang
Biodiversity Research Center, Academia Sinica, Republic of China (Taiwan)

The soundscape has been proposed as a potential information source for studying the variability of biodiversity. However, soundscape analysis is a challenging task when there is no sufficient database for recognizing the various sounds collected in long-duration recordings. Previous studies have measured several acoustic diversity indices to quantify variation in biodiversity, but these indices remain difficult to interpret without ground truth. In this study, we propose to analyze the composition of soundscape scenes and visualize the dynamics of the soundscape by using unsupervised detection and classification algorithms. Different soundscape scenes were classified according to the tonal sounds, pulsed sounds, and acoustic features obtained from long-term spectrograms. By adjusting the proportion of variation explained by the classification results, the number of soundscape scenes can be determined automatically. The unsupervised classifier has been employed to analyze the soundscape dynamics in several forests and shallow marine environments in Taiwan. Our results demonstrate that the seasonal and diurnal changing patterns of geophony, biophony, and anthrophony can be effectively investigated. In addition, spatial changes in the soundscape can be discriminated according to the composition of soundscape scenes. After the biophony scenes have been identified, we can apply the same classifier again to measure the complexity of biological sounds and examine the variability of biodiversity. The current approach provides researchers and managers with a visualization platform for monitoring the dynamics of the soundscape and studying the interactions among the acoustic environment, biodiversity, and human activities in the future.
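The sketch below shows one way to pick the number of soundscape scenes from a target proportion of explained variation, using k-means on per-interval acoustic features. The 0.9 threshold and the use of k-means are our own assumptions for illustration, not the classifier used in the study.

```python
# Sketch: choose the number of soundscape scenes by the variation explained.
import numpy as np
from sklearn.cluster import KMeans

def choose_n_scenes(features, explained_target=0.9, max_scenes=20, seed=0):
    """features: (n_intervals, n_features) acoustic features per recording interval."""
    total_var = np.sum((features - features.mean(axis=0)) ** 2)
    for k in range(2, max_scenes + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)
        explained = 1.0 - km.inertia_ / total_var   # between-cluster variation
        if explained >= explained_target:
            return k, km.labels_
    return max_scenes, km.labels_
```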