ASA Victoria presentations are now available online!

Two presentations were given at the ASA meeting in Victoria. You can now access the slides and recordings via the ASA website:

Machine Learning and Data Science Approaches in Ocean Acoustics II:
Applying machine-learning based source separation techniques in the analysis of marine soundscapes

Hot Topics in Acoustics:
Information retrieval from a soundscape by using blind source separation and clustering

Feel free to contact me if you have any questions.

New article online: Robust S1 and S2 heart sound recognition based on spectral restoration and multi-style training

Robust S1 and S2 heart sound recognition based on spectral restoration and multi-style training

Biomedical Signal Processing and Control, 49: 173-180 (2019)
https://www.sciencedirect.com/science/article/pii/S1746809418302787?via%3Dihub

Yu Tsao, Tzu-Hao Lin
Research Center for Information Technology Innovation (CITI) at Academia Sinica, Taipei, Taiwan

Fei Chen
Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Xueyuan Road 1088#, Xili, Nanshan District, Shenzhen, China

Yun-Fan Chang, Chui-Hsuan Cheng, Kun-Hsi Tsai
iMediPlus Inc., Hsinchu, Taiwan

Recently, we proposed a deep learning-based heart sound recognition framework that provides high recognition performance under clean testing conditions. However, recognition performance can degrade notably when noise is present in the recording environment. This study investigates a spectral restoration algorithm that reduces noise components in heart sound signals to achieve robust S1 and S2 recognition in real-world scenarios. In addition to the spectral restoration algorithm, a multi-style training strategy is adopted to train a robust acoustic model by incorporating acoustic observations from both original and restored heart sound signals. We term the proposed method SRMT (spectral restoration and multi-style training). The experimental procedure was as follows: First, an electronic stethoscope was used to record actual heart sounds, and noisy signals were artificially generated at different signal-to-noise ratios (SNRs). Second, an acoustic model based on deep neural networks (DNNs) was trained using original heart sounds and heart sounds processed through spectral restoration. Third, the performance of the trained model was evaluated using the following metrics: accuracy, precision, recall, specificity, and F-measure. The results confirm the effectiveness of the proposed method for recognizing heart sounds in noisy environments. An acoustic model trained with SRMT outperforms one trained on clean data, with a 2.36% average accuracy improvement (from 85.44% to 87.80%) over the clean, 20 dB, 15 dB, 10 dB, 5 dB, and 0 dB SNR conditions; the improvements are more notable in low-SNR conditions: the accuracy improvement is 3.87% (from 82.83% to 86.70%) in the 0 dB SNR condition.

Before February 03, 2019, you can download a PDF copy from this link.
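To make the restoration idea concrete, here is a minimal Python sketch of spectral subtraction combined with multi-style data pooling. This is an illustration only, not the algorithm used in the paper; the toy spectrogram, the noise estimate, and the spectral floor value are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy magnitude spectrogram: 5 frames x 8 frequency bins (hypothetical data)
clean = rng.uniform(1.0, 2.0, size=(5, 8))
noise = np.full((5, 8), 0.6)              # stationary noise floor
noisy = clean + noise

# Estimate the noise spectrum (here taken directly from the known noise)
noise_est = noise.mean(axis=0)

def spectral_subtraction(mag, noise_spectrum, floor=0.05):
    """Subtract the estimated noise spectrum from each frame,
    clipping at a small spectral floor to avoid negative magnitudes."""
    restored = mag - noise_spectrum
    return np.maximum(restored, floor * mag)

restored = spectral_subtraction(noisy, noise_est)

# Multi-style training: pool original and restored observations,
# so the acoustic model sees both conditions during training
train_set = np.vstack([noisy, restored])
```

In this toy setting the noise is perfectly stationary, so subtraction recovers the clean frames exactly; in real recordings the noise estimate is imperfect, which is why pooling both original and restored observations helps the model stay robust.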

Toolbox online: Soundscape_Viewer

Last year, we published the periodicity-coded non-negative matrix factorization (PC-NMF), which has been demonstrated to work in various ecosystems. Recently, we integrated PC-NMF and k-means clustering into a toolbox for soundscape information retrieval. With this toolbox, one can explore soundscape variability and recognize different audio events without a recognition database.

Please go to the following link to download the code for Soundscape_Viewer. If you register an account on CodeOcean, you will be able to upload your long-term spectrograms and run the analysis in the cloud. If you have a MATLAB license, you can also execute Soundscape_viewer.m to launch a graphical user interface.

https://codeocean.com/2018/11/16/demonstration-of-soundscape-separation-by-using-the-soundscape-viewer/code
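For readers curious about the idea behind the toolbox, below is a minimal Python sketch combining NMF-based separation with k-means clustering of the time frames. It is a simplified stand-in rather than the PC-NMF implementation (PC-NMF additionally exploits the periodicity of each source), and the toy spectrogram and parameter choices are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(V, k, n_iter=500, eps=1e-9):
    """Plain NMF via multiplicative updates: V (freq x time) ~ W @ H."""
    W = rng.uniform(size=(V.shape[0], k))
    H = rng.uniform(size=(k, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def kmeans(X, k, n_iter=50):
    """Minimal k-means on rows of X, with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy long-term spectrogram: two sources active in different periods
source_a = np.outer([1.0, 0.0, 0.0, 1.0], [1, 1, 1, 0, 0, 0])
source_b = np.outer([0.0, 1.0, 1.0, 0.0], [0, 0, 0, 1, 1, 1])
V = source_a + source_b + 0.01            # small constant background

W, H = nmf(V, k=2)
labels = kmeans(H.T, k=2)  # cluster time frames by their source encodings
```

Clustering the temporal encodings (rather than the raw spectrogram columns) is what lets the toolbox group recording periods by which sound sources are active in them.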

New article online: The Effects of Continuous Acoustic Stress on ROS Levels and Antioxidant-related Gene Expression in the Black Porgy

The Effects of Continuous Acoustic Stress on ROS Levels and Antioxidant-related Gene Expression in the Black Porgy (Acanthopagrus schlegelii)

Zoological Studies 57: 59 (2018)
http://zoolstud.sinica.edu.tw/Journals/57/57-59.html

Hao-Yi Chang, Yi Ta Shao
Institute of Marine Biology, National Taiwan Ocean University

Tzu-Hao Lin
Department of Marine Biodiversity Research, Japan Agency for Marine-Earth Science and Technology

Kazuhiko Anraku
Fisheries Department, Kagoshima University

Short-term exposure to strong underwater noise is known to seriously impact fish. However, the chronic physiological effects of continuous exposure to weak noise, e.g., the operation noise of offshore wind farms (OWFs), remain unclear. Since more and more OWFs will be built in the near future, their operation noise is an emerging ecological issue. To investigate the long-term physiological effects of such underwater noise on fish, black porgies (Acanthopagrus schlegelii) were exposed to two types of simulated wind farm noise, quiet (QC: 109 dB re 1 μPa at 125.4 Hz; approx. 100 m away from the wind turbine) and noisy (NC: 138 dB re 1 μPa at 125.4 Hz; near the turbine), for up to 2 weeks. Measurement of auditory evoked potentials showed that black porgies can hear the sound stimuli under both NC and QC scenarios. Although no significant difference was found in plasma cortisol levels, fish under NC conditions exhibited higher plasma reactive oxygen species (ROS) levels than the control group at week 2. Moreover, alterations were found in the mRNA levels of hepatic antioxidant-related genes (sod1, cat, and gpx), with cat downregulated and gpx upregulated after one week of QC exposure. Our results suggest that the black porgy may adapt to QC levels of noise by modulating the antioxidant system to keep ROS levels low. However, such an antioxidant response was not observed under NC conditions; instead, ROS accumulated to measurably higher levels. This study suggests that continuous OWF operation noise represents a potential stressor to fish. Furthermore, this is the first study to demonstrate that chronic exposure to noise can induce ROS accumulation in fish plasma.

Two presentations at the 10th International Conference on Ecological Informatics!

Information retrieval from marine soundscape by using machine learning-based source separation

Tzu-Hao Lin 1, Tomonari Akamatsu 2, Yu Tsao 3, Katsunori Fujikura 1

1 Department of Marine Biodiversity Research, Japan Agency for Marine-Earth Science and Technology, Japan
2 National Research Institute of Fisheries Science, Japan Fisheries Research and Education Agency, Japan
3 Research Center for Information Technology Innovation, Academia Sinica, Taiwan

In remote sensing of marine ecosystems, visual information retrieval is limited by the low visibility of the ocean environment. In recent years, the marine soundscape has been considered an acoustic sensing platform for the marine ecosystem. By listening to environmental sounds, biological sounds, and human-made noise, it is possible to acoustically identify various geophysical events, soniferous marine animals, and anthropogenic activities. However, sound detection and classification remain challenging due to the lack of an underwater audio recognition database and the simultaneous interference of multiple sound sources. To facilitate the analysis of marine soundscapes, we have employed information retrieval techniques based on non-negative matrix factorization (NMF) to separate sound sources with unique spectral-temporal patterns in an unsupervised manner. NMF is a self-learning algorithm that decomposes an input matrix into a spectral feature matrix and a temporal encoding matrix. Therefore, we can stack two or more layers of NMF to learn the spectral-temporal modulation of k sound sources without any learning database [1]. In this presentation, we will demonstrate the application of NMF to the separation of simultaneous sound sources appearing on a long-term spectrogram. In a shallow-water soundscape, the relative change of a fish chorus can be effectively quantified even in periods with strong mooring noise [2]. In a deep-sea soundscape, cetacean vocalizations, an unknown biological chorus, environmental sounds, and systematic noise can be efficiently separated [3]. In addition, we can use the features learned during blind source separation as prior information for supervised source separation. The self-adaptation mechanism during iterative learning can help search for similar sound sources in other acoustic datasets containing unknown noise types.
Our results suggest that NMF-based source separation can facilitate the analysis of soundscape variability and the establishment of an audio recognition database. It will therefore be feasible to investigate the acoustic interactions among geophysical events, soniferous marine animals, and anthropogenic activities from long-duration underwater recordings.
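The layer-stacking idea can be illustrated with a toy numerical sketch in which a second NMF is run on the temporal encodings produced by the first. This is not the implementation from the cited work; the matrix sizes, ranks, and random stand-in data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def nmf(V, k, n_iter=300, eps=1e-9):
    """One NMF layer: V (rows x cols) ~ W @ H, via multiplicative updates."""
    W = rng.uniform(size=(V.shape[0], k))
    H = rng.uniform(size=(k, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Layer 1: a stand-in long-term spectrogram (16 freq bins x 40 time frames)
# is decomposed into spectral features W1 and temporal encodings H1
V = np.abs(rng.normal(size=(16, 40)))
W1, H1 = nmf(V, k=4)

# Layer 2: the encodings themselves are decomposed, so the second layer
# learns the temporal modulation shared by groups of spectral features
W2, H2 = nmf(H1, k=2)

# Stacked reconstruction: V ~ W1 @ W2 @ H2
recon = W1 @ W2 @ H2
```

The second layer groups first-layer components by how their activations co-vary in time, which is the mechanism that lets stacked NMF associate spectral features with distinct sound sources.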

Improving acoustic monitoring of biodiversity using deep learning-based source separation algorithms

Mao-Ning Tuanmu1, Tzu-Hao Lin2, Joe Chun-Chia Huang1, Yu Tsao3, Chia-Yun Lee1

1Biodiversity Research Center, Academia Sinica, Taiwan
2Department of Marine Biodiversity Research, Japan Agency for Marine-Earth Science and Technology, Japan
3Research Center for Information Technology Innovation, Academia Sinica, Taiwan

Passive acoustic monitoring of the environment has been suggested as an effective tool for investigating the dynamics of biodiversity across spatial and temporal scales. Recent developments in automatic recorders have allowed environmental acoustic data to be collected unattended over long durations. However, one of the major challenges for acoustic monitoring is identifying sounds of target taxa in recordings that usually contain undesired signals from non-target sources. In addition, high variation in the characteristics of target sounds, co-occurrence of sounds from multiple target taxa, and a lack of reference data make it even more difficult to separate acoustic signals from different sources. To overcome these issues, we developed an unsupervised source separation algorithm based on multi-layer (deep) non-negative matrix factorization (NMF). Using reference echolocation calls of 13 bat species, we evaluated the performance of the multi-layer NMF in separating species-specific calls. Results showed that the multi-layer NMF, especially when pre-trained with reference calls, outperformed the conventional supervised single-layer NMF. We also evaluated the performance of the multi-layer NMF in identifying different types of bat calls in recordings collected in the field, and found its call type identification comparable to that of human observers. These results suggest that the proposed multi-layer NMF approach can effectively separate the acoustic signals of different taxa from long-duration field recordings in an unsupervised manner. The approach can thus improve the applicability of passive acoustic monitoring as a tool for investigating the responses of biodiversity to the changing environment.
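The pre-training idea mentioned above can be sketched as a supervised NMF in which the spectral dictionary is fixed to reference call spectra and only the activations are updated. The two "call types" below are hypothetical toy spectra, not real bat calls, and the update scheme is a generic multiplicative rule rather than the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pre-trained dictionary: reference spectra of two hypothetical call types
# (4 frequency bins x 2 call types)
W_ref = np.array([[1.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 1.0]]) + 0.01

# A toy field recording in which the two call types overlap in the last frame
H_true = np.array([[1.0, 1.0, 0.0, 0.0, 0.5],
                   [0.0, 0.0, 1.0, 1.0, 0.5]])
V = W_ref @ H_true

# Supervised NMF: keep the pre-trained dictionary fixed, update only H
eps = 1e-9
H = rng.uniform(size=(2, V.shape[1]))
for _ in range(300):
    H *= (W_ref.T @ V) / (W_ref.T @ W_ref @ H + eps)

# Each call type's contribution can now be reconstructed separately
call_a = np.outer(W_ref[:, 0], H[0])
call_b = np.outer(W_ref[:, 1], H[1])
```

Fixing the dictionary turns separation into activation estimation, which is why reference calls improve performance: the spectral shapes no longer have to be rediscovered from noisy field data.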

Studying wildlife activities by using soundscape information

Oral presentation in 2018 ICEO & SI Conference

Studying wildlife activities by using soundscape information

Tzu-Hao Lin1, Yu Tsao2, Chun-Chia Huang3, and Mao-Ning Tuanmu3

1Department of Marine Biodiversity Research, Japan Agency for Marine-Earth Science and Technology
2Research Center for Information Technology Innovation, Academia Sinica
3Biodiversity Research Center, Academia Sinica

Information on biodiversity change is essential for decision-making in resource exploitation and conservation management. Studies of biodiversity are labor-intensive and time-consuming; therefore, the development of remote observation platforms for wildlife is essential. In recent decades, passive acoustic monitoring has been widely employed to detect vocalizing animals. In addition, various environmental sounds and anthropogenic noises can also be recorded in a soundscape. Thus, a soundscape monitoring network has been considered an acoustic sensing platform for the ecosystem. Although a significant amount of acoustic data can be collected, the analysis of these data remains a challenge for ecologists. In this study, we employed multiple layers of non-negative matrix factorization (NMF) to decompose a spectrogram into individual sound sources. Our results showed that echolocation calls produced by three different bat species can be effectively separated in an unsupervised manner. Even for overlapping signals, the deep NMF can still produce a reliable separation result. Therefore, the integration of NMF-based blind source separation and a soundscape monitoring network can reduce the difficulty of acoustic-based wildlife monitoring in the future.

The full text is available at:
https://drive.google.com/file/d/1Llf9RuyeR4a7k36p_MMj9OZ2apaeZ_iz/view

Training workshop on the acoustical analysis of animal vocalizations

Time: 2018/07/04 (Wed)
Location: Biodiversity Research Center, Academia Sinica
Speaker: Dr. Tzu-Hao Lin (Department of Marine Biodiversity Research, JAMSTEC)

Morning Session: Passive acoustic monitoring
09:00-09:30: Registration
09:30-10:00: Passive acoustic monitoring of wildlife (Lecture)
10:00-10:30: Labeling of biosonar signal (Practice)
10:30-11:30: Automatic detection of biosonar activity (Practice)
11:30-12:00: Discussion
12:00-13:30: Lunch (on your own)

Afternoon Session: Application of PAM in an offshore wind farm
13:30-14:00: Passive acoustic monitoring of soniferous marine animals in an offshore wind farm (Lecture)
14:00-15:00: Searching dolphin biosonars and fish sounds from long-duration recordings (Practice)
15:00-15:30: Break
15:30-16:00: Temporal analysis of acoustic detection results (Practice)
16:00-16:30: Discussion

Please go to this link to register for this training workshop. Due to limited space, only 25 seats are available. The final attendee list will be determined by the organizer, and only successful registrants will be notified by email.