2019 INTERNATIONAL WORKSHOP ON MARINE SOUNDSCAPE

Please fill in the registration form on the event website.

[Flyer image: Soundscape_Workshop_Flyer_P1]


Applications of the hypothesis-driven approach to soundscape information retrieval

At this workshop, we have several presentations that use the hypothesis-driven approach to soundscape information retrieval (SIR) to analyze the spatiotemporal dynamics of marine and terrestrial soundscapes.

  1. Tomonari AKAMATSU, Tzu-Hao LIN, Frederic SINNIGER, Saki HARII. Soundscape phenology in coral reef.
  2. Colin Wen. Human disturbance on fish, fisheries and marine soundscape in an intertidal coralline algal reef, Taoyuan.
  3. Florence EVACITAS, Tzu-Hao LIN. Coral reef soundscapes of Cebu, Philippines: Initial results and future directions.
  4. Shih-Ching YEN, Tzu-Hao LIN, Pei-Jen LEE SHANER. An application of acoustic and visual information to monitor activities of sika deer.

We also have one presentation on the chronic effects of underwater noise on fish.

  1. Yi Ta SHAO, Tzu-Hao LIN. Listening to underwater noise: impacts of chronic noise exposure on fishes.

Evaluating changes in the marine soundscape of an offshore wind farm via machine learning-based source separation

Oral presentation at Underwater Technology 2019, Kaohsiung

Tzu-Hao Lin1, Hsin-Te Yang2, Jie-Mao Huang2, Chiou-Ju Yao3, Yung-Shun Lien4, Pei-Jung Wang4, Fang-Yu Hu4

1Japan Agency for Marine-Earth Science and Technology (JAMSTEC), Japan
2Observer Ecological Consultant, Taiwan
3National Museum of Natural Science, Taiwan
4Industrial Technology Research Institute, Taiwan

Investigating the ecological effects of offshore wind farms requires comprehensive surveys of the marine ecosystem. Recently, monitoring of marine soundscapes has been included in rapid appraisals of geophysical events, marine fauna, and human activities. Machine learning is widely applied in acoustic research to improve the efficiency of audio processing. However, its use in analyzing marine soundscapes remains limited due to a general lack of human-annotated databases. In this study, we used unsupervised learning to recognize different underwater sound sources, and we quantified the temporal, spatial, and spectral variability of long-term underwater recordings collected near Phase I of the Formosa I wind farm. One source-separation model was developed to recognize choruses made by fish and snapping shrimp, as well as shipping noise; another was developed to identify transient fish calls and the echolocation clicks of marine mammals. Both models were trained in an unsupervised manner using periodicity-coded non-negative matrix factorization (PC-NMF). Once the sound sources are separated, acoustic events can be identified using Gaussian mixture models. These information retrieval techniques facilitate future investigations of spatiotemporal changes in marine soundscapes and allow an annotated database to be built efficiently. The resulting soundscape information can be used to evaluate the potential impacts of noise-generating activities on soniferous marine animals and their acoustic behavior before, during, and after the development of offshore wind farms.
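
The separation-then-detection chain named in the abstract can be illustrated with a minimal sketch: factorize a long-term spectrogram, reconstruct one separated source, and flag event frames with a two-component Gaussian mixture model. Plain NMF stands in here for the PC-NMF actually used in the study, and all data and parameter values are hypothetical placeholders, not the study's settings.

    # Minimal sketch of the separation-then-detection chain described in
    # the abstract. Plain NMF stands in for the PC-NMF used in the study,
    # and the random "spectrogram" is a hypothetical placeholder for a
    # long-term spectrogram (frequency bins x time frames).
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    spectrogram = rng.random((128, 1440))   # e.g., one day at one-minute frames

    # Factorize into k spectral bases (W) and temporal encodings (H).
    nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(spectrogram)      # (128, 4) spectral profiles
    H = nmf.components_                     # (4, 1440) activations over time

    # Reconstruct the spectrogram of one separated source (component 0).
    source0 = np.outer(W[:, 0], H[0])

    # Detect events in that source: model frame-wise energy with a
    # two-component Gaussian mixture and treat the high-mean component
    # as "event" frames, the other as background.
    energy = source0.sum(axis=0).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(energy)
    event_label = int(np.argmax(gmm.means_))
    event_frames = np.where(gmm.predict(energy) == event_label)[0]
    print(f"{event_frames.size} candidate event frames")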

Listening to the ecosystem: an integrative approach of informatics and ecoacoustics

We will attend the Symposium of Integrative Biology: Biodiversity in Asia, to be held at Kyoto University next week, to present our recent progress in the information retrieval of marine soundscapes.

Last year, we released the Soundscape Viewer toolbox, which aims to help researchers evaluate changes in the marine soundscape without any recognition database. In this poster, we summarize the general procedure of soundscape information retrieval and demonstrate its application to studying the phenology of soniferous marine animals in estuarine waters, upper mesophotic corals, and continental shelf environments.

[Poster image: Kyoto_BIOD_poster]

You can also find a PDF copy of this poster here.
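
The procedure summarized in the poster starts from a long-term spectrogram, which compresses days of audio into one matrix by summarizing consecutive short-time spectra. Below is a minimal sketch of that first step; the sampling rate, window settings, and one-minute aggregation are illustrative assumptions, not the toolbox defaults.

    # Minimal sketch: build a long-term spectrogram by taking the median
    # power spectrum of each one-minute segment of audio. The sampling
    # rate, window settings, and random signal are illustrative only.
    import numpy as np
    from scipy import signal

    fs = 8000                               # assumed sampling rate (Hz)
    audio = np.random.randn(fs * 600)       # placeholder: 10 minutes of audio

    f, t, sxx = signal.spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
    frames_per_min = int(60 / (t[1] - t[0]))
    n_min = sxx.shape[1] // frames_per_min

    # The median suppresses transient sounds, so the persistent spectral
    # structure of the soundscape stands out in the long-term view.
    lts = np.stack(
        [np.median(sxx[:, i * frames_per_min:(i + 1) * frames_per_min], axis=1)
         for i in range(n_min)],
        axis=1)                             # (frequency bins, minutes)
    print(lts.shape)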

The soundscape is a mixture of biophony, geophony, and anthrophony; disentangling the different sound sources is therefore a key step in exploring the complexity of a soundscape. Our work is still at an early stage, and collaborations in ecoacoustics, acoustic information retrieval, and related subjects are very welcome!

ASA Victoria presentations are now available online!

Two presentations were given during the ASA Victoria meeting. You can now access the slides and recordings via the ASA website:

Machine Learning and Data Science Approaches in Ocean Acoustics II:
Applying machine-learning based source separation techniques in the analysis of marine soundscapes

Hot Topics in Acoustics:
Information retrieval from a soundscape by using blind source separation and clustering

Feel free to contact me if you have any questions.

New article online: Robust S1 and S2 heart sound recognition based on spectral restoration and multi-style training

Robust S1 and S2 heart sound recognition based on spectral restoration and multi-style training

Biomedical Signal Processing and Control, 49: 173-180 (2019)
https://www.sciencedirect.com/science/article/pii/S1746809418302787?via%3Dihub

Yu Tsao, Tzu-Hao Lin
Research Center for Information Technology Innovation (CITI) at Academia Sinica, Taipei, Taiwan

Fei Chen
Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Xueyuan Road 1088#, Xili, Nanshan District, Shenzhen, China

Yun-Fan Chang, Chui-Hsuan Cheng, Kun-Hsi Tsai
iMediPlus Inc., Hsinchu, Taiwan

Recently, we proposed a deep learning-based heart sound recognition framework that provides high recognition performance under clean testing conditions. However, the recognition performance can degrade notably when noise is present in the recording environment. This study investigates a spectral restoration algorithm that reduces noise components in heart sound signals to achieve robust S1 and S2 recognition in real-world scenarios. In addition to the spectral restoration algorithm, a multi-style training strategy is adopted to train a robust acoustic model by incorporating acoustic observations from both original and restored heart sound signals. We term the proposed method SRMT (spectral restoration and multi-style training). The experimental procedure was as follows: First, an electronic stethoscope was used to record actual heart sounds, and noisy signals were artificially generated at different signal-to-noise ratios (SNRs). Second, an acoustic model based on deep neural networks (DNNs) was trained using the original heart sounds and heart sounds processed by spectral restoration. Third, the performance of the trained model was evaluated in terms of accuracy, precision, recall, specificity, and F-measure. The results confirm the effectiveness of the proposed method for recognizing heart sounds in noisy environments. A model trained with SRMT outperforms one trained on clean data, with a 2.36% average accuracy improvement (from 85.44% to 87.80%) over clean, 20 dB, 15 dB, 10 dB, 5 dB, and 0 dB SNR conditions; the improvement is more notable in low-SNR conditions, reaching 3.87% (from 82.83% to 86.70%) at 0 dB SNR.
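
To make the training strategy concrete, here is a rough sketch of the data-preparation side of multi-style training. The paper's spectral restoration algorithm is specific to that work; plain per-frame spectral subtraction is used below only as a stand-in, and all signals, names, and parameter values are hypothetical.

    # Sketch of the data side of multi-style training: mix noise into a
    # clean heart sound at several SNRs, "restore" each noisy signal, and
    # pool original and restored examples into one training set. Plain
    # spectral subtraction below is only a stand-in for the paper's
    # spectral restoration algorithm; all signals are placeholders.
    import numpy as np

    def mix_at_snr(clean, noise, snr_db):
        """Scale noise so the mixture has the requested SNR in dB."""
        noise = noise[:len(clean)]
        scale = np.sqrt(np.mean(clean ** 2) /
                        (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
        return clean + scale * noise

    def spectral_subtraction(noisy, noise_mag, n_fft=512):
        """Crude per-frame magnitude subtraction; illustrative only."""
        restored = noisy.copy()
        for start in range(0, len(noisy) - n_fft + 1, n_fft):
            frame = np.fft.rfft(noisy[start:start + n_fft])
            mag = np.maximum(np.abs(frame) - noise_mag, 0.0)
            restored[start:start + n_fft] = np.fft.irfft(
                mag * np.exp(1j * np.angle(frame)), n=n_fft)
        return restored

    rng = np.random.default_rng(0)
    clean = rng.standard_normal(16000)            # placeholder heart sound
    noise = rng.standard_normal(16000)            # placeholder noise recording
    noise_mag = np.abs(np.fft.rfft(noise[:512]))  # assumed noise estimate

    train_set = [clean]
    for snr_db in [20, 15, 10, 5, 0]:             # the SNR ladder in the paper
        noisy = mix_at_snr(clean, noise, snr_db)
        train_set.append(noisy)                   # original (noisy) style
        train_set.append(spectral_subtraction(noisy, noise_mag))  # restored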

Before February 3, 2019, you can download a PDF copy from this link.

Toolbox online: Soundscape_Viewer

Last year, we published the periodicity-coded non-negative matrix factorization (PC-NMF), which has been demonstrated to work in various ecosystems. Recently, we integrated PC-NMF and k-means clustering into a toolbox for soundscape information retrieval. With this toolbox, one can explore the variability of a soundscape and recognize different audio events without a recognition database.
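
As a simplified sketch of the idea behind PC-NMF (the published algorithm differs in detail): basis functions whose temporal activations share a periodicity, such as a diel or tidal rhythm, can be grouped into the same source by clustering their periodicity spectra. The data and parameter values below are placeholders.

    # Simplified sketch of the idea behind PC-NMF: basis functions whose
    # activations share a periodicity (e.g., diel vs. tidal rhythms) are
    # grouped into the same source. The published algorithm differs in
    # detail; this only illustrates the periodicity-clustering step.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(1)
    spectrogram = rng.random((128, 2880))   # placeholder: two days of minutes

    nmf = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(spectrogram)      # spectral bases
    H = nmf.components_                     # temporal encodings

    # Periodicity features: magnitude spectrum of each encoding series,
    # normalized so clustering compares rhythm rather than loudness.
    P = np.abs(np.fft.rfft(H, axis=1))
    P /= P.sum(axis=1, keepdims=True)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(P)

    # Reconstruct one separated source from its group of basis functions.
    source = W[:, labels == 0] @ H[labels == 0]
    print(source.shape)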

Please go to the following link to download the code of Soundscape_Viewer. If you register an account on CodeOcean, you will be able to upload your long-term spectrograms and run the analysis in the cloud. If you have a MATLAB license, you can also execute Soundscape_viewer.m to launch a graphical user interface.

https://codeocean.com/2018/11/16/demonstration-of-soundscape-separation-by-using-the-soundscape-viewer/code