We are hiring! (Closed)

If you are interested in using soundscapes to assess marine biodiversity or in exploring the potential of ecological informatics in marine ecosystems, please contact us. We are hiring two research assistants this year!

The Marine Ecoacoustics and Informatics Lab (MEIL) is led by Dr. Tzu-Hao Lin, a new assistant research fellow at the Biodiversity Research Center, Academia Sinica. The goal of MEIL is to facilitate the monitoring of marine ecosystems via remote sensing platforms and to understand how marine biodiversity changes at regional and global scales. Currently, MEIL focuses mainly on the application of machine learning to audio information retrieval. By analyzing the spatio-temporal variations of underwater sounds, we investigate how ever-growing underwater noise impacts marine animals. Sounds of marine mammals, fishes, and invertebrates, as well as various abiotic sound sources, are all among our research targets.

Our study areas include inshore waters, estuaries, and coral reefs off Taiwan. Dr. Lin also works with researchers from Japan, Hong Kong, and the Philippines on the “Ocean Biodiversity Listening Project,” so you will have many opportunities to visit other countries for collaborative studies. In the future, MEIL will also cover visual-based ecological information and other remote sensing platforms. If you are interested in our research or hope to apply similar techniques in your career, please consider joining us!

Responsibilities:

  1. Organize and analyze underwater recordings (or other types of datasets)
  2. Learn how to use and build tools for ecological informatics
  3. Write research reports
  4. Assist with administrative work

Requirements:

  1. Willing to learn or already capable of writing code in Python, R, or MATLAB
  2. Not prone to seasickness, and happy to carry out fieldwork
  3. Self-motivated and responsible; willing to record and share work progress online
  4. A background in biology, geology, physics, informatics, or engineering

The position starts in mid-April 2020. Salary follows Academia Sinica’s regulations and standards (bachelor’s: NTD 34,356+; master’s: NTD 40,245+) and is negotiable depending on experience. Insurance and benefits are included. We will provide a laptop and the relevant computing facilities for the work. We will also support attendance at domestic and international workshops and symposiums (mainly for scientific presentations).

To apply, please send the following materials to schonkopf@gmail.com (please specify “Application for research assistant” in the subject line):

  1. Why you think you are a good fit for this position, and your career plan
  2. CV/Resume
  3. List of involved projects, publications, and presentations
  4. Contact information for 1-2 references (please make sure your referees have agreed to be contacted)

The Biodiversity Research Center offers a vibrant and diverse research environment. Please visit the website for more information: http://www.biodiv.tw/en/.

The Marine Ecoacoustics and Informatics Lab is hiring two research assistants (Closed)

Have you ever heard the term “soundscape”? Are you interested in understanding the ecology beneath the sea surface by listening to sounds? Have you ever thought about developing automated analysis tools or interactive platforms for ecological research? Or have you heard many people talking about artificial intelligence (AI) and machine learning, but do not know how to apply these techniques to animal behavior research and biodiversity monitoring? If you are interested in combining ecology and informatics, we would like to hear from you!

The Marine Ecoacoustics and Informatics Lab is a new team led by Dr. Tzu-Hao Lin, an assistant research fellow at the Biodiversity Research Center. Our goal is to accelerate marine biodiversity monitoring through digital monitoring platforms and to investigate how regional and global environmental changes driven by human activities impact biodiversity. We currently apply artificial intelligence and machine learning to interpret long-duration underwater recordings, in order to understand how the community composition and diversity of cetaceans, soniferous fishes, and invertebrates change under ever-increasing ocean noise.

Our study sites include a variety of ecosystems around Taiwan and its offshore islands, such as inshore waters, estuaries, and coral reefs. Dr. Lin also collaborates with research teams in Taiwan, Japan, Hong Kong, and the Philippines on the “Ocean Biodiversity Listening Project,” so there will be opportunities to conduct collaborative research in other countries. Beyond underwater sound, the lab will gradually expand to ecological applications of image data, satellite remote sensing, and other digital monitoring platforms. If you are interested in our research direction, or hope to apply similar techniques in your future career, please join this emerging team to learn a variety of informatics tools and explore marine biodiversity!

Responsibilities (the scope and content of the work are open to further discussion):

  1. Organize and analyze underwater recordings (or other digital ecological data)
  2. Learn to use informatics tools
  3. Write research reports
  4. Assist with the lab’s administrative tasks

Requirements:

  1. Interested in learning a programming language, or already familiar with Python, R, or MATLAB
  2. Not prone to seasickness; willing to occasionally go out to enjoy the sea breeze, watch the ocean, dive (not a required skill), and carry out fieldwork
  3. Responsible and proactive in teamwork; willing to document and share work progress online and through videos
  4. Bachelor’s or master’s degree in biology, geography, physics, informatics, electrical engineering, or a related field
  5. Able to read English literature fluently, with basic English communication and writing skills

Salary and benefits:

  1. Salary follows Academia Sinica’s standards (bachelor’s level: NTD 34,356+; master’s level: NTD 40,245+), with labor and health insurance and a year-end bonus
  2. On-campus health checkups, staff discounts at the on-campus Breeze plaza (微風廣場) and gymnasium, and library borrowing services at the center
  3. Two-day weekends, national holidays off, and annual leave in accordance with the Labor Standards Act
  4. A laptop and relevant computing facilities provided during employment
  5. Funding support for presenting research results at domestic and international academic conferences
  6. For those interested in the biodiversity-related sector, we will actively support skill training and development and introduce potential collaborators

Working hours: flexible, nominally 09:00 to 18:00 daily with a one-hour lunch break.

Location: New Greenhouse, Biodiversity Research Center, Academia Sinica

Start date: mid-April 2020 or later

Contact: Tzu-Hao Lin (schonkopf@gmail.com). Please specify “Application for research assistant” in the subject line to ensure delivery. Qualified applicants will be notified separately for a Skype interview.

Required documents (please compile them into a cloud folder and send us the link):

  1. In your email, explain why you want this position, why you are suited to it, and your future career plan
  2. CV/resume (including contact information, work experience, education, and areas of expertise)
  3. Research projects you have participated in and research outputs you have published
  4. Names, affiliations, job titles, and contact information of one to two references (please obtain their consent in advance)

Two recently published popular science articles

Soundscape monitoring can be applied to terrestrial forest ecosystems as well as marine ecosystems. We recently published two popular science articles in Chinese. The first, “亞洲聲景長期監測網” (Asian Long-Term Soundscape Monitoring Network), appeared in the Forestry Research Newsletter (林業研究專訊); it introduces the potential of soundscape ecology for ecological monitoring and describes the current status of the Asian soundscape monitoring network developed in Taiwan and Southeast Asia in recent years.

亞洲聲景長期監測網 (Asian Long-Term Soundscape Monitoring Network)

The second article, “在爭議中尋求永續發展─離岸風電生態評估再進化” (Seeking Sustainable Development amid Controversy: Upgrading the Ecological Assessment of Offshore Wind Power), was published in Science Monthly (科學月刊). It discusses how, under Taiwan’s current green energy policy, the large number of offshore wind farm developments may alter ecosystem services, while the current environmental impact assessments evaluate only individual development projects and therefore cannot provide an effective overall assessment. We propose building a marine biodiversity monitoring network for Taiwan on the basis of repeatable, highly reproducible surveys. Such a network could not only track the impacts of the offshore wind energy policy on the ecosystems of Taiwan’s western waters, but could also give rise to new ecological information services as infrastructure for the sustainable development of an ocean nation.

在爭議中尋求永續發展─離岸風電生態評估再進化 (Seeking Sustainable Development amid Controversy: Upgrading the Ecological Assessment of Offshore Wind Power)

If you are interested, you are welcome to read these two articles or contact me.

A new review in Remote Sensing in Ecology and Conservation

Source separation in ecoacoustics: A roadmap towards versatile soundscape information retrieval

https://doi.org/10.1002/rse2.141

Tzu‐Hao Lin
Research Institute for Global Change, Japan Agency for Marine‐Earth Science and Technology (JAMSTEC)

Yu Tsao
Research Center for Information Technology Innovation, Academia Sinica

A comprehensive assessment of ecosystem dynamics requires the monitoring of biological, physical and social changes. Changes that cannot be observed visually may be trackable acoustically through soundscape analysis. Soundscapes vary greatly depending on geophysical events, biodiversity and human activities. However, retrieving source-specific information from geophony, biophony and anthropophony remains a challenging task, due to interference by simultaneous sound sources. Audio source separation is a technique that aims to recover individual sound sources when only mixtures are accessible. Here, we review techniques of monaural audio source separation with the fundamental theories and assumptions behind them. Depending on the availability of prior information about the source signals, the task can be approached as blind source separation or model-based source separation. Most blind source separation techniques depend on assumptions about the behaviour of the source signals, and their performance may deteriorate when the assumptions fail. Model-based techniques generally do not require specific assumptions, and the models are directly learned from labelled data. With the recent advances of deep learning, model-based techniques can yield state-of-the-art separation performance and accordingly facilitate content-based audio information retrieval. Source separation techniques have been adopted in several ecoacoustic applications to evaluate the contributions from biodiversity and anthropogenic disturbance to soundscape dynamics. They can also be employed as nonlinear filters to improve the recognition of bioacoustic signals. To effectively retrieve ecological information from soundscapes, source separation is a crucial tool. We believe that the future integration of ecological hypotheses and deep learning can realize high-performance source separation for ecoacoustics and accordingly improve soundscape-based ecosystem monitoring. Therefore, we outline a roadmap for applying source separation to assist in soundscape information retrieval and hope to promote cross-disciplinary collaboration.
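
To make the distinction between blind and model-based separation more concrete, here is a minimal Python sketch of supervised NMF separation: spectral dictionaries are learned from labelled training recordings, and only the activations are estimated for a mixture before a soft mask is applied. The file names, component counts, and STFT settings are hypothetical, and this is only an illustration of the general idea, not the pipeline evaluated in the review.

```python
# Supervised (model-based) NMF source separation -- a minimal sketch.
# File names and parameters below are hypothetical.
import numpy as np
import librosa
from sklearn.decomposition import NMF, non_negative_factorization

def magnitude_spectrogram(path, n_fft=1024, hop=512):
    y, sr = librosa.load(path, sr=None)
    return np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)).astype(np.float64)

# 1. Learn a spectral dictionary for each source from labelled training audio.
S_bio = magnitude_spectrogram("fish_chorus_training.wav")       # hypothetical file
S_noise = magnitude_spectrogram("shipping_noise_training.wav")  # hypothetical file
W_bio = NMF(n_components=20, init="nndsvda", max_iter=500).fit(S_bio.T).components_
W_noise = NMF(n_components=20, init="nndsvda", max_iter=500).fit(S_noise.T).components_
W_all = np.vstack([W_bio, W_noise])        # fixed dictionary, shape (40, n_freq)

# 2. For a mixture, keep the dictionary fixed and estimate the activations only.
S_mix = magnitude_spectrogram("mixture.wav")                    # hypothetical file
H, _, _ = non_negative_factorization(
    S_mix.T, H=W_all, n_components=W_all.shape[0], update_H=False, max_iter=500)

# 3. Reconstruct each source and separate the mixture with a soft (Wiener-like) mask.
rec_bio = (H[:, :20] @ W_bio).T
rec_noise = (H[:, 20:] @ W_noise).T
biophony_estimate = S_mix * rec_bio / (rec_bio + rec_noise + 1e-12)
```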

Data Science School @ Kyoto University

On December 18, Dr. Tomonari Akamatsu and I will hold a one-day workshop at the Data Science School of Kyoto University.

水中のビックデータ:音情報の取得と活用
Big data in the water: measurement and application of sound information

Sound waves have long been used for remote sensing in the water, where visibility is limited. Underwater acoustic surveys were once driven by large-scale national programs for military purposes and seabed resources, but the miniaturization and growing storage capacity of underwater acoustic instruments, together with dramatic advances in computing power, have made them useful for ocean research as well. Sound is now attracting attention as a tool for studying and conserving marine life, for example in visualizing distributions and estimating densities from the sounds animals produce, and in soundscape and noise impact assessments. On the other hand, because sound itself is invisible, there are many pitfalls in how it is measured and analyzed; if these go unnoticed, the analysis can lead to results that cannot occur in reality.

This Data Science School session will cover the basics of measuring underwater sound correctly and give an overview of the ecological information on marine life that can be obtained from the big data collected by long-term, multi-site observations. It will also touch on international developments in assessing the biological impacts of noise from offshore wind power, seabed resource exploration, and shipping. Advanced methods for the automatic classification of environmental sounds will be introduced in the seminar by Dr. Tzu-Hao Lin that follows.

This course will introduce basic techniques of audio information retrieval using a Python-based open toolbox (Soundscape Viewer). Our goal is to visualize natural soundscapes and explore acoustic diversity in long-duration field recordings. The course will also introduce how to build an open science project using Google Colaboratory and GitHub.
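
As a rough illustration of what this kind of analysis can look like, the Python sketch below builds a simple long-term spectrogram (one median power spectrum per file) from a folder of field recordings. The folder path and analysis settings are hypothetical, and this is a generic example rather than the Soundscape Viewer API.

```python
# Build a long-term spectrogram from a folder of recordings -- a minimal sketch.
# The folder path and analysis settings are hypothetical.
import glob
import numpy as np
import soundfile as sf
from scipy.signal import welch

segments = []
for path in sorted(glob.glob("recordings/*.wav")):    # hypothetical folder
    y, fs = sf.read(path)
    if y.ndim > 1:
        y = y.mean(axis=1)                            # mix down to mono
    # One median power spectrum per file keeps long deployments manageable.
    f, pxx = welch(y, fs=fs, nperseg=4096, average="median")
    segments.append(10 * np.log10(pxx + 1e-20))       # power spectral density in dB

# Columns are files (time), rows are frequency bins.
long_term_spectrogram = np.column_stack(segments)
```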

International Workshop on Coral Reef Resilience In The Changing Climate @ Academia Sinica

The 2019 International Workshop on Coral Reef Resilience in the Changing Climate will be held at Academia Sinica, Taipei, Taiwan, on December 6-7. The theme this year is Coral Reef Interdependence and Governance. I will be there to give a talk. Please join us if you are interested!

Exploring the dynamics of coral reef ecosystem through soundscapes

Coral reefs host a diverse array of marine organisms, and this high biodiversity provides local communities with revenues from fisheries and tourism. However, the services provided by coral reefs are vulnerable to natural and anthropogenic impacts. It is thus critical to track trends in the marine biodiversity and ecosystem services of coral reefs, as these are vital for managers and stakeholders to identify areas with declining reef health or periods that are vulnerable to disturbances. Despite decades of experience, monitoring reef biodiversity remains a challenging task, in part because conventional visual and diver-based survey methods are intermittent and expensive. Soundscape-based ecosystem monitoring can serve as a new tool for marine conservation, given that many biological, geophysical, and anthropogenic activities generate underwater sounds. The first part of this presentation will introduce an open toolbox for soundscape information retrieval, the Soundscape Viewer, which includes three analysis modules: (1) visualization of long-duration recordings, (2) audio source separation, and (3) identification of audio events. The second part of the presentation will feature the coral reef soundscapes of Okinawa, Japan, and Cebu, the Philippines. Based on the Soundscape Viewer, it is possible to characterize habitat-specific soundscapes, evaluate the spatio-temporal variations of acoustic diversity, and investigate the interactions between different sound sources. I hope to invite reef scientists to join the Ocean Biodiversity Listening Project, which aims to collect long-duration underwater sounds from marine ecosystems. With open tools and open data, managers and stakeholders will be able to predict changes in reef biodiversity and ecosystem services in the future.


Learning discriminative bioacoustic features by deep-listening wildlife sounds

Oral presentation at the 6th Annual Meeting of the Society for Bioacoustics

Learning discriminative bioacoustic features by deep-listening wildlife sounds

Tzu-Hao Lin1, Yu Tsao2, Mao-Ning Tuanmu3

1 Marine Biodiversity and Environmental Assessment Research Center, Research Institute for Global Change, Japan Agency for Marine-Earth Science and Technology, 2-15, Natsushima, Yokosuka City, Kanagawa, 237-0061, Japan

2 Research Center for Information Technology Innovation, Academia Sinica, 128 Academia Road, Section 2, Nankang, Taipei 115, Taiwan

3 Biodiversity Research Center, Academia Sinica, 128 Academia Road, Section 2, Nankang, Taipei 115, Taiwan

Finding acoustic features that can reliably distinguish species of vocalizing animals or individual behaviors is critical for the analysis of bioacoustic signals, for example to understand the evolution and function of bioacoustic traits. Most previous studies characterized animal vocalizations by extracting hand-crafted features from the signal envelope or spectral representations, but how to appropriately select acoustic features remains unclear. Recently, studies applying deep neural networks have embraced a data-oriented approach that learns acoustic features from training data instead of designing hand-crafted features, and considerable improvements have been achieved in audio information retrieval. However, it remains difficult to understand the acoustic features learned by deep neural networks because of their low interpretability. In addition to deep neural networks, another deep-learning framework based on multiple layers of non-negative matrix factorization (ML-NMF) has been preliminarily tested in the analysis of bioacoustic signals. The ML-NMF learns to reconstruct the input data by using a small set of encoding vectors. Basis functions learned in shallow layers represent the acoustic dictionary for characterizing signals of interest, while basis functions in deeper layers describe the temporal interactions among the basis functions of the shallow layers. The ML-NMF can learn temporal activations from different animal vocalizations given only the number of sources, without any training labels. It can also learn from training labels that contain the occurrence of animal vocalizations. When the ML-NMF effectively learns the temporal activations, it delivers a set of discriminative features for animal vocalizations. In this presentation, we will demonstrate the learning capability of ML-NMF on animal vocalizations in different types of audio data and recording conditions. We hope to invite bioacoustic researchers to test the ML-NMF in the analysis of inter- and intra-specific variations and to discuss how similar techniques can enhance the passive acoustic monitoring of wildlife.
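
For readers unfamiliar with the idea, the Python sketch below stacks two plain NMF decompositions in the spirit of ML-NMF: the first layer learns a spectral dictionary and its activations, and the second layer factorizes those activations to find recurring combinations of the layer-1 bases. The data, component counts, and the exact layering are simplified assumptions rather than the authors' formulation; capturing temporal interactions as described above would additionally require time-lagged copies of the first-layer activations.

```python
# Two-layer NMF in the spirit of ML-NMF -- a minimal sketch on synthetic data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((513, 2000))           # stand-in magnitude spectrogram: (freq, frames)

# Layer 1: X ~ W1 @ H1. Columns of W1 are spectral bases, rows of H1 their activations.
layer1 = NMF(n_components=32, init="nndsvda", max_iter=400)
W1 = layer1.fit_transform(X)          # (513, 32)
H1 = layer1.components_               # (32, 2000)

# Layer 2: H1 ~ W2 @ H2. Columns of W2 describe recurring combinations of
# layer-1 bases, i.e. candidate discriminative features for distinct sources.
layer2 = NMF(n_components=8, init="nndsvda", max_iter=400)
W2 = layer2.fit_transform(H1)         # (32, 8)
H2 = layer2.components_               # (8, 2000)
```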

International Sound Measurement Workshop

Date: November 19, 2019
Location: Yokohama Institute of Earth Science, JAMSTEC, Japan
Venue: Information Science building 404 (4th floor)

The effect of noise on aquatic animals has become an international issue in recent years. Seismic surveys, oil and gas exploitation, shipping, and the development of renewable energy can cause intense or long-term noise exposure in the water. Aquatic animals listen to underwater soundscapes to assist dispersal, habitat selection, and communication. However, international agreement on guidance for measuring anthropogenic low-frequency sounds has not yet been reached. In this workshop, Dr. Latha Baskar from the National Institute of Ocean Technology, India, will introduce the measurement, analysis, and calibration of low-frequency sound, followed by presentations on soundscape analysis for ocean biodiversity and the effects of noise on marine species.

Program

13:10 Opening remarks

13:15 Latha Baskar (NIOT)
Underwater acoustic transducer testing and calibration – Indian facility with International standards

14:15 Tzu-hao Lin (JAMSTEC)
Ocean Biodiversity Listening Project

14:45 Tomonari Akamatsu (FRA)
Underwater noise issue and environmental assessment

15:15 Discussion

16:00 Closing remarks

A new article in Trends in Ecology & Evolution

Using Soundscapes to Assess Deep-Sea Benthic Ecosystems

https://doi.org/10.1016/j.tree.2019.09.006

Tzu-Hao Lin, Chong Chen, Hiromi Kayama Watanabe, Shinsuke Kawagucci, Hiroyuki Yamamoto
Japan Agency for Marine-Earth Science and Technology (JAMSTEC)

Tomonari Akamatsu
National Research Institute of Fisheries Science, Japan Fisheries Research and Education Agency

Targets of deep-sea mining commonly coincide with biodiversity hotspots, such as hydrothermal vents. The resilience of these ecosystems relies on larval dispersal, which may be directed by habitat-specific soundscapes. We urge a global effort to implement soundscape as a conservation tool to assess anthropogenic disruption to deep-sea benthic ecosystems.

Characterizing diversity of fish sounds using audio information retrieval

Oral presentation at the Thirteenth Annual Meeting of the Asian Fisheries Acoustics Society

Characterizing diversity of fish sounds using audio information retrieval

Tzu-Hao Lin1, Tomonari Akamatsu2, and Colin Kuo-Chang Wen3

1Research Institute for Global Change, Japan Agency for Marine-Earth Science and Technology (JAMSTEC)
2National Research Institute of Fisheries Science, Japan Fisheries Research and Education Agency
3Department of Life Science, Tunghai University

The dynamics of fish communities provide crucial information for evaluating the sustainable use of fishery resources. Many marine ecosystems, such as coral reefs and estuaries, harbor a diverse array of soniferous fish, which produce various types of sounds for mating, territorial defense, and communication. Fish sounds have therefore been considered acoustic cues that can assist the remote sensing of fish communities. However, a soundscape contains various environmental, biological, and anthropogenic sounds, which makes acoustic analysis challenging. Moreover, a comprehensive audio database for identifying different species of soniferous fish is still not available. To overcome these challenges, we integrated tools of audio information retrieval into the analysis of marine soundscapes, even when no prior information about the sound sources is available. First, we employed long-term spectrograms to visualize the spectral-temporal variations of long-duration underwater recordings. We then integrated the constraint of diurnal periodicity into audio source separation to identify fish choruses. Finally, we extracted transient fish sounds based on prominent patterns of inter-pulse intervals. In this presentation, we will demonstrate the feasibility of applying these audio information retrieval tools to the diversity assessment of fish sounds in marine ecosystems, including algal reefs, coral reefs, estuaries, and the continental shelf. In the future, these tools will be deployed on cloud computing platforms and connected to online archives of soundscape recordings. A scalable information retrieval platform for marine soundscapes will then become possible and contribute to the conservation and management of soniferous fish and their marine ecosystems.
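
As a simplified illustration of the last step described above, the Python sketch below detects envelope pulses in a band-passed recording and computes inter-pulse intervals. The file name, band limits, and thresholds are hypothetical and are not the settings used in this study.

```python
# Pulse detection and inter-pulse intervals (IPIs) -- a minimal sketch.
# The file name, band limits, and thresholds are hypothetical.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt, hilbert, find_peaks

y, fs = sf.read("reef_recording.wav")                 # hypothetical file
if y.ndim > 1:
    y = y.mean(axis=1)                                # mix down to mono

# Band-pass around typical fish-call frequencies, then take the envelope.
sos = butter(4, [100, 1000], btype="bandpass", fs=fs, output="sos")
envelope = np.abs(hilbert(sosfiltfilt(sos, y)))

# Detect pulses as envelope peaks above an adaptive threshold.
threshold = np.median(envelope) + 5 * np.std(envelope)
peaks, _ = find_peaks(envelope, height=threshold, distance=int(0.01 * fs))

# Inter-pulse intervals in milliseconds; regular IPIs suggest a pulsed fish call.
ipi_ms = np.diff(peaks) / fs * 1000
candidate_ipis = ipi_ms[(ipi_ms > 5) & (ipi_ms < 200)]
```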