Computational Analysis of Georgian Vocal Music and Beyond (MU 2686/13-2 (No. 401198673))

Third-party funded individual grant


Acronym: MU 2686/13-2 (No. 401198673)

Start date: 01.01.2023

End date: 31.12.2025

Website: https://www.audiolabs-erlangen.de/fau/professor/mueller/projects/gvm+


Project details

Short description

In the project's first phase (initial proposal), our main objective was to advance ethnomusicological research on traditional Georgian vocal music by employing computational methods from audio signal processing and music information retrieval (MIR). By developing novel computational tools and applying them to a concrete music scenario, we explored the potential of computer-assisted methods for reproducible, corpus-driven research in the humanities. Furthermore, by systematically processing and annotating unique collections of field recordings, we contributed to the preservation and dissemination of Georgia's rich musical heritage.

In the project's second phase (renewal proposal), we broaden our perspective and set ourselves new goals. First, we will systematically expand and improve our computational tools for analyzing vocal music by combining traditional model-based and recent data-driven approaches. In particular, we aim for substantial progress on notoriously difficult MIR tasks such as estimating multiple fundamental frequencies and analyzing harmonic and melodic intonation in polyphonic singing. To explore the scalability and applicability of our methods, we will go beyond traditional Georgian vocal music and consider other corpora of recorded singing, including Western choral music, children's songs, and traditional music from other musical cultures.

Another fundamental goal of the second phase is to explore the potential of novel contact microphones that overcome some limitations of the previously used headset and larynx microphones. We plan to use sensors that minimize external acoustic noise while offering high sensitivity to body vibrations in a frequency range from a few Hertz up to 2200 Hertz. This extensive range covers the fundamental frequency of the vibrations produced by the larynx (as well as several overtones) and thus enables the analysis of speech and singing as well as of body vibrations as low-frequency as the heartbeat. Such novel technology will lay the basis for generating the high-quality training data required by recent deep-learning-based MIR techniques and will open new paths for investigating how singers synchronize body functions (e.g., heart rate variability, respiration) during singing.
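To give a flavor of the model-based side of the MIR tasks mentioned above, the following minimal sketch estimates the fundamental frequency of a single monophonic tone via the classical autocorrelation method. This is a simplified, hypothetical illustration only; it is not the project's actual multi-F0 estimation approach, which is not specified here. The lag search range is bounded using the sensor frequency range mentioned above (up to 2200 Hz) as the upper limit.

```python
import numpy as np

def estimate_f0(signal, sr, fmin=60.0, fmax=2200.0):
    """Estimate the fundamental frequency (Hz) of a monophonic signal
    using the autocorrelation method: the strongest autocorrelation
    peak within the plausible lag range corresponds to the period."""
    x = signal - np.mean(signal)                      # remove DC offset
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    lag_min = int(sr / fmax)                           # shortest period
    lag_max = int(sr / fmin)                           # longest period
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag

# Synthetic 220 Hz test tone with two overtones (440 Hz, 660 Hz)
sr = 22050
t = np.arange(int(0.5 * sr)) / sr
tone = (np.sin(2 * np.pi * 220 * t)
        + 0.5 * np.sin(2 * np.pi * 440 * t)
        + 0.25 * np.sin(2 * np.pi * 660 * t))
print(round(estimate_f0(tone, sr), 1))
```

Real polyphonic recordings require far more sophisticated methods (e.g., salience-based or data-driven multi-F0 estimators), since overlapping voices produce multiple competing periodicities in the autocorrelation.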
