Formant frequency tuning of three-dimensional MRI-based vocal tracts for the finite element synthesis of vowels
Publication date
2024-05
ISSN
2329-9304
Abstract
Nowadays it is possible to acquire three-dimensional (3D) geometries of the vocal tract by means of Magnetic Resonance Imaging (MRI) and introduce them into a 3D acoustic model to obtain their frequency response. However, it is not yet feasible to consider small variations in these geometries to tune the formant frequencies and produce, for instance, specific voice or singing effects. This work presents a methodology for doing so. First, the 3D MRI-based vocal tract geometry is discretized into a set of cross-sections from which its 1D area function is computed. A 1D tuning algorithm based on sensitivity functions is then used to iteratively perturb the area function until the desired target frequencies are obtained. The algorithm also allows vocal tract length perturbations between cross-sections to be considered. Finally, a 3D vocal tract is reconstructed using the shape of the original cross-sections with the new areas, and with the spacing between cross-sections updated according to the computed length variations. In this way it is possible to produce voice and singing effects by tuning the first formants of vowels while keeping the high energy content of the spectrum of 3D models at a low computational cost. Several examples are presented, ranging from shifting the first formant (F1) to lower and higher frequencies, to the generation of formant clusters. The latter is the case of the singing formant, in which F3, F4 and F5 are grouped together, or the case of overtone singing, where F2 and F3 form a single peak to generate an overtone above the fundamental frequency.
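The abstract outlines the tuning loop: extract a 1D area function from the 3D geometry, iteratively perturb it using sensitivity functions until the target formant frequencies are reached, and rebuild the 3D tract. The sketch below (Python) illustrates that loop in a deliberately simplified setting and is not the paper's implementation: it models the tract as concatenated lossless cylindrical sections via chain matrices, locates formants as zeros of the chain-matrix D element (ideal open mouth, no losses), estimates sensitivities by finite differences rather than the analytical energy-based sensitivity functions used in the paper, and perturbs only the areas, not the section lengths. All function names and parameter values are illustrative assumptions.

import numpy as np

RHO, C = 1.2, 350.0  # air density (kg/m^3) and speed of sound (m/s), assumed values

def chain_D(freqs, areas, lengths):
    # D element of the lossless chain (transmission-line) matrix of the
    # concatenated cylindrical sections, evaluated over an array of frequencies.
    # With an ideal open mouth end, the tract resonances are the zeros of D.
    k = 2.0 * np.pi * np.atleast_1d(freqs) / C
    d11 = np.ones_like(k, dtype=complex)
    d12 = np.zeros_like(d11)
    d21 = np.zeros_like(d11)
    d22 = np.ones_like(d11)
    for A, L in zip(areas, lengths):
        Z = RHO * C / A                        # characteristic impedance of the section
        c_, s_ = np.cos(k * L), np.sin(k * L)
        m11, m12, m21, m22 = c_, 1j * Z * s_, 1j * s_ / Z, c_
        d11, d12, d21, d22 = (d11 * m11 + d12 * m21, d11 * m12 + d12 * m22,
                              d21 * m11 + d22 * m21, d21 * m12 + d22 * m22)
    return d22.real

def formants(areas, lengths, n=3, fmax=5000.0, df=2.0):
    # First n resonance frequencies, found as sign changes of D(f)
    # refined by linear interpolation between grid points.
    f = np.arange(df, fmax, df)
    d = chain_D(f, areas, lengths)
    idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][:n]
    return f[idx] - d[idx] * df / (d[idx + 1] - d[idx])

def tune_area_function(areas, lengths, targets, iters=30, damping=0.5):
    # Iteratively perturb the area function until the first formants reach the
    # target frequencies. Sensitivities dF/dA are estimated by finite differences
    # here; the paper derives them analytically from energy-based sensitivity
    # functions and also allows length perturbations (not done in this sketch).
    areas = np.asarray(areas, dtype=float).copy()
    n = len(targets)
    for _ in range(iters):
        F = formants(areas, lengths, n=n)
        err = targets - F
        if np.max(np.abs(err) / targets) < 1e-3:
            break
        S = np.empty((n, areas.size))          # S[i, j] = dF_i / dA_j
        for j in range(areas.size):
            pert = areas.copy()
            pert[j] *= 1.05                    # 5 % perturbation of one section
            S[:, j] = (formants(pert, lengths, n=n) - F) / (0.05 * areas[j])
        dA = np.linalg.lstsq(S, err, rcond=None)[0]
        areas = np.clip(areas + damping * dA, 1e-6, None)
    return areas, formants(areas, lengths, n=n)

if __name__ == "__main__":
    n_sec = 40
    areas = np.full(n_sec, 3.0e-4)             # uniform 3 cm^2 tube, ~17.5 cm long
    lengths = np.full(n_sec, 0.175 / n_sec)     # formants near 500/1500/2500 Hz
    print("initial formants:", formants(areas, lengths).round(1))
    new_areas, F = tune_area_function(areas, lengths,
                                      targets=np.array([600.0, 1600.0, 2600.0]))
    print("tuned formants:  ", F.round(1))

Because there are many more cross-sections than target formants, the damped least-squares update picks the minimum-norm area perturbation at each step, which keeps the tuned area function close to the original one, in the spirit of the small geometry variations the abstract describes.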
Document Type
Article
Document version
Accepted version
Language
English
Subject (CDU)
378 - Higher education. Universities. Academic study
62 - Engineering. Technology in general
8 - Language. Linguistics. Literature
Keywords
3D vocal tract
Pages
10 p.
Publisher
IEEE
Is part of
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024. Vol. 32, p. 2790-2799
Rights
© IEEE. All rights reserved