Publications

Conference Papers


Interpreting the Role of Visemes in Audio-Visual Speech Recognition

Published in Automatic Speech Recognition & Understanding (ASRU) 2025 Proceedings

Audio-Visual Speech Recognition (AVSR) models have surpassed their audio-only counterparts in performance. However, the interpretability of AVSR systems, particularly the role of the visual modality, remains under-explored. In this paper, we apply several interpretability techniques to examine how visemes are encoded in AV-HuBERT, a state-of-the-art AVSR model. First, we use t-distributed Stochastic Neighbour Embedding (t-SNE) to visualize learned features, revealing natural clustering driven by visual cues, which is further refined by the presence of audio. Then, we employ probing to show how audio contributes to refining feature representations, particularly for visemes that are visually ambiguous or under-represented. Our findings shed light on the interplay between modalities in AVSR and could point to new strategies for leveraging visual information to improve AVSR performance.

Recommended citation: Papadopoulos, A. and Harte, N. (2025). "Interpreting the Role of Visemes in Audio-Visual Speech Recognition."
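As a rough illustration of the t-SNE step described in the abstract, the minimal sketch below projects frame-level features into two dimensions and colours the points by viseme class. The feature matrix and viseme labels are random placeholders standing in for activations extracted from an AV-HuBERT encoder layer, and scikit-learn and Matplotlib are assumed to be available; this is a schematic of the general technique, not the paper's actual pipeline.

```python
# Sketch: t-SNE visualisation of frame-level features, coloured by viseme.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_frames, feat_dim, n_visemes = 2000, 768, 13  # hypothetical sizes

# Placeholder features standing in for one encoder layer's activations;
# a real analysis would extract these from AV-HuBERT.
features = rng.normal(size=(n_frames, feat_dim))
visemes = rng.integers(0, n_visemes, size=n_frames)  # hypothetical labels

# Project to 2-D. Perplexity 30 is a common default, not a tuned value.
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

plt.figure(figsize=(6, 5))
sc = plt.scatter(embedded[:, 0], embedded[:, 1], c=visemes, cmap="tab20", s=4)
plt.colorbar(sc, label="viseme class")
plt.title("t-SNE of layer features, coloured by viseme")
plt.tight_layout()
plt.show()
```

With real activations, clusters of same-coloured points would indicate that frames sharing a viseme occupy nearby regions of the feature space, which is the kind of structure the abstract describes.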

What does an Audio-Visual Speech Recognition Model know about Visemes?

Published in UK & Ireland Speech Workshop 2025 Book of Abstracts

Phonemes are the basic unit of speech, and numerous studies have explored the inner workings of end-to-end transformer-based speech models, though they have mainly focused on Automatic Speech Recognition (ASR). These studies have shown that phonemes are captured and encoded to a significant degree within the encoder layers, using methodologies such as probing and similarity measures, among others. Considerably less investigation has been done into the interpretability of Audio-Visual Speech Recognition (AVSR) models. In particular, no work has explored what AVSR models learn about visemes, the visual equivalent of phonemes. Our work therefore takes the concepts developed for ASR and applies them to AV-HuBERT, performing a thorough analysis to establish what the model learns about visemes.

Recommended citation: Papadopoulos, A. (2025). "What does an Audio-Visual Speech Recognition Model know about Visemes?"
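For readers unfamiliar with the probing methodology mentioned above, the sketch below shows the general idea under the same caveats as before: a simple linear classifier is trained on frozen per-layer features to predict viseme labels, and higher probe accuracy suggests that a layer makes viseme information more linearly accessible. The layer count, dimensions, features, and labels are all hypothetical stand-ins, not real AV-HuBERT activations.

```python
# Sketch: layer-wise linear probing for viseme information.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, feat_dim, n_visemes, n_layers = 2000, 768, 13, 12  # hypothetical sizes

labels = rng.integers(0, n_visemes, size=n_frames)  # hypothetical viseme labels

for layer in range(n_layers):
    # Placeholder activations for this layer; a real probe would use
    # frozen features extracted from the corresponding encoder layer.
    feats = rng.normal(size=(n_frames, feat_dim))
    X_tr, X_te, y_tr, y_te = train_test_split(
        feats, labels, test_size=0.2, random_state=0
    )

    # A linear probe: if a simple classifier can recover viseme labels,
    # the representation makes that information linearly accessible.
    probe = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
    print(f"layer {layer:2d}: probe accuracy = {probe.score(X_te, y_te):.3f}")
```

Comparing probe accuracy across layers is what lets this kind of analysis say where in the encoder viseme information is concentrated.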