The rapid advancement of speech-to-speech (S2S) large language models (LLMs) has significantly improved real-time spoken interaction. However, current evaluation frameworks remain inadequate for assessing performance in complex, multi-turn dialogues. To address this, we introduce MTalk-Bench, a multi-turn S2S benchmark covering three core dimensions: Semantic Information, Paralinguistic Information, and Ambient Sound. Each dimension includes nine realistic scenarios, along with targeted tasks that assess specific capabilities such as reasoning. Our dual-method evaluation framework combines Arena-style evaluation (pairwise comparison) with Rubrics-based evaluation (absolute scoring) for relative and absolute assessment. The benchmark includes both model and human outputs, evaluated by human evaluators and LLMs. Experimental results reveal two sets of findings. On the overall performance of S2S LLMs: (1) models excel at semantic information processing yet underperform on paralinguistic information and ambient sound perception; (2) models typically regain coherence by increasing response length, sacrificing efficiency in multi-turn dialogues; (3) modality-aware, task-specific designs outperform brute-force scaling. On the evaluation framework and its reliability: (1) Arena and Rubrics yield consistent, complementary rankings, but reliable distinctions emerge only when performance gaps are large; (2) LLM-as-a-judge aligns with human judgments when gaps are clear or criteria are explicit, but exhibits position and length biases and is reliable for nonverbal evaluation only when text annotations are provided. These results highlight current limitations in S2S evaluation and the need for more robust, speech-aware assessment frameworks.
MTalk-Bench spans Semantic, Paralinguistic, and Ambient dimensions, with a two-level capability taxonomy (foundational capabilities → fine-grained capabilities). Nine real-world scenarios are linked via a scenario-capability mapping to ensure ecological validity while keeping each dialogue diagnostic by design.
We use two complementary protocols: Arena-style evaluation (blind pairwise preferences aggregated into Elo ratings for relative ordering) and Rubrics-based evaluation (hierarchical, criteria-based scoring for absolute diagnostics). Both human raters and an LLM-as-a-judge conduct the evaluation.
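The Arena-style protocol above aggregates blind pairwise preferences into Elo ratings. A minimal sketch of that aggregation step is shown below; the K-factor, initial rating, and model names are illustrative assumptions, not values specified by the benchmark.

```python
# Sketch: Elo aggregation over blind pairwise preferences for
# Arena-style relative ordering. K=32 and the 1000-point starting
# rating are conventional defaults, assumed here for illustration.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Apply one pairwise preference (winner over loser) in place."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - e_w)
    ratings[loser] -= k * (1.0 - e_w)

# Usage: fold a stream of blind pairwise judgments into ratings,
# then read off the relative ordering.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
for winner, loser in [("model_a", "model_b"), ("model_a", "model_b")]:
    update_elo(ratings, winner, loser)
ranking = sorted(ratings, key=ratings.get, reverse=True)
```

Because each update is zero-sum, the total rating mass is conserved; only the relative ordering of models carries meaning.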
Demo audio samples illustrate nine task capabilities: Comprehension & Memory, Reasoning & Execution, Security & Assessment, Emotion Recognition, Personalized Modeling, Paralinguistic Feature Generation, Ambient Sound Understanding, and Multi-party Understanding.

@article{mtalkbench2025,
title={MTalk-Bench: Evaluating Speech-to-Speech Models in Multi-Turn Dialogues via Arena-style and Rubrics Protocols},
author={Du, Yuhao and Huang, Qianwei and Zhu, Guo and Dai, Zhanchen and Chen, Sunian and Zhu, Qiming and Zhang, Yuhao and Zhou, Li and Wang, Benyou},
year={2025},
url={https://freedomintelligence.github.io/MTalk-Bench/}
}