MTalk-Bench: Evaluating Speech-to-Speech Models in Multi-Turn Dialogues via Arena-style and Rubrics Protocols

Yuhao Du*   Qianwei Huang*   Guo Zhu   Zhanchen Dai   Sunian Chen
Qiming Zhu   Yuhao Zhang   Li Zhou   Benyou Wang
School of Data Science, The Chinese University of Hong Kong, Shenzhen
https://freedomintelligence.github.io/MTalk-Bench/
* Equal contribution. Corresponding author.

Abstract

The rapid advancement of speech-to-speech (S2S) large language models (LLMs) has significantly improved real-time spoken interaction. However, current evaluation frameworks remain inadequate for assessing performance in complex, multi-turn dialogues. To address this, we introduce MTalk-Bench, a multi-turn S2S benchmark covering three core dimensions: Semantic Information, Paralinguistic Information, and Ambient Sound. Each dimension includes nine realistic scenarios, along with targeted tasks that assess specific capabilities such as reasoning. Our dual-method evaluation framework combines Arena-style evaluation (pairwise comparison) with Rubrics-based evaluation (absolute scoring) for relative and absolute assessment. The benchmark includes both model and human outputs, evaluated by human evaluators and LLMs. Experimental results reveal two sets of findings. On the overall performance of S2S LLMs: (1) models excel at semantic information processing yet underperform on paralinguistic information and ambient sound perception; (2) models typically regain coherence by increasing response length, sacrificing efficiency in multi-turn dialogues; (3) modality-aware, task-specific designs outperform brute-force scaling. On the evaluation framework and its reliability: (1) the Arena and Rubrics protocols yield consistent, complementary rankings, but reliable distinctions emerge only when performance gaps are large; (2) LLM-as-a-judge aligns with human judgments when gaps are clear or criteria are explicit, but exhibits position and length biases and is reliable for nonverbal evaluation only when text annotations are provided. These results highlight current limitations of S2S evaluation and the need for more robust, speech-aware assessment frameworks.

Framework & Evaluation

MTalk-Bench spans Semantic, Paralinguistic, and Ambient dimensions, with a two-level capability taxonomy (foundational capabilities → fine-grained capabilities). Nine real-world scenarios are linked via a scenario-capability mapping to ensure ecological validity while keeping each dialogue diagnostic by design.

We use two complementary protocols: Arena-style evaluation (blind pairwise preferences aggregated into Elo ratings for relative ordering) and Rubrics-based evaluation (hierarchical, criteria-based scoring for absolute diagnostics). Both human raters and LLM-as-a-judge perform the evaluations.
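The Arena-style aggregation above can be sketched as a standard Elo update over pairwise outcomes. This is a minimal illustration, not the benchmark's exact implementation; the K-factor, initial rating, and the `update_elo` helper name are assumptions for the sketch.

```python
# Minimal Elo aggregation over blind pairwise preferences.
# K-factor and initial rating are illustrative assumptions.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(ratings: dict, battles, k: float = 32, init: float = 1000.0) -> dict:
    """battles: iterable of (model_a, model_b, winner), winner in {'a', 'b', 'tie'}."""
    for a, b, winner in battles:
        ra = ratings.setdefault(a, init)
        rb = ratings.setdefault(b, init)
        ea = expected_score(ra, rb)          # expected score for A
        sa = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]  # actual score for A
        ratings[a] = ra + k * (sa - ea)
        ratings[b] = rb + k * ((1 - sa) - (1 - ea))
    return ratings
```

Because each update depends on the current ratings, the final ordering is what matters; per-comparison magnitudes are sensitive to the K-factor and battle order.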

Audio Demos

Semantic

sample 1

Comprehension & Memory

sample 2

Reasoning & Execution

sample 3

Security & Assessment

Paralinguistic

sample 1

Emotion Recognition

sample 2

Personalized Modeling

sample 3

Paralinguistic Feature Generation

Ambient

sample 1

Ambient Sound Understanding

sample 2

Ambient Sound Understanding

sample 3

Multi-party Understanding

Results Analysis

Experiment Results — Model Performance

  • Dimension gap: Even top models have not yet reached high proficiency. They are strong in semantics overall but limited in specific semantic capabilities such as safety reasoning, and weak at auditory cues (paralinguistic and ambient sound perception).
  • Dialogue dynamics: Models recover from an early-turn quality dip by producing longer responses, trading efficiency for coherence. Beyond the minimal length needed for clarity, additional tokens often add filler rather than better answers.
  • Design matters: When paired with large model capacity, task-specific designs deliver bigger gains than scale alone.

Meta-Evaluation — Methods & Reliability

  • Arena × Rubrics: Broadly consistent and complementary — relative ordering vs. absolute diagnostics.
  • Small margins: When gaps are small, outcomes remain unstable even with more pairwise comparisons; robust conclusions appear with large gaps.
  • LLM-as-judge: Agreement with humans improves when differences are clear or criteria are explicit, but LLMs show position and length biases; for nonverbal acoustic cues, reliability improves with textual annotations.
  • Reliability: Good internal consistency across protocols; rubric-based assessments show stable correlations under ablations/bootstraps.
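The stability claim for rubric-based assessments can be probed with a simple bootstrap over per-dialogue scores. This is a hedged sketch of the general technique; the benchmark's actual resampling scheme, sample sizes, and the `bootstrap_mean_ci` helper name are assumptions.

```python
# Bootstrap confidence interval for a model's mean rubric score.
# Resample dialogues with replacement and collect the mean each time;
# a narrow interval indicates a stable rubric-based estimate.
import random

def bootstrap_mean_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Return a (1 - alpha) bootstrap CI for the mean of `scores`."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

If two models' intervals overlap substantially, their rubric scores should not be treated as a reliable ranking, which mirrors the small-margin instability observed in the Arena protocol.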

BibTeX

@article{mtalkbench2025,
  title={MTalk-Bench: Evaluating Speech-to-Speech Models in Multi-Turn Dialogues via Arena-style and Rubrics Protocols},
  author={Du, Yuhao and Huang, Qianwei and Zhu, Guo and Dai, Zhanchen and Chen, Sunian and Zhu, Qiming and Zhang, Yuhao and Zhou, Li and Wang, Benyou},
  year={2025},
  url={https://freedomintelligence.github.io/MTalk-Bench/}
}