# Model Card for ReVerT (Think, Verbalize, then Speak)

This model implements the ReVerT verbalizer, a core component of the Think-Verbalize-Speak (TVS) framework, introduced in the paper *Think, Verbalize, then Speak: Bridging Complex Thoughts and Comprehensible Speech*.
## Model Details

### Model Description
Spoken dialogue systems increasingly employ large language models (LLMs) to leverage their advanced reasoning capabilities. However, direct application of LLMs in spoken communication often yields suboptimal results due to mismatches between optimal textual and verbal delivery. While existing approaches adapt LLMs to produce speech-friendly outputs, their impact on reasoning performance remains underexplored.
The Think-Verbalize-Speak framework decouples reasoning from spoken delivery to preserve the full reasoning capacity of LLMs. Central to this method is verbalizing, an intermediate step that translates complex thoughts into natural, speech-ready text. This model, ReVerT, is a latency-efficient verbalizer based on incremental and asynchronous summarization. Experiments across multiple benchmarks show that this method enhances speech naturalness and conciseness with minimal impact on reasoning.
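To make the incremental, asynchronous pattern concrete, here is a minimal sketch in plain Python with `asyncio`. It is illustrative only: `think`, `verbalize`, and the TTS hand-off below are hypothetical placeholders, not the paper's implementation.

```python
# Illustrative sketch of incremental, asynchronous verbalization.
# Reasoning chunks are verbalized as they arrive, so speech output
# can begin before the full chain of thought is finished.
import asyncio

async def think():
    """Placeholder for a streaming reasoning LLM that yields thought chunks."""
    for chunk in ["Step 1: one apple costs $0.50.",
                  "Step 2: 7 * $0.50 = $3.50.",
                  "Answer: $3.50."]:
        await asyncio.sleep(0.1)  # simulate generation latency
        yield chunk

async def verbalize(chunk: str) -> str:
    """Placeholder for the ReVerT verbalizer: one chunk -> speech-ready text."""
    return chunk

async def main():
    queue: asyncio.Queue = asyncio.Queue()

    async def producer():
        async for chunk in think():
            # Verbalize each chunk as it arrives instead of waiting
            # for the complete reasoning trace.
            await queue.put(await verbalize(chunk))
        await queue.put(None)  # sentinel: reasoning is done

    async def speaker():
        while (utterance := await queue.get()) is not None:
            print(f"TTS <- {utterance}")  # hand the text to a speech synthesizer

    await asyncio.gather(producer(), speaker())

asyncio.run(main())
```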
- Developed by: Sang Hoon Woo, Sehun Lee, Kang-wook Kim, Gunhee Kim
- Model type: Qwen2ForCausalLM fine-tuned as a verbalizer for text generation.
- Language(s) (NLP): English
- License: No explicit license is given in the provided sources; please refer to the original project for license information.
- Finetuned from model: Qwen/Qwen2.5-0.5B-Instruct
### Model Sources
- Repository: https://github.com/yhytoto12/TVS-ReVerT
- Paper: https://huggingface.co/papers/2509.16028
- Project Page: https://yhytoto12.github.io/TVS-ReVerT
## News

- **2025.09.22** We released our paper on arXiv.
- **2025.09.19** We released the training code, datasets, models, and interactive demo.
- **2025.08.21** Our paper was accepted to EMNLP 2025!
## Uses

### Direct Use
This model is intended to be used as a "verbalizer" within a spoken dialogue system. Its primary purpose is to convert complex, often structured, "thoughts" generated by a Large Language Model into natural, concise, and speech-ready text that can then be fed into a Text-to-Speech (TTS) system. This ensures that the full reasoning capacity of the LLM is preserved while optimizing the output for verbal delivery.
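As a rough sketch of this usage with Hugging Face `transformers` (the prompt wording below is an assumption; the repository's demo defines the exact template used during fine-tuning):

```python
# Hedged sketch: verbalize a structured "thought" with the released checkpoint.
# The instruction text is a guess; consult the TVS repository for the
# prompt/template actually used during training.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yhytoto12/revert-Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A structured thought produced by an upstream reasoning LLM.
thought = (
    "Step 1: 3 apples cost $1.50, so one apple costs $0.50. "
    "Step 2: 7 apples cost 7 * $0.50 = $3.50. Answer: $3.50."
)

messages = [{"role": "user",
             "content": f"Verbalize the following reasoning as natural speech:\n{thought}"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```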
### Out-of-Scope Use
This model is not designed for direct end-to-end reasoning or speech synthesis. It specifically focuses on the text-to-text verbalization step. It should not be used as a standalone reasoning engine, nor should its outputs be directly consumed by users without further processing (e.g., TTS).
## Bias, Risks, and Limitations
- The model's performance and potential biases are influenced by the underlying base LLM (Qwen2.5) and the characteristics of the training datasets (GSM8k, 2WikiMultihopQA).
- While designed for naturalness and conciseness, the quality of verbalization might vary depending on the complexity and domain of the input "thoughts."
- The model's effectiveness is contingent on its integration into a larger Think-Verbalize-Speak framework, including a robust "Think" model and a speech synthesizer.
### Recommendations
Users should be aware of these limitations and consider the potential for biases inherited from the training data and base models. Thorough evaluation in target deployment scenarios is recommended, especially for sensitive applications.
## How to Get Started with the Model
You can try the interactive demo for the Think-Verbalize-Speak framework, which utilizes this ReVerT verbalizer. The setup instructions from the GitHub repository are provided below.
First, set up the environment:
```bash
git clone https://github.com/yhytoto12/TVS-ReVerT.git
cd TVS-ReVerT
conda create -n tvs python=3.10
conda activate tvs
pip install -r requirements.txt

# Use flash attention for faster training and inference (optional)
pip install -U flash-attn --no-build-isolation

# For deepspeed training (optional)
pip install deepspeed
```
Then, run the interactive demo using one of the following commands:
Using OpenAI models as the Think model:
```bash
python demo.py --think_model <openai_model_name> --verbalize_model yhytoto12/revert-Qwen2.5-0.5B --use_openai_think
```
Using local models as the Think model (with vLLM backend): First, start the vLLM backend in one terminal:
```bash
python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-7B-Instruct --host 0.0.0.0 --port 8000
```
Then, run the demo in a separate terminal:
```bash
python demo.py --think_model Qwen/Qwen2.5-7B-Instruct --verbalize_model yhytoto12/revert-Qwen2.5-0.5B --vllm_url http://localhost:8000/v1
```
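Since the backend exposes an OpenAI-compatible endpoint at `/v1`, the local Think model can also be queried directly with the standard `openai` client. A generic sketch (not a repo-specific interface; the dummy API key follows the usual vLLM convention):

```python
# Generic sketch: query the local vLLM backend through its
# OpenAI-compatible API to obtain a raw "thought" for verbalization.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",
    messages=[{"role": "user",
               "content": "If 3 apples cost $1.50, how much do 7 apples cost?"}],
)
print(response.choices[0].message.content)  # complex reasoning, pre-verbalization
```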
## Training Details

### Training Data
The ReVerT verbalizer models were trained on specialized datasets of thought-verbalization pairs built from GSM8k and 2WikiMultihopQA. These datasets are released on Hugging Face; see the project repository for links.
### Training Procedure
Training scripts for the various models discussed in the paper, including the ReVerT verbalizer, are provided in the GitHub repository under the `scripts/` directory. The default base model for training is `Qwen/Qwen2.5-3B-Instruct`, which can be modified within the training scripts.
Example script for training the TVS (ReVerT) model:

```bash
bash scripts/train_tvs_revert.sh -g <num_gpus>
```
#### Training Hyperparameters
Specific training hyperparameters can be found in the `scripts/train_tvs_revert.sh` script and its associated configuration files in the GitHub repository.
## Evaluation
The paper details experiments across multiple benchmarks showing that the Think-Verbalize-Speak method, including ReVerT, enhances speech naturalness and conciseness with minimal impact on reasoning performance. Refer to the paper for comprehensive evaluation protocols and results.
## Citation

If you find our project useful for your research and applications, please cite it using this BibTeX:
```bibtex
@inproceedings{tvs2025@woolee,
  title={Think, Verbalize, then Speak: Bridging Complex Thoughts and Comprehensible Speech},
  author={Sang Hoon Woo and Sehun Lee and Kang-wook Kim and Gunhee Kim},
  booktitle={Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
  year={2025}
}
```