Oute AI
Important Sampling Considerations
When using OuteTTS version 1.0, it is crucial to use the settings specified in the Sampling Configuration section. The repetition penalty implementation is particularly important: this model requires the penalty to be applied over a recent 64-token window, rather than across the entire context window. Penalizing the entire context will cause the model to produce broken or low-quality output.
To meet this requirement, all necessary samplers and patches for all supported backends are set up automatically in the outetts library. If you are using a custom implementation, make sure you implement these requirements correctly.
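If you are building a custom pipeline on top of Hugging Face transformers, the windowed penalty can be implemented as a logits processor. The sketch below is illustrative rather than the outetts implementation: the class name and defaults are ours, and it follows the standard transformers repetition-penalty convention, restricted to the most recent 64 tokens.

```python
import torch
from transformers import LogitsProcessor

class WindowedRepetitionPenaltyProcessor(LogitsProcessor):
    """Illustrative sketch: apply a repetition penalty over only the most recent tokens."""

    def __init__(self, penalty: float = 1.1, window: int = 64):
        self.penalty = penalty
        self.window = window

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # Penalize only tokens that appeared in the last `window` positions,
        # not the whole context.
        recent = input_ids[:, -self.window:]
        penalized = torch.gather(scores, 1, recent)
        # Standard transformers convention: divide positive logits, multiply negative ones.
        penalized = torch.where(penalized < 0, penalized * self.penalty, penalized / self.penalty)
        return scores.scatter(1, recent, penalized)
```

An instance can be passed to `model.generate` via `logits_processor`, together with the values from the Sampling Configuration section.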
OuteTTS Version 1.0
This update brings significant improvements in speech synthesis and voice cloning—delivering a more powerful, accurate, and user-friendly experience in a compact size.
OuteTTS Python Package v0.4.2
This version adds batched inference generation for the latest OuteTTS release.
⚡ Batched RTF Benchmarks
Tested on an NVIDIA L40S GPU.
Quick Start Guide
Getting started with OuteTTS is simple:
Installation
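The package is published on PyPI as `outetts`:

```bash
pip install outetts
```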
Basic Setup
```python
from outetts import Interface, ModelConfig, GenerationConfig, Backend, Models

# Initialize the interface
interface = Interface(
    ModelConfig.auto_config(
        model=Models.VERSION_1_0_SIZE_0_6B,
        backend=Backend.HF,
    )
)

# Load the default English speaker profile
speaker = interface.load_default_speaker("EN-FEMALE-1-NEUTRAL")

# Or create your own speaker (run this once)
# speaker = interface.create_speaker("path/to/audio.wav")
# interface.save_speaker(speaker, "speaker.json")

# Load your speaker from a saved file
# speaker = interface.load_speaker("speaker.json")

# Generate speech and save to file
output = interface.generate(
    GenerationConfig(
        text="Hello, how are you doing?",
        speaker=speaker,
    )
)
output.save("output.wav")
```
⚡ Batch Setup
```python
from outetts import Interface, ModelConfig, GenerationConfig, Backend, GenerationType

if __name__ == "__main__":
    # Initialize the interface with a batch-capable backend
    interface = Interface(
        ModelConfig(
            model_path="OuteAI/OuteTTS-1.0-0.6B-FP8",
            tokenizer_path="OuteAI/OuteTTS-1.0-0.6B",
            backend=Backend.VLLM,
            # For EXL2, use backend=Backend.EXL2ASYNC and set exl2_cache_seq_multiply
            # to the same value as max_batch_size in GenerationConfig.
            # For LLAMACPP_ASYNC_SERVER, use backend=Backend.LLAMACPP_ASYNC_SERVER
            # and provide server_host in GenerationConfig.
        )
    )

    # Load your speaker profile
    speaker = interface.load_default_speaker("EN-FEMALE-1-NEUTRAL")  # Or load/create a custom speaker

    # Generate speech using BATCH type
    # Note: for EXL2ASYNC, VLLM, and LLAMACPP_ASYNC_SERVER, BATCH is selected automatically.
    output = interface.generate(
        GenerationConfig(
            text="This is a longer text that will be automatically split into chunks and processed in batches.",
            speaker=speaker,
            generation_type=GenerationType.BATCH,
            max_batch_size=32,        # Adjust based on your GPU memory and server capacity
            dac_decoding_chunk=2048,  # Adjust chunk size for DAC decoding
            # If using LLAMACPP_ASYNC_SERVER, add:
            # server_host="http://localhost:8000"  # Replace with your server address
        )
    )

    # Save to file
    output.save("output_batch.wav")
```
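The LLAMACPP_ASYNC_SERVER backend expects a llama.cpp server already running at the address you pass as server_host. A minimal sketch using llama.cpp's llama-server binary; the GGUF file name is illustrative and depends on the quantization you download:

```bash
# Serve a GGUF build of the model on port 8000 (matching server_host above)
llama-server -m OuteTTS-1.0-0.6B-Q8_0.gguf --host 0.0.0.0 --port 8000
```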
More Configuration Options
For advanced settings and customization, visit the official OuteTTS repository.
Multilingual Capabilities
Trained Languages: English, Chinese, Dutch, French, Georgian, German, Hungarian, Italian, Japanese, Korean, Latvian, Polish, Russian, Spanish
Beyond Supported Languages: The model can generate speech in untrained languages with varying success. Experiment with unlisted languages, though results may not be optimal.
Usage Recommendations
Speaker Reference
The model is designed to be used with a speaker reference. Without one, it generates random vocal characteristics, often leading to lower-quality output. The model inherits the referenced speaker's emotion, style, and accent. When generating speech in other languages with the same speaker, you may find that the model retains the original accent.
Multilingual Application
It is recommended to create a speaker profile in the language you intend to use. This helps achieve the best results in that specific language, including tone, accent, and linguistic features.
While the model supports cross-lingual speech, it still relies on the reference speaker. If the speaker has a distinct accent—such as British English—other languages may carry that accent as well.
Optimal Audio Length
- Best Performance: Generate audio of around 42 seconds in a single run (approximately 8,192 tokens). It is recommended not to approach the limit of this window when generating; the best results usually come from staying under about 7,000 tokens.
- Context Reduction with Speaker Reference: The speaker reference consumes part of the context (8,192 tokens ≈ 42 seconds, or roughly 195 tokens per second of audio), so a 10-second reference reduces the effective generation window to approximately 32 seconds.
Temperature Setting Recommendations
Testing shows that a temperature of 0.4 is an ideal starting point for accuracy (with the sampling settings below). However, some voice references may benefit from higher temperatures for enhanced expressiveness or slightly lower temperatures for more precise voice replication.
Verifying Speaker Encoding
If the cloned voice quality is subpar, check the encoded speaker sample.
```python
interface.decode_and_save_speaker(speaker=your_speaker, path="speaker.wav")
```
The DAC audio reconstruction model is lossy, and samples with clipping, excessive loudness, or unusual vocal features may introduce encoding issues that impact output quality.
Sampling Configuration
For optimal results with this TTS model, use the following sampling settings.
| Parameter | Value |
| --- | --- |
| Temperature | 0.4 |
| Repetition Penalty | 1.1 |
| Repetition Range | 64 |
| Top-k | 40 |
| Top-p | 0.9 |
| Min-p | 0.05 |
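In the outetts library, these values can be supplied per generation through the sampler configuration. A minimal sketch, assuming a SamplerConfig with field names matching the table above (verify the exact names against your installed package version); `interface` and `speaker` are set up as in the Quick Start:

```python
from outetts import GenerationConfig, SamplerConfig

output = interface.generate(
    GenerationConfig(
        text="Hello, how are you doing?",
        speaker=speaker,
        sampler_config=SamplerConfig(
            temperature=0.4,        # accuracy-oriented starting point
            repetition_penalty=1.1,
            repetition_range=64,    # penalize only the recent 64-token window
            top_k=40,
            top_p=0.9,
            min_p=0.05,
        ),
    )
)
```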
📊 Model Specifications
| Model | Training Data | Context Length | Supported Languages |
| --- | --- | --- | --- |
| Llama-OuteTTS-1.0-1B | 60k hours of audio | 8,192 tokens | 23+ languages |
| OuteTTS-1.0-0.6B | 20k hours of audio | 8,192 tokens | 14+ languages |
Acknowledgments
- Audio encoding and decoding utilize ibm-research/DAC.speech.v1.0
- OuteTTS is built with Qwen3 0.6B as the base model, with continued pre-training and fine-tuning.
- Datasets used: Multilingual LibriSpeech (MLS) (CC BY 4.0), Common Voice Corpus (CC0)
Ethical Use Guidelines
Intended Purpose: This model is intended for legitimate applications that enhance accessibility, creativity, and communication.
Prohibited Uses:
- Impersonation of individuals without their explicit, informed consent.
- Creation of deliberately misleading, false, or deceptive content (e.g., "deepfakes" for malicious purposes).
- Generation of harmful, hateful, harassing, or defamatory material.
- Voice cloning of any individual without their explicit prior permission.
- Any uses that violate applicable local, national, or international laws, regulations, or copyrights.
Responsibility: Users are responsible for the content they generate and how it is used. We encourage thoughtful consideration of the potential impact of synthetic media.