🎡 Kokoro NPU Quantized TTS Models

🎡 NPU-Optimized Text-to-Speech Models

NPU-optimized text-to-speech models for AMD Ryzen AI hardware.

Models Included

  • kokoro-npu-quantized-int8.onnx (121.9 MB) - INT8 quantization for fast NPU inference (see the inspection sketch below this list)
  • kokoro-npu-fp16.onnx (169.8 MB) - FP16 precision for higher output quality
  • voices-v1.0.bin (26.9 MB) - Voice embeddings
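
To sanity-check one of the ONNX files before wiring it into the engine, you can read its graph signature with plain onnxruntime. This is a minimal sketch, assuming the model file has been downloaded to the working directory and onnxruntime is installed; it is not part of the Unicorn Execution Engine API.

import onnxruntime as ort

# Load the INT8 model on CPU just to read its graph signature
# (no NPU is required for this inspection step).
session = ort.InferenceSession(
    "kokoro-npu-quantized-int8.onnx",
    providers=["CPUExecutionProvider"],
)

# Print the tensors the model expects and produces.
for tensor in session.get_inputs():
    print("input: ", tensor.name, tensor.shape, tensor.type)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape, tensor.type)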

Hardware Requirements

  • NPU: AMD Phoenix NPU (Ryzen AI)
  • Memory: 1GB+ available
  • Framework: Unicorn Execution Engine
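
To confirm that your onnxruntime build actually exposes an NPU execution provider before pointing the engine at the hardware, a quick check like the one below can help. The provider name "VitisAIExecutionProvider" is an assumption for AMD Ryzen AI builds and is not defined by this model card; adjust it to whatever your runtime reports.

import onnxruntime as ort

# List the execution providers compiled into this onnxruntime build.
providers = ort.get_available_providers()
print("Available providers:", providers)

# Assumed provider name for AMD Ryzen AI NPUs; may differ per build.
if "VitisAIExecutionProvider" in providers:
    print("NPU execution provider detected.")
else:
    print("No NPU provider found - inference would fall back to CPU.")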

Quick Start

from unicorn_execution_engine import UnicornTTS

# Initialize NPU-accelerated TTS
tts = UnicornTTS(model="kokoro-npu-quantized")

# Generate speech with NPU acceleration
audio = tts.synthesize("Hello, this is NPU-accelerated speech!")
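
If you want to persist the result, a hedged follow-up sketch is shown below. It assumes synthesize returns a one-dimensional NumPy float array and that the output sample rate is 24 kHz; neither is stated on this card, so check the Unicorn Execution Engine documentation for the actual return type and rate.

import soundfile as sf

# Write the synthesized waveform to disk (assumed 24 kHz sample rate).
sf.write("hello_npu.wav", audio, samplerate=24000)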

🎡 NPU-Accelerated Text-to-Speech
⚡ Powered by Unicorn Execution Engine
