Daggr UI version of the Qwen3-TTS demo 🔥 with custom voice, voice design, qwen3-asr, and voice cloning nodes. No remote Spaces are used for API inference; all functions run in-app. Powered by t4-m and built with [email protected] and gradio@6.
🚀 Geilim-1B-Instruct — Implicit Deep Reasoning, Zero Verbosity

NoesisLab/Geilim-1B-Instruct
https://huggingface.co/collections/NoesisLab/geilim-large-language-models

No <think> tags. No long CoT. Reasoning happens inside the hidden states, not in the output.

What's different
🧠 Implicit reasoning: deep causal reasoning without exposing chains
🕸️ ASPP (Adjacency-Structured Parallel Propagation): parent-only causal graph, O(n) message passing
🌀 π-flow: internal probability-space refinement instead of token-level deliberation
⚙️ Hybrid gating: learns when to use structure vs. attention

Why it matters
- Lower latency and token cost
- Cleaner, production-ready outputs
- CoT-level reasoning depth without the verbosity tax

Built on Llama-3.2-1B-Instruct, trained for math, logic, and commonsense reasoning. Designed for small-model reasoning at the edge.

#ImplicitReasoning #SmallLLM #EfficientAI #ReasoningModels #ASPP #PiFlow
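To make the O(n) claim concrete, here is a toy sketch of parent-only causal message passing in plain Python. This is illustrative only, not Geilim's actual implementation: the blend weight `w_child` and the linear update rule are assumptions; the real model would use learned gates and vector hidden states.

```python
def aspp_propagate(hidden, parent, w_child=0.5):
    """Parent-only causal message passing (ASPP-style toy sketch).

    Each position i receives a message from at most one earlier
    position, parent[i] (None marks a root). A single left-to-right
    pass touches each parent edge exactly once, so the total cost is
    O(n), versus the O(n^2) pairwise interactions of full attention.
    """
    out = []
    for i, h in enumerate(hidden):
        p = parent[i]
        if p is None:
            out.append(h)  # root node: no incoming message
        else:
            assert p < i, "causal graph: a parent must precede its child"
            # blend the already-updated parent state into this node
            out.append((1 - w_child) * out[p] + w_child * h)
    return out

# toy chain: token 0 is the root, each later token points at the previous one
print(aspp_propagate([1.0, 2.0, 4.0], [None, 0, 1]))  # [1.0, 1.5, 2.75]
```

Because each node reads only its single parent, the pass is trivially parallelizable by tree depth, which is the structural idea behind propagating "reasoning" through hidden states instead of emitting chain-of-thought tokens.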
KittenTTS Nano is a lightweight, CPU-only text-to-speech model designed to prove that natural, expressive voices don't require massive cloud stacks or GPUs. At roughly 15M parameters, it runs fast on modest hardware, supports multiple expressive voices, and exposes simple controls for pacing and tone. This makes it ideal for edge devices, demos, and anyone who wants full control over TTS without latency, lock-in, or infrastructure overhead.