---
title: LLM Forest Orchestra
emoji: 📈
colorFrom: green
colorTo: green
sdk: gradio
sdk_version: 5.44.1
app_file: app.py
pinned: true
license: apache-2.0
---

# LLM Forest Orchestra — Hugging Face Space (Gradio)

This Space turns transformer **hidden states** and **attentions** into a layered **MIDI composition**.

## Features

- Two compute modes:
  - **Full model**: loads a base HF model (default: `unsloth/Qwen3-14B-Base`) and extracts its internals.
  - **Mock latents**: a fast, CPU-friendly demo mode that synthesizes tensors so you can preview the music system.
- Musical controls: scale selection (with custom notes), ticks-per-beat grid, velocity range, and instrument/role presets.
- Exports a `.mid` file you can drop into any DAW (see the MIDI sketch at the end of this README).

## Files

- `app.py` — Gradio app entry point.
- `requirements.txt` — Python dependencies.
- `README.md` — You are here.

## Hardware / tips

- `unsloth/Qwen3-14B-Base` is **large**. On CPU-only Spaces, use **Mock latents**. For Full model, pick a smaller base model or provision a GPU Space.
- Models must support `output_hidden_states=True` and `output_attentions=True` (see the extraction sketch at the end of this README).
- MIDI channels are limited to 16; the UI caps layers at 6 for headroom.

## Inspiration

This project is inspired by the way **mushrooms and mycelial networks in forests** connect plants and trees, forming a living web of communication and resource sharing. These connections can be turned into ethereal music. Just as signals move through these hidden pathways, transformer models pass hidden states and attentions across their layers. Here, those internal signals are translated into **music**, analogous to the forest's secret orchestra.
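
## Appendix: extraction sketch

As a rough illustration of the "Full model" mode, the snippet below shows how hidden states and attentions can be pulled from a Hugging Face model with `output_hidden_states=True` and `output_attentions=True`. This is a minimal sketch, not the app's actual code; `gpt2` stands in for the (much larger) default base model so the example runs on CPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in for the default base model (an assumption for illustration);
# any causal LM that returns hidden states and attentions will do.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer("The forest hums beneath the soil.", return_tensors="pt")
with torch.no_grad():
    outputs = model(
        **inputs,
        output_hidden_states=True,
        output_attentions=True,
    )

# hidden_states: tuple of (num_layers + 1) tensors, each [batch, seq_len, hidden_dim]
# attentions:    tuple of num_layers tensors, each [batch, num_heads, seq_len, seq_len]
print(len(outputs.hidden_states), outputs.hidden_states[-1].shape)
print(len(outputs.attentions), outputs.attentions[-1].shape)
```

These per-layer tensors are the raw material the app maps onto notes, velocities, and instrument roles.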
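
## Appendix: MIDI export sketch

The note-mapping logic itself lives in `app.py`. As a rough illustration of the export step only, here is a minimal sketch using the `mido` library (an assumption; the app may use a different MIDI backend) that writes a short arpeggio on channel 0 to a `.mid` file you can open in a DAW.

```python
from mido import Message, MidiFile, MidiTrack

PPQ = 480  # ticks per beat (the app exposes this as a grid setting)

mid = MidiFile(ticks_per_beat=PPQ)
track = MidiTrack()
mid.tracks.append(track)

# Pick an instrument for channel 0 (program 0 = Acoustic Grand Piano).
track.append(Message("program_change", program=0, channel=0, time=0))

# A short C-major arpeggio: each note_on is followed by a note_off one beat later.
for note in [60, 64, 67, 72]:
    track.append(Message("note_on", note=note, velocity=80, channel=0, time=0))
    track.append(Message("note_off", note=note, velocity=0, channel=0, time=PPQ))

mid.save("forest_orchestra_demo.mid")
```

In the real app, each transformer layer would get its own track/channel and instrument role, which is why the UI caps layers at 6 well below the 16-channel MIDI limit.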