Miko X Tweet Ensemble: multi-base, router-driven LoRA stack
This model has been trained using Miko, the fully autonomous AI agent for Miko Protocol.

What it is. Miko is a multi-base, multi-adapter ensemble built for X/Twitter. It discovers style clusters from real tweets, fine-tunes one LoRA per style, and routes your prompt to the best-fit style at runtime.
Why it's different
Multi-base adapters by design. Not tied to a single model family. Style adapters originate from multiple bases: Qwen/Qwen3-14B, mistralai/Mistral-Nemo-Instruct-2407, google/gemma-2-9b-it, meta-llama/Meta-Llama-3.1-8B-Instruct, and microsoft/Phi-3.5-mini-instruct.
X-native behavior. Short form, emoji/hashtag cadence, memes/irony, and fast "CT" (Crypto Twitter) tone.
Router that understands styles. Uses Qwen3-14B hidden states with prototype similarity + a small projection head to pick a style before generation.
Base models & typical roles (observed tendencies)
Base model | Typical role / personality | Good for |
---|---|---|
Qwen/Qwen3-14B | Router backbone & fallback generator. Balanced, hashtag-friendly. | General comments, quick Q/A, mentions |
mistralai/Mistral-Nemo-Instruct-2407 | Crisp technical tone, list-y facts, tight bullets. | Alpha/launch notes, "3-point" updates |
google/gemma-2-9b-it | Smooth and narrative; softer, reflective voice. | Story-like replies, mini-threads |
meta-llama/Meta-Llama-3.1-8B-Instruct | Clear directives / neutral composition. | How-to tweets, best practices |
microsoft/Phi-3.5-mini-instruct | Snappy one-liners; memes/emoji friendly. | Witty hooks, irony, punchy replies |
(Roles are tendencies learned from tweet data; they're not hard rules.)
Training data
Proprietary: Miko Agent Tweet Corpus. Tweets authored by the fully autonomous X (Twitter) agent Miko (@project_miko), collected from the live account's public timeline and agent logs under the account owner's control.
- Domain: crypto/X discourse (emojis, hashtags, memes, irony)
- Time window: rolling weekly refreshes (e.g., 7–14 days)
- Redistribution: the raw dataset is not redistributed; only model weights are shared
- Preprocessing: light normalization/filters and deduplication; style clustering via HDBSCAN
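The light normalization and deduplication mentioned above could look roughly like the sketch below. The function names (`normalize_tweet`, `dedup`) and the exact filters (URL stripping, whitespace collapsing, case-insensitive keys) are illustrative assumptions, not the released pipeline:

```python
import re
import unicodedata

def normalize_tweet(text: str) -> str:
    """Hypothetical light normalization: Unicode NFC, drop URLs, collapse whitespace."""
    text = unicodedata.normalize("NFC", text)
    text = re.sub(r"https?://\S+", "", text)   # strip links before comparing
    return re.sub(r"\s+", " ", text).strip()

def dedup(tweets):
    """Keep the first occurrence of each normalized, lowercased tweet."""
    seen, out = set(), []
    for t in tweets:
        key = normalize_tweet(t).lower()
        if key and key not in seen:            # empty keys (e.g., link-only tweets) are dropped
            seen.add(key)
            out.append(t)                      # original text is preserved
    return out
```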
How it works (high-level)
- Style discovery → cluster tweet embeddings (e.g., HDBSCAN) to assign style IDs.
- Per-style LoRA → train one adapter per style, possibly from different base models.
- Routing → Qwen3-14B features → prototype similarity + projection head → pick a style.
- Generation → load the chosen base, attach the matching LoRA, and generate with a light `<style_{id}>` tag.
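The routing step can be sketched as below, assuming a pooled hidden-state vector from the router backbone, a learned projection matrix, and one prototype vector per style. All names here (`route_style`, `projection`, `prototypes`) are illustrative, not the repo's API:

```python
import numpy as np

def route_style(hidden: np.ndarray, projection: np.ndarray, prototypes: np.ndarray) -> int:
    """Pick a style ID by prototype similarity.

    hidden:     (d,)           pooled backbone hidden state for the prompt
    projection: (d, p)         small learned projection head
    prototypes: (n_styles, p)  one learned prototype per style
    """
    z = hidden @ projection                                            # project into router space
    z = z / (np.linalg.norm(z) + 1e-8)                                 # L2-normalize query
    protos = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sims = protos @ z                                                  # cosine similarity per style
    return int(np.argmax(sims))                                        # best-fit style ID
```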
Quickstart
```python
from inference import MikoEnsemble

ens = MikoEnsemble(".")
print(ens.generate("CT keeps fading this rally. What's your take?"))
```
Force a style (advanced)
```python
def generate_with_style(ens, sid, prompt, **gen):
    """Bypass the router and force style `sid`."""
    styled = f"<style_{sid}>{prompt}"
    # Load the style's base model and attach its LoRA adapter.
    model, tok = ens._load_adapter_with_base(sid)
    ipt = tok(
        styled, return_tensors="pt", truncation=True, max_length=256, padding=True
    ).to(model.device)
    out = model.generate(
        **ipt,
        max_new_tokens=gen.get("max_new_tokens", 120),
        temperature=gen.get("temperature", 0.8),
        do_sample=True,
        top_p=0.95,
        pad_token_id=tok.pad_token_id,
        eos_token_id=tok.eos_token_id,
    )
    # Strip the echoed prompt so only the completion is returned.
    return tok.decode(out[0], skip_special_tokens=True).replace(styled, "").strip()
```
VRAM & speed tips
- 4-bit quantization (nf4, double-quant, bf16 compute) is supported; 16–24 GB of VRAM is enough for one adapter at a time.
- A small LRU cache keeps recently used styles in memory (default 2).
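The LRU behavior described above can be sketched with a small wrapper. `AdapterCache` and its injected `loader` callable are illustrative, not the repo's actual classes:

```python
from collections import OrderedDict

class AdapterCache:
    """Tiny LRU cache for loaded (base + LoRA) models, default capacity 2."""

    def __init__(self, loader, capacity: int = 2):
        self.loader = loader          # callable: style_id -> loaded model
        self.capacity = capacity
        self._cache = OrderedDict()   # insertion order tracks recency

    def get(self, style_id):
        if style_id in self._cache:
            self._cache.move_to_end(style_id)     # mark as most recently used
            return self._cache[style_id]
        model = self.loader(style_id)             # expensive: load base + attach LoRA
        self._cache[style_id] = model
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)       # evict least recently used style
        return model
```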
Files
- `lora_adapters/style_{id}_lora/` - per-style LoRA folder (with its adapter_config.json)
- `router/router_state.pt` - router head (prototypes + projection)
- `inference.py` - lazy loader + generator
- `README_METADATA.json` - style IDs, number of styles, base list, timestamp
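A minimal sketch of reading the metadata file; the field names `style_ids` and `base_models` are assumptions about its schema, not the documented format:

```python
import json
import pathlib

def load_style_metadata(root: str = "."):
    """Return (style IDs, base model list) from README_METADATA.json.

    The keys 'style_ids' and 'base_models' are hypothetical; check the
    actual file in the repo for its real schema.
    """
    meta = json.loads((pathlib.Path(root) / "README_METADATA.json").read_text())
    return meta.get("style_ids", []), meta.get("base_models", [])
```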
Intended use (tweet personas)
- Witty/ironic one-liners – hooks, memes, playful replies
- Tech/alpha notes – launch takeaways, bullet summaries, link threads
- Narrative reframing – bullish/bearish angles, story-style posts
- Q&A / reply bots – short, clear responses in mentions/threads
Limitations
- Optimized for tweets & short threads; not a general chatbot.
- Each base model retains its own license/terms.
License
Apache-2.0.
Acknowledgements
Thanks to the Qwen, Mistral, Gemma-2, Llama-3.1, and Phi-3.5 communities.
Changelog
- 2025-09-25: weekly refresh (days=7); retrained adapters & router.