# tle-orbit-explainer
A LoRA adapter for Qwen1.5-7B that translates raw two-line element sets (TLEs) into natural-language orbit explanations, decay-risk scores, and anomaly flags for general space-awareness workflows.
## Model Details

### Model Description

| Field | Value |
|---|---|
| **Developed by** | Jack Al-Kahwati / Stardrive |
| **Funded by** | Self-funded |
| **Shared by** | jackal79 (Hugging Face) |
| **Model type** | LoRA adapter (`peft==0.10.0`) |
| **Languages** | English |
| **License** | TLE-Orbit-NonCommercial v1.0 (custom terms) |
| **Finetuned from** | Qwen/Qwen1.5-7B |
### Model Sources

- **Repository:** https://huggingface.co/jackal79/tle-orbit-explainer
## Uses

### Direct Use
- Quick summarization of satellite orbital states for analysts
- Plain-language TLE explanations for educational purposes
- Offline labeling of TLE datasets with orbital classifications (see the sketch below)
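A minimal batch-labeling sketch, assuming the `pipe` text-generation pipeline built in "How to Get Started" below and the `### Prompt:` / `### Reasoning:` format shown there; `label_tles` and the `tles` input are hypothetical names for illustration:

```python
# Hypothetical batch-labeling helper. `pipe` is the text-generation pipeline
# from "How to Get Started"; `tles` is a list of (line1, line2) TLE string pairs.
def label_tles(pipe, tles, max_new_tokens=120):
    labels = []
    for line1, line2 in tles:
        prompt = f"### Prompt:\n{line1}\n{line2}\n### Reasoning:\n"
        out = pipe(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]
        labels.append(out[len(prompt):].strip())  # keep only the generated tail
    return labels
```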
### Downstream Use
- Combine with SGP4 for physics-grounded position forecasting (see the sketch after this list)
- Integration into satellite autonomy stacks (CubeSats, small-scale hardware)
- Pre-prompted agent support in secure orbital management workflows
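A sketch of the SGP4 pairing, using the open-source `sgp4` package (`pip install sgp4`) to compute a physics-based state vector that is appended to the prompt. The augmented prompt wording is an assumption for illustration, not the adapter's training format; `pipe` is the pipeline from "How to Get Started" below:

```python
from sgp4.api import Satrec, jday

# ISS TLE from the example below.
line1 = "1 25544U 98067A   24079.07757601  .00016717  00000+0  10270-3 0  9994"
line2 = "2 25544  51.6400 337.6640 0007776  35.5310 330.5120 15.50377579499263"

sat = Satrec.twoline2rv(line1, line2)
jd, fr = jday(2024, 3, 19, 12, 0, 0)   # illustrative epoch (2024 day-of-year 79)
err, r, v = sat.sgp4(jd, fr)           # r in km, v in km/s (TEME frame)

prompt = f"""### Prompt:
{line1}
{line2}
SGP4 state at epoch: position {r} km, velocity {v} km/s
### Reasoning:
"""
print(pipe(prompt, max_new_tokens=120)[0]["generated_text"])
```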
### Out-of-Scope Use
- High-precision orbit propagation without additional physics modeling
- Applications related to targeting, weapons systems, or lethal autonomous decisions
- Use in jurisdictions that restrict ML model or data export (verify against ITAR/EAR guidelines)
## Bias, Risks, & Limitations
| Category | Note |
|---|---|
| Data bias | Trained primarily on decayed objects (`DECAY = 1`), possibly underestimating longevity for active satellites. |
| Temporal limits | Operates on snapshot data; does not handle continuous high-frequency time series. |
| Language | Supports explanations in English only. |
| Accuracy | Decay-date predictions may be inaccurate; verify independently. |
### Recommendations
Incorporate independent physics-based validation before operational use and maintain a human-in-the-loop for any critical or high-risk decisions.
## How to Get Started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from peft import PeftModel

base = "Qwen/Qwen1.5-7B"
lora = "jackal79/tle-orbit-explainer"

tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, lora)  # attaches the LoRA adapter

# device_map="auto" already places the model, so don't pass `device=` here
pipe = pipeline("text-generation", model=model, tokenizer=tok)

prompt = """### Prompt:
1 25544U 98067A   24079.07757601  .00016717  00000+0  10270-3 0  9994
2 25544  51.6400 337.6640 0007776  35.5310 330.5120 15.50377579499263
### Reasoning:
"""
print(pipe(prompt, max_new_tokens=120)[0]["generated_text"])
```
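For deployment you can optionally fold the adapter weights into the base model with `peft`'s `merge_and_unload()`, which drops the adapter indirection at serving time. A sketch continuing from the snippet above; the output directory name is illustrative:

```python
# Optional: merge LoRA weights into the base model for faster inference.
merged = model.merge_and_unload()
merged.save_pretrained("tle-orbit-explainer-merged")  # illustrative path
tok.save_pretrained("tle-orbit-explainer-merged")
```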
## License
This model is released under the TLE-Orbit-NonCommercial License v1.0.
- ✅ Free for non-commercial use, research, and internal evaluation
- 🚫 Commercial, operational, or for-profit use requires a separate license
To request a commercial license, contact: [email protected]