# LongGuang-A1-7B

A 7B bilingual causal LM with built-in causal discovery, metacognition, and long-term memory.

Maintained by Golden Loong AS.

License: Apache 2.0, free for commercial use.
## Quick Facts
| Item | Value |
|---|---|
| Parameters | ~7B |
| Context Length | 32k tokens |
| Vocab | 64k bilingual BPE |
| Languages | Chinese & English |
| License | Apache 2.0 |
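Inputs longer than the 32k-token context must be split before inference. A minimal sliding-window chunker in plain Python (a sketch; the window and overlap sizes are illustrative, and the model card does not prescribe a chunking strategy):

```python
def chunk_tokens(token_ids, max_len=32_768, overlap=256):
    """Split a token-id sequence into windows that fit a 32k context,
    with a small overlap so context is not cut blind at each boundary."""
    if max_len <= overlap:
        raise ValueError("max_len must exceed overlap")
    step = max_len - overlap
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), step)]

# Toy sequence with a tiny window to show the overlap behaviour:
windows = chunk_tokens(list(range(10)), max_len=4, overlap=1)
# windows -> [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9], [9]]
```

Each window repeats the last `overlap` tokens of the previous one, which is usually enough for summarization-style passes over long documents.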
## Files
| File | Purpose |
|---|---|
| config.json | Public hyperparameters |
| vocab.json + merges.json | BPE tokenizer |
| pytorch_model.bin | Model weights (Git LFS) |
| examples/ | Usage & fine-tuning scripts |
| requirements.txt | Verified dependencies |
## Usage

- Install:

  ```bash
  pip install -r requirements.txt
  ```

- Load:

  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM

  tok = AutoTokenizer.from_pretrained("GoldenLoong/LongGuang-A1-7B")
  model = AutoModelForCausalLM.from_pretrained(
      "GoldenLoong/LongGuang-A1-7B",
      torch_dtype="auto",
      device_map="auto",
  )
  ```

- Chat:

  ```python
  agent = CausalAgent(model, tok)  # provided in examples/
  print(agent("为什么天空是蓝色的？"))  # "Why is the sky blue?"
  ```
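The `CausalAgent` class ships in `examples/` and is not reproduced here. The sketch below shows one plausible shape for such a wrapper: it keeps prior turns in a rolling memory and prepends them to each prompt. The `generate_fn` callable, the prompt format, and the memory policy are all assumptions for illustration, not the shipped implementation:

```python
class MemoryAgent:
    """Hypothetical sketch of a chat wrapper with turn memory.

    generate_fn: any callable str -> str, e.g. a function that
    tokenizes a prompt, calls model.generate, and decodes the reply.
    """

    def __init__(self, generate_fn, max_turns=8):
        self.generate_fn = generate_fn
        self.max_turns = max_turns
        self.memory = []  # list of (user, assistant) pairs

    def __call__(self, user_msg):
        # Prepend remembered turns so the model sees the conversation.
        history = "".join(
            f"User: {u}\nAssistant: {a}\n" for u, a in self.memory
        )
        prompt = f"{history}User: {user_msg}\nAssistant:"
        reply = self.generate_fn(prompt).strip()
        self.memory.append((user_msg, reply))
        self.memory = self.memory[-self.max_turns:]  # rolling window
        return reply

# Usage with a stub in place of the real model:
agent = MemoryAgent(lambda prompt: " ok")
agent("hello")
# agent.memory -> [("hello", "ok")]
```

Decoupling the wrapper from the model via `generate_fn` also makes it easy to unit-test the memory logic without loading 7B weights.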
## Built-in Skills

- Causal Q&A – "为什么…导致…" ("why does … cause …")
- Counterfactual – "如果…会怎样" ("what if …")
- Memory Recall – remembers prior turns
- Live Search – optional Bing integration
- Code Runner – inline Python/JS sandbox
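The causal and counterfactual skills are triggered by question phrasing ("为什么…导致…" / "如果…会怎样"). A small helper for building such prompts (the templates are illustrative assumptions, not a documented trigger syntax):

```python
def causal_prompt(cause: str, effect: str) -> str:
    """Build a "why does X cause Y" query (为什么…导致…)."""
    return f"为什么{cause}导致{effect}？"

def counterfactual_prompt(premise: str) -> str:
    """Build a "what if X" query (如果…会怎样)."""
    return f"如果{premise}，会怎样？"

print(causal_prompt("下雨", "地面变湿"))    # why does rain make the ground wet
print(counterfactual_prompt("没有摩擦力"))  # what if there were no friction
```

These strings would be passed straight to the agent, e.g. `agent(causal_prompt(...))`.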
## Security Notice

- Only the public config and tokenizer are open.
- The internal architecture and training code remain confidential.
- Do not redistribute decrypted weights.
- Credit the author and copyright owner when using the model.
## Citation

```bibtex
@misc{longguang-a1-7b,
  title={LongGuang-A1-7B: A Bilingual Causal Language Model},
  author={Golden Loong AS},
  year={2025},
  url={https://huggingface.co/GoldenLoong/LongGuang-A1-7B},
  license={Apache-2.0}
}
```