πŸ‰ LongGuang-A1-7B

A 7B bilingual causal language model with built-in causal discovery, metacognition, and long-term memory.
Maintained by Golden Loong AS.
[Apache 2.0] – free for commercial use.


πŸ”§ Quick Facts

| Item | Value |
| --- | --- |
| Parameters | ~7 B |
| Context Length | 32 k tokens |
| Vocabulary | 64 k bilingual BPE |
| Languages | Chinese & English |
| License | Apache 2.0 |

πŸ“¦ Files

| File | Purpose |
| --- | --- |
| `config.json` | Public hyper-parameters |
| `vocab.json` + `merges.json` | BPE tokenizer |
| `pytorch_model.bin` | Model weights (stored via Git-LFS) |
| `examples/` | Usage & fine-tuning scripts |
| `requirements.txt` | Verified dependencies |

πŸš€ Usage

1. Install

```shell
pip install -r requirements.txt
```

2. Load

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("GoldenLoong/LongGuang-A1-7B")
model = AutoModelForCausalLM.from_pretrained(
    "GoldenLoong/LongGuang-A1-7B",
    torch_dtype="auto",
    device_map="auto",
)
```

3. Chat

```python
agent = CausalAgent(model, tok)  # CausalAgent is provided in examples/
print(agent("δΈΊδ»€δΉˆε€©η©Ίζ˜―θ“θ‰²ηš„οΌŸ"))  # "Why is the sky blue?"
```
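The `CausalAgent` helper ships in `examples/` and is not part of `transformers`. If you want to see the general shape of such a wrapper before cloning the repository, a minimal stand-in might look like the sketch below. The class name, prompt format, and interface here are assumptions for illustration, not the official implementation.

```python
class SimpleChatAgent:
    """Minimal stand-in for the examples/ CausalAgent helper (assumed interface).

    Keeps prior turns in memory and delegates generation to any
    model/tokenizer pair exposing the standard transformers interface.
    The plain "Role: text" prompt format is an assumption.
    """

    def __init__(self, model, tok, max_new_tokens=256):
        self.model = model
        self.tok = tok
        self.max_new_tokens = max_new_tokens
        self.history = []  # list of (role, text) tuples

    def _build_prompt(self, user_msg):
        # Replay prior turns, then append the new user message.
        lines = [f"{role}: {text}" for role, text in self.history]
        lines.append(f"User: {user_msg}")
        lines.append("Assistant:")
        return "\n".join(lines)

    def __call__(self, user_msg):
        prompt = self._build_prompt(user_msg)
        inputs = self.tok(prompt, return_tensors="pt").to(self.model.device)
        out = self.model.generate(**inputs, max_new_tokens=self.max_new_tokens)
        # Decode only the newly generated tokens, not the echoed prompt.
        reply = self.tok.decode(
            out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        ).strip()
        self.history.append(("User", user_msg))
        self.history.append(("Assistant", reply))
        return reply
```

The real helper in `examples/` may differ; treat this only as a reference for how a stateful chat wrapper around `model.generate` is typically structured.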

🧠 Built-in Skills

  • Causal Q&A – β€œδΈΊδ»€δΉˆβ€¦ε―Όθ‡΄β€¦β€ (β€œwhy … causes …”)
  • Counterfactual – β€œε¦‚ζžœβ€¦δΌšζ€Žζ ·β€ (β€œwhat would happen if …”)
  • Memory Recall – remembers prior turns
  • Live Search – optional Bing integration
  • Code Runner – inline Python/JS sandbox
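Memory recall is bounded by the 32 k-token context window, so long conversations eventually need trimming. Below is a minimal, stdlib-only sketch of one common policy (drop the oldest turns first); it is an illustration of client-side history management, not the model's internal memory mechanism, and the budget numbers are assumptions.

```python
def trim_history(turns, count_tokens, budget=32_000, reserve=1_000):
    """Drop the oldest turns until the remainder fits the token budget.

    turns: list of (role, text) pairs, oldest first.
    count_tokens: callable mapping text -> token count,
        e.g. lambda t: len(tok(t)["input_ids"]) with the model tokenizer.
    reserve: tokens left free for the model's reply (an assumed default).
    """
    kept = list(turns)
    total = sum(count_tokens(text) for _, text in kept)
    while kept and total > budget - reserve:
        _, dropped = kept.pop(0)  # discard the oldest turn first
        total -= count_tokens(dropped)
    return kept
```

Fancier policies (summarizing dropped turns, pinning system messages) build on the same idea of keeping the prompt under the context limit.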

πŸ›‘οΈ Security Notice

  • Only the public config & tokenizer are open-sourced.
  • The internal architecture & training code remain confidential.
  • Do not redistribute decrypted weights.
  • Credit the author and copyright owner when using the model.

πŸ™‹ Citation

```bibtex
@misc{longguang-a1-7b,
  title={LongGuang-A1-7B: A Bilingual Causal Language Model},
  author={Golden Loong AS},
  year={2025},
  url={https://huggingface.co/GoldenLoong/LongGuang-A1-7B},
  license={Apache-2.0}
}
```