🔷 SymbioticAI: A Collection of Symbolic-Transformers for Th
SymbioticAI is a next-generation family of language models.
- **Model Type:** Hybrid Symbolic–Transformer
- **Base Model:** Qwen-8B
- **Framework:** PyTorch + Transformers-compatible
- **Purpose:** Long-memory symbolic reasoning and high-fidelity language generation
SymbioticLM-8B is a state-of-the-art hybrid transformer model with built-in symbolic cognition. It combines an 8B Qwen-based transformer with modular symbolic processors and a persistent memory buffer. The model supports both general conversation and deep symbolic tasks such as theorem generation, logical chaining, and structured reasoning with retained memory across turns.
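A minimal loading sketch with the standard transformers API is shown below. The repository id `SymbioticAI/SymbioticLM-8B` is a hypothetical placeholder, and `trust_remote_code=True` is an assumption, since the hybrid symbolic components are unlikely to be part of the stock transformers architectures:

```python
# Minimal loading sketch; repo id is hypothetical and trust_remote_code
# is assumed to be required for the custom symbolic-transformer code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "SymbioticAI/SymbioticLM-8B"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # 8B weights; half precision keeps memory manageable
    device_map="auto",
    trust_remote_code=True,       # custom hybrid architecture (assumption)
)

# Decoding defaults (temperature, top_p, etc.) are read from
# generation_config.json automatically.
inputs = tokenizer("State and prove a simple lemma:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```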
| File | Description |
|---|---|
| `model.bin` | PyTorch weights (LFS-tracked) |
| `model.safetensors` | The same weights in safetensors format (recommended) |
| `memory.pt` | Symbolic memory snapshot (entropic, pretrained) |
| `config.json` | Base model configuration |
| `generation_config.json` | Sampling and decoding configuration (temperature, top_p, etc.) |
| `tokenizer.json` | Tokenizer data with custom tags and structure |
| `added_tokens.json` | Extra tokens such as `<THM>`, `<PROOF>`, `<D_EPS>` |
| `special_tokens_map.json` | Mappings for special tokens used during generation |
This model was designed and built using Discrepancy Analysis; the accompanying paper will be published soon.