---
license: afl-3.0
datasets:
- 0xZee/dataset-CoT-Advanced-Calculus-268
language:
- en
base_model:
- Qwen/Qwen3-8B
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen3
- 8b
- qwen3-8b
- symbiotic
- symbioticai
---
# SymbioticLM-8B

- **Model Type:** Hybrid Symbolic–Transformer
- **Base Model:** Qwen3-8B
- **Framework:** PyTorch, Transformers-compatible
- **Purpose:** Long-memory symbolic reasoning + high-fidelity language generation
## Overview
SymbioticLM-8B is a hybrid transformer model with built-in symbolic cognition. It combines an 8B Qwen3-based transformer with modular symbolic processors and a persistent memory buffer. The model supports both general conversation and deep symbolic tasks such as theorem generation, logical chaining, and structured reasoning, with memory retained across turns.
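A minimal generation sketch using the standard `transformers` API; the repo path below is a placeholder, the prompt is illustrative, and loading an 8B checkpoint requires adequate GPU memory (`device_map="auto"` assumes `accelerate` is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo path -- replace with the actual model id or local directory.
MODEL = "path/to/SymbioticLM-8B"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto", device_map="auto")

# Completion-style prompt; the base model is not instruction-tuned.
inputs = tokenizer("State and prove a basic property of even integers.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```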
## Architecture Highlights
- **Backbone:** Qwen3-8B rotary transformer
- **Symbolic Dim:** 4096
- **Symbolic Modules:**
  - ThoughtDynamicsLNN (multi-head LSTM attention)
  - CrystallineProcessor (DNAConv GNN)
  - LiquidThoughtProcessor (recurrent symbol folding)
  - HelicalDNAProcessor (helical linear projection)
- **Memory:** 2048 symbolic vectors (float32) with entropy-aware retrieval and contextual recall
- **Dream Mode:** self-generates symbolic cognition offline
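The entropy-aware retrieval step can be sketched as follows. This is a minimal NumPy illustration of the general idea (cosine similarity modulated by a per-slot entropy weight), not the model's actual implementation; the scoring rule and function names are assumptions:

```python
import numpy as np

def entropy(v, eps=1e-9):
    """Shannon entropy of a vector's magnitudes, normalized to a distribution."""
    p = np.abs(v) / (np.abs(v).sum() + eps)
    return -(p * np.log(p + eps)).sum()

def retrieve(memory, query, k=4):
    """Return indices of the top-k memory slots, scoring each slot by
    cosine similarity to the query weighted by the slot's relative entropy."""
    q = query / (np.linalg.norm(query) + 1e-9)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-9)
    sims = m @ q                                   # cosine similarity per slot
    ents = np.array([entropy(v) for v in memory])  # entropy weight per slot
    scores = sims * ents / (ents.max() + 1e-9)
    return np.argsort(scores)[::-1][:k]

# Toy buffer: 2048 slots as in the card, but dim 64 instead of 4096 for speed.
rng = np.random.default_rng(0)
memory = rng.normal(size=(2048, 64)).astype(np.float32)
query = memory[42] + 0.01 * rng.normal(size=64)   # near-duplicate of slot 42
print(retrieve(memory, query, k=4))               # slot 42 ranks at the top
```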
## Files Included
| File | Description |
|---|---|
| `model.bin` | PyTorch weights (LFS tracked) |
| `model.safetensors` | Same weights in safetensors format (recommended) |
| `memory.pt` | Symbolic memory snapshot (entropic, pretrained) |
| `config.json` | Base model configuration |
| `generation_config.json` | Sampling and decoding config (temperature, top_p, etc.) |
| `tokenizer.json` | Tokenizer data with custom tags and structure |
| `added_tokens.json` | Extra tokens such as `<THM>`, `<PROOF>`, `<D_EPS>` |
| `special_tokens_map.json` | Maps for special tokens used during generation |
## Intended Uses
- General symbolic reasoning and logical conversation
- Memory-aware tutoring and research assistants
- Code + math proof modeling
- Context-persistent dialogue systems
## Limitations
- Not instruction-tuned: chat-style inputs may require prompt engineering
- Larger memory buffer may increase CPU load slightly
- Symbolic inference is offline-evolved; memory must be actively seeded
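Because the base model is not instruction-tuned, completion-style prompts built around the structural tokens from `added_tokens.json` tend to work better than bare chat turns. A hypothetical template using the `<THM>` / `<PROOF>` tags (the exact template the model was trained on is not documented here, so treat this as a starting point):

```python
def theorem_prompt(statement: str) -> str:
    """Wrap a theorem statement in the model's structural tags so that
    generation continues into the proof section (template is illustrative)."""
    return f"<THM> {statement} <PROOF>"

prompt = theorem_prompt("The sum of two even integers is even.")
print(prompt)  # -> <THM> The sum of two even integers is even. <PROOF>
```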
## Citations
This model was designed and built using Discrepancy Analysis; a paper is forthcoming.