SymbioticLM-8B

Model Type: Hybrid Symbolic–Transformer
Base Model: Qwen/Qwen3-8B-Base
Framework: PyTorch (Transformers-compatible)
Purpose: Long-memory symbolic reasoning + high-fidelity language generation


Overview

SymbioticLM-8B is a hybrid transformer model with built-in symbolic cognition. It combines an 8B-parameter Qwen-based transformer with modular symbolic processors and a persistent memory buffer. The model supports both general conversation and deep symbolic tasks such as theorem generation, logical chaining, and structured reasoning, with memory retained across turns.


Architecture Highlights

  • Backbone: Qwen3-8B rotary transformer
  • Symbolic Dim: 4096
  • Symbolic Modules:
    • ThoughtDynamicsLNN (multi-head LSTM attention)
    • CrystallineProcessor (DNAConv GNN)
    • LiquidThoughtProcessor (recurrent symbol folding)
    • HelicalDNAProcessor (helical linear projection)
  • Memory: 2048 symbolic vectors (float32) with entropy-aware retrieval and contextual recall (a minimal retrieval sketch follows this list)
  • Dream Mode: Self-generates symbolic cognition offline
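
The memory buffer pairs contextual similarity with an entropy signal when deciding which stored vectors to recall. Below is a minimal sketch of what such entropy-aware retrieval could look like; the buffer layout, the scoring rule, and the weighting between similarity and entropy are illustrative assumptions, not the model's actual implementation.

```python
import torch
import torch.nn.functional as F

MEMORY_SLOTS = 2048    # size of the symbolic memory buffer
SYMBOLIC_DIM = 4096    # symbolic dimension

# Persistent float32 buffer of symbolic vectors (zeros stand in for real content here).
memory = torch.zeros(MEMORY_SLOTS, SYMBOLIC_DIM)

def retrieve(query: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Return the k memory vectors balancing contextual relevance and information content."""
    sim = F.cosine_similarity(memory, query.unsqueeze(0), dim=-1)      # contextual recall
    probs = F.softmax(memory, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)       # per-slot entropy
    score = sim + 0.1 * entropy / entropy.max().clamp_min(1e-9)        # entropy-aware weighting (assumed)
    return memory[score.topk(k).indices]
```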

Files Included

  • model.bin: PyTorch weights (LFS tracked)
  • model.safetensors: the same weights in safetensors format (recommended)
  • memory.pt: symbolic memory snapshot (entropic, pretrained)
  • config.json: base model configuration
  • generation_config.json: sampling and decoding configuration (temperature, top_p, etc.)
  • tokenizer.json: tokenizer data with custom tags and structure
  • added_tokens.json: extra tokens such as <THM>, <PROOF>, <D_EPS>
  • special_tokens_map.json: mappings for the special tokens used during generation
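
Since the checkpoint is Transformers-compatible, loading these files should follow the standard AutoModel workflow. The sketch below uses the repository id reaperdoesntknow/Symbiotic-8B; whether the symbolic modules require trust_remote_code, and how memory.pt attaches to the model, are assumptions rather than documented behavior.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "reaperdoesntknow/Symbiotic-8B"

# tokenizer.json, added_tokens.json, and special_tokens_map.json are picked up automatically.
tokenizer = AutoTokenizer.from_pretrained(repo)

model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float32,   # weights are shipped in float32
    trust_remote_code=True,      # assumption: the symbolic processors live in custom model code
)

# memory.pt is the pretrained symbolic memory snapshot; loading it as a plain tensor
# is illustrative only, since the attachment point is not documented here.
memory = torch.load("memory.pt", map_location="cpu")
```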

Intended Uses

  • General symbolic reasoning and logical conversation
  • Memory-aware tutoring, research assistants
  • Code + math proof modeling (see the prompting example after this list)
  • Context-persistent dialogue systems
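
Continuing the loading sketch above, the snippet below shows one way to prompt for proof-style output using the extra tokens from added_tokens.json. The tagging convention and sampling values are illustrative assumptions; the released decoding defaults live in generation_config.json.

```python
# Assumed tagging convention: frame the statement with <THM> and ask for a <PROOF>.
prompt = "<THM> For every integer n, n^2 - n is even. <PROOF>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,   # illustrative; see generation_config.json for the shipped defaults
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```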

Limitations

  • Not instruction-tuned; chat-style inputs may require prompt engineering
  • A larger memory buffer slightly increases CPU load during retrieval
  • Symbolic inference is evolved offline; the memory buffer must be actively seeded before use
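
Because the memory must be seeded before symbolic recall is useful, one plausible approach is to encode a few seed passages with the backbone and write the pooled hidden states into the buffer. Everything below (the write policy, the pooling, and treating the loaded memory as a [2048, 4096] tensor) is an assumption for illustration, not the actual seeding procedure.

```python
# Purely illustrative seeding of the symbolic memory buffer, continuing the loading sketch above.
seed_texts = [
    "<THM> The sum of two even integers is even.",
    "<D_EPS> Epsilon-delta definition of the limit of a function.",
]

with torch.no_grad():
    for slot, text in enumerate(seed_texts):
        ids = tokenizer(text, return_tensors="pt")
        hidden = model(**ids, output_hidden_states=True).hidden_states[-1]  # [1, seq_len, hidden]
        memory[slot] = hidden.mean(dim=1).squeeze(0)  # pool to a single symbolic vector (assumed policy)
```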

Citations

This model was designed and built using Discrepancy Analysis; a paper describing the approach is forthcoming.
