tags:
- ai
- llm
- text
---

# SymbioticLM-1B

**Author**: Roy S. Colca Jr.
**Model Type**: Hybrid Symbolic–Transformer
**Base Model**: Qwen-1B
**License**: MIT
**Framework**: PyTorch + HuggingFace Transformers
**Purpose**: Lightweight, memory-augmented reasoning model for CPU and embedded inference

---

## Overview

SymbioticLM-1B is the compact version of the SymbioticAI architecture. It fuses Qwen's rotary transformer design with a symbolic processing pipeline and a persistent episodic memory. Though smaller in parameter count, it retains the full cognitive engine: symbolic memory, dynamic thought evolution, and entropy-gated control.

This model is well suited to symbolic reasoning in constrained environments, such as research agents, lightweight assistants, and memory-efficient logical processing.
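
As a quickstart, here is a minimal loading sketch, assuming the checkpoint works through the standard `transformers` auto classes (the card lists PyTorch + HuggingFace Transformers as the framework). The model path is a placeholder, and `trust_remote_code=True` is assumed because of the custom symbolic modules; neither is documented behavior.

```python
# Sketch: CPU inference via the standard transformers API.
# The model id is a placeholder; adjust to the actual local dir or hub repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/SymbioticLM-1B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # CPU-friendly dtype, matching the card's CPU focus
    trust_remote_code=True,     # assumed: custom architecture code ships with the repo
)

inputs = tokenizer("State and prove a simple lemma about even numbers.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```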

---

## Architecture Highlights

- **Backbone**: Qwen-1B rotary transformer
- **Symbolic Dim**: 1024
- **Symbolic Modules**:
  - ThoughtDynamicsLNN
  - CrystallineProcessor (DNAConv GNN)
  - LiquidThoughtProcessor
  - HelicalDNAProcessor
- **Memory**: 2048 symbolic vectors with entropic and contextual retrieval (see the sketch after this list)
- **Dream Mode**: Symbolic simulation with ThoughtGenerator
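
The card describes entropic retrieval and entropy-gated control but not their implementation. The toy sketch below shows one way such a gate could work, with shapes taken from this card (2048 memory slots, symbolic dim 1024); the cosine scoring, temperature, and threshold are illustrative assumptions, not SymbioticLM internals.

```python
# Illustrative only: an entropy gate over a symbolic memory bank.
import torch
import torch.nn.functional as F

MEMORY_SLOTS, SYMBOLIC_DIM = 2048, 1024
memory = torch.randn(MEMORY_SLOTS, SYMBOLIC_DIM)  # stand-in for the real memory bank

def retrieve(query: torch.Tensor, k: int = 8, max_entropy: float = 6.0):
    """Return top-k memory vectors only when the match distribution is confident."""
    scores = F.cosine_similarity(query.unsqueeze(0), memory, dim=-1)  # (2048,)
    probs = torch.softmax(scores / 0.1, dim=-1)         # sharpen with a temperature
    entropy = -(probs * torch.log(probs + 1e-9)).sum()  # Shannon entropy in nats
    if entropy > max_entropy:   # near-uniform scores: no useful match, gate closed
        return None
    topk = torch.topk(probs, k)
    return memory[topk.indices]  # (k, 1024) retrieved symbolic vectors

retrieved = retrieve(torch.randn(SYMBOLIC_DIM))
```

The intuition: a sharply peaked match distribution has low entropy (a confident retrieval), while a near-uniform one has high entropy and is gated off rather than injecting noise.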

---

## Files Included

| File | Description |
|---------------------------|----------------------------------------------------|
| `model.bin` | PyTorch model weights |
| `model.safetensors` | SafeTensors weights |
| `memory.pt` | Serialized symbolic memory vectors |
| `config.json` | Model architecture config |
| `generation_config.json` | Generation strategy configuration |
| `tokenizer.json` | Tokenizer including custom symbolic tags |
| `added_tokens.json` | Special tokens such as `<THM>`, `<LEM>`, `<D_IF>` |
| `special_tokens_map.json` | Tokenizer-to-logic mappings |
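
The symbolic memory ships separately from the transformer weights. Here is a minimal sketch for inspecting it, assuming `memory.pt` deserializes with a plain `torch.load` into either a tensor or a dict containing one; the actual container format is not specified on this card.

```python
# Sketch: inspect the shipped symbolic memory (format assumptions noted above).
import torch

obj = torch.load("memory.pt", map_location="cpu")
memory = obj if torch.is_tensor(obj) else next(v for v in obj.values() if torch.is_tensor(v))
print(memory.shape)  # expected (2048, 1024) per the Architecture Highlights
```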

---

## Intended Uses

- CPU-optimized symbolic inference
- Educational agents with memory
- Graph-based explanation generation
- Procedural planning, math modeling, and small-scale code generation

---

## Limitations

- Less fluent in free-form language than larger variants
- Symbolic accuracy increases with memory curation
- Dreaming requires warm-up or symbolic seeding for complex queries

---

## Citations

Symbolic components are rooted in cognitive modeling and discrepancy calculus research.