# GeoDavidCollective Enhanced - ProjectiveHead Architecture
Highly experimental behavioral junctioning system that will likely fall apart at the drop of a hat.
This version is going to be renamed soon; I've dubbed her... Zephyr. She's a complex one and deserves a proper name for what I believe she brings to the table. The echoes of the Beatrix interpolator cut through this one, so let's see if we can recreate the behavior of SD1.5 in its entirety soon.
I will be spending some time making sure the configuration is lined up and reasonably configurable.
This will enable adding and removing formula and noise complexity, with additional controllers for simplification. This one got a bit bloated, so let's see what's really needed and what's not, shall we?
Cantor Steps are currently free-floating based on the math, and the system definitely showed some interesting elemental response to them, but they need to be fixed in place and hyper-focused on the positioning offset, so I'll be running some smaller tests today for solidity. Be aware, if you see this repo going wild, that it might have some useful stuff and it might have flat stuff.
The step improvements will likely include the BeatrixStaircase, which is considerably more robust for learning features and far more advanced, with better caching support and further math optimizations that pass more of the meaning to torch.
## Model Overview
GeoDavidCollective Enhanced is a sophisticated multi-expert geometric classification system that learns from Stable Diffusion 1.5's internal representations. Using ProjectiveHead architecture with Cayley-Menger geometry, it achieves efficient pattern recognition across timestep and semantic spaces.
### Key Features
- ProjectiveHead Multi-Expert Architecture: Auto-configured expert systems per block
- Geometric Loss Functions: Rose, Cayley-Menger, and Cantor coherence losses
- 9-Block Processing: Full SD1.5 UNet feature extraction (down, mid, up)
- Compact Yet Powerful: 884,327,310 parameters
- 100 Timestep Bins × 10 Patterns = 1000 semantic-temporal classes (see the indexing sketch below)
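The joint label space is the Cartesian product of timestep bins and patterns. Here is a minimal sketch of that indexing, assuming the conventional `bin * num_patterns + pattern` flattening; the actual encoding lives in the model code:

```python
NUM_TIMESTEP_BINS = 100    # diffusion timesteps 0-999 bucketed into 100 bins
NUM_PATTERNS_PER_BIN = 10  # semantic patterns learned per bin

def to_class(timestep: int, pattern: int) -> int:
    """Flatten a (timestep bin, pattern) pair into one of 1000 joint classes.
    Assumes row-major flattening; check the repo for the actual convention."""
    bin_idx = timestep * NUM_TIMESTEP_BINS // 1000  # e.g. timestep 421 -> bin 42
    return bin_idx * NUM_PATTERNS_PER_BIN + pattern

assert to_class(421, 7) == 427  # bin 42, pattern 7 -> class 427
```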
## Model Statistics
- Parameters: 884,327,310
- Trained Epochs: 10
- Base Model: Stable Diffusion 1.5
- Dataset Size: 10,000 synthetic prompts
- Training Date: 2025-10-28
## Architecture Details
### Block Configuration
**Down Blocks:**
- down_0: 320 → 128 (3 experts, 3 gates)
- down_1: 640 → 192 (3 experts, 3 gates)
- down_2: 1280 → 256 (3 experts, 3 gates)
- down_3: 1280 → 256 (3 experts, 3 gates)

**Mid Block (Highest Capacity):**
- mid: 1280 → 256 (4 experts, 4 gates)

**Up Blocks:**
- up_0: 1280 → 256 (3 experts, 3 gates)
- up_1: 1280 → 256 (3 experts, 3 gates)
- up_2: 640 → 192 (3 experts, 3 gates)
- up_3: 320 → 128 (3 experts, 3 gates)
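For reference, here is what a `block_configs` mapping matching the lists above might look like. The field names (`in_dim`, `proj_dim`, `num_experts`, `num_gates`) are assumptions for illustration; the authoritative schema is in `config.json`:

```python
# Hypothetical schema mirroring the block tables above; field names are
# illustrative assumptions -- see config.json for the real structure.
block_configs = {
    "down_0": {"in_dim": 320,  "proj_dim": 128, "num_experts": 3, "num_gates": 3},
    "down_1": {"in_dim": 640,  "proj_dim": 192, "num_experts": 3, "num_gates": 3},
    "down_2": {"in_dim": 1280, "proj_dim": 256, "num_experts": 3, "num_gates": 3},
    "down_3": {"in_dim": 1280, "proj_dim": 256, "num_experts": 3, "num_gates": 3},
    "mid":    {"in_dim": 1280, "proj_dim": 256, "num_experts": 4, "num_gates": 4},
    "up_0":   {"in_dim": 1280, "proj_dim": 256, "num_experts": 3, "num_gates": 3},
    "up_1":   {"in_dim": 1280, "proj_dim": 256, "num_experts": 3, "num_gates": 3},
    "up_2":   {"in_dim": 640,  "proj_dim": 192, "num_experts": 3, "num_gates": 3},
    "up_3":   {"in_dim": 320,  "proj_dim": 128, "num_experts": 3, "num_gates": 3},
}
```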
### Loss Components
| Component | Weight | Purpose |
|---|---|---|
| Feature Similarity | 0.40 | Alignment with SD1.5 features |
| Rose Loss | 0.25 | Geometric pattern emergence |
| Cross-Entropy | 0.15 | Classification accuracy |
| Cayley-Menger | 0.10 | 5D geometric structure |
| Pattern Diversity | 0.05 | Prevent mode collapse |
| Cantor Coherence | 0.05 | Temporal consistency |
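The table implies a fixed-weight linear combination of the six terms. A minimal sketch of how the total loss could be assembled, assuming the component losses are already computed as scalar tensors (the names here are illustrative, not the repo's actual API):

```python
import torch

LOSS_WEIGHTS = {
    "feature_similarity": 0.40,  # alignment with SD1.5 features
    "rose": 0.25,                # geometric pattern emergence
    "cross_entropy": 0.15,       # classification accuracy
    "cayley_menger": 0.10,       # 5D geometric structure
    "pattern_diversity": 0.05,   # prevent mode collapse
    "cantor_coherence": 0.05,    # temporal consistency
}

def total_loss(components: dict[str, torch.Tensor]) -> torch.Tensor:
    """Weighted sum of the per-component losses; the weights sum to 1.0."""
    return sum(LOSS_WEIGHTS[name] * value for name, value in components.items())
```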
## Usage
```python
from geovocab2.train.model.core.geo_david_collective import GeoDavidCollective
from safetensors.torch import load_file
import torch

# Load model
state_dict = load_file("model.safetensors")
collective = GeoDavidCollective(
    block_configs={...},  # see config.json
    num_timestep_bins=100,
    num_patterns_per_bin=10,
)
collective.load_state_dict(state_dict)
collective.eval()

# Extract features from SD1.5 and classify
with torch.no_grad():
    results = collective(features_dict, timesteps)
    predictions = results['predictions']  # timestep + pattern class
```
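The snippet above assumes `features_dict` and `timesteps` already exist. As a shape-only illustration of what the model expects (the real features come from hooking the SD1.5 UNet blocks and mean-pooling spatially, per the training details below), dummy inputs might look like:

```python
# Shape-only stand-ins for mean-pooled SD1.5 UNet block features.
# Real inputs come from forward hooks on the UNet, not random tensors.
batch = 4
block_dims = {
    "down_0": 320, "down_1": 640, "down_2": 1280, "down_3": 1280,
    "mid": 1280,
    "up_0": 1280, "up_1": 1280, "up_2": 640, "up_3": 320,
}
features_dict = {name: torch.randn(batch, dim) for name, dim in block_dims.items()}
timesteps = torch.randint(0, 1000, (batch,))  # raw diffusion timesteps
```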
## Training Details
- Optimizer: AdamW (lr=1e-3, weight_decay=0.001)
- Batch Size: 16
- Data: Symbolic prompt synthesis (complexity 1-5)
- Feature Extraction: SD1.5 UNet blocks (spatial, not pooled)
- Pool Mode: Mean spatial pooling
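Reproducing the optimizer from the hyperparameters above is straightforward; a minimal sketch (any learning-rate schedule is not documented here):

```python
# Optimizer setup matching the stated hyperparameters.
optimizer = torch.optim.AdamW(
    collective.parameters(),
    lr=1e-3,             # learning rate from the training run
    weight_decay=0.001,  # weight decay from the training run
)
```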
## Training Metrics
Final metrics from epoch 10:
- Cayley Loss: 0.1039
- Timestep Accuracy: 32.99%
- Pattern Accuracy: 27.24%
- Full Accuracy: 15.10%

For context, the 15.10% joint accuracy is noticeably higher than the ~9.0% expected if the timestep and pattern predictions were independent (0.3299 × 0.2724 ≈ 0.0899), so the two heads tend to be right on the same examples.
## Research Context
This model is part of ongoing geometric deep learning research exploring:
- 5D simplex-based neural representations (pentachora; see the volume sketch after this list)
- Geometric alternatives to traditional transformers
- Consciousness-informed AI architectures
- Universal mathematical principles in neural networks
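The card doesn't spell out the pentachoron math, but as background: a pentachoron is a 4-simplex (5 vertices), and its volume can be computed from pairwise distances alone via the Cayley-Menger determinant, the same construction behind the Cayley-Menger loss above. A minimal, self-contained sketch:

```python
import math
import torch

def simplex_volume_sq(verts: torch.Tensor) -> torch.Tensor:
    """Squared volume of an n-simplex from its (n+1, d) vertex matrix,
    via the Cayley-Menger determinant (uses only pairwise distances)."""
    n = verts.shape[0] - 1
    cm = torch.ones(n + 2, n + 2, dtype=verts.dtype)
    cm[0, 0] = 0.0
    cm[1:, 1:] = torch.cdist(verts, verts) ** 2  # pairwise squared distances
    sign = (-1.0) ** (n + 1)
    return sign * torch.det(cm) / (2.0 ** n * math.factorial(n) ** 2)

# Regular pentachoron: the 5 standard basis vectors of R^5 (edge length sqrt(2)).
pentachoron = torch.eye(5, dtype=torch.float64)
print(simplex_volume_sq(pentachoron))  # ~0.00868 = 5/576, i.e. volume sqrt(5)/24
```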
## Files Included
- `model.safetensors` - Model weights (3.3 GB)
- `config.json` - Complete architecture configuration
- `training_history.json` - Full training metrics
- `prompts_enhanced.jsonl` - All training prompts with metadata
- `tensorboard/` - TensorBoard logs (optional)
## License
MIT License - Free for research and commercial use
## Acknowledgments
Built with:
- PyTorch & Diffusers
- Stable Diffusion 1.5 (Runway ML)
- Geometric algebra principles from the 1800s
- Dream-inspired mathematical insights
## Author
AbstractPhil - AI Researcher specializing in geometric deep learning
"Working with universal mathematical principles, not against them"
For questions, issues, or collaborations: GitHub | HuggingFace