---
tags:
- brain-inspired
- spiking-neural-network
- biologically-plausible
- modular-architecture
- reinforcement-learning
- vision-language
- pytorch
- curriculum-learning
- cognitive-architecture
- artificial-general-intelligence
license: mit
datasets:
- mnist
- imdb
- synthetic-environment
language:
- en
library_name: transformers
widget:
- text: >-
The first blueprint and the bridge to Neuroscience and Artificial
Intelligence.
- text: I'm sure this model architecture will revolutionize the world.
model-index:
- name: ModularBrainAgent
results:
- task:
type: image-classification
name: Vision-based Classification
dataset:
type: mnist
name: MNIST
metrics:
- type: accuracy
value: 0.98
- task:
type: text-classification
name: Language Sentiment Analysis
dataset:
type: imdb
name: IMDb
metrics:
- type: accuracy
value: 0.91
- task:
type: reinforcement-learning
name: Curiosity-driven Exploration
dataset:
type: synthetic-environment
name: Synthetic Environment
metrics:
- type: cumulative_reward
value: 112.5
---

# ModularBrainAgent: A Brain-Inspired Cognitive AI Model
ModularBrainAgent (SynCo) is a biologically plausible spiking neural agent that combines vision, language, and reinforcement learning in a single architecture. Inspired by human neurobiology, it implements multiple neuron types and synaptic pathways, including excitatory, inhibitory, modulatory, bidirectional, feedback, lateral, and plastic connections.
It is designed for researchers, neuroscientists, and AI developers exploring the frontier between brain science and general intelligence.
## Model Architecture
- Total Neurons: 66
- Neuron Types: Interneurons, Excitatory, Inhibitory, Cholinergic, Dopaminergic, Serotonergic, Feedback, Plastic
- Core Modules (a composition sketch follows this list):
  - `SensoryEncoder`: vision, language, and numeric integration
  - `PlasticLinear`: Hebbian and STDP local learning
  - `RelayLayer`: spiking multi-head attention module
  - `AdaptiveLIF`: recurrent interneuron logic
  - `WorkingMemory`: LSTM-based temporal memory
  - `NeuroendocrineModulator`: emotional feedback
  - `PlaceGrid`: spatial grid encoding
  - `Comparator`: self-matching logic
  - `TaskHeads`: classification, regression, and binary outputs
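To make the composition concrete, here is a minimal, hypothetical sketch of how modules with these roles could be wired into one forward pass. The internals of the released model are not shown in this card, so the layers below are plain PyTorch stand-ins labelled with the card's module names, under assumed input sizes; treat it as an illustration, not the actual implementation.

```python
# Hypothetical wiring of the card's modules; spiking, plasticity, and modulatory
# components are omitted here for brevity (see the sketch in the Features section).
import torch
import torch.nn as nn

class ModularBrainAgentSketch(nn.Module):
    def __init__(self, img_dim=784, txt_dim=128, num_dim=8, hidden=64, n_classes=10):
        super().__init__()
        # SensoryEncoder: vision, language, and numeric inputs projected to one code
        self.vision = nn.Linear(img_dim, hidden)
        self.language = nn.Linear(txt_dim, hidden)
        self.numeric = nn.Linear(num_dim, hidden)
        # RelayLayer: attention over the fused sensory code (spiking in the original)
        self.relay = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # WorkingMemory: LSTM-based temporal memory
        self.memory = nn.LSTM(hidden, hidden, batch_first=True)
        # TaskHeads: classification, regression, and binary outputs
        self.classify = nn.Linear(hidden, n_classes)
        self.regress = nn.Linear(hidden, 1)
        self.binary = nn.Linear(hidden, 1)

    def forward(self, img, txt, num):
        # Fuse the three modalities into a short "sensory sequence"
        s = torch.stack([self.vision(img), self.language(txt), self.numeric(num)], dim=1)
        # Route through attention (RelayLayer) and temporal memory (WorkingMemory)
        r, _ = self.relay(s, s, s)
        m, _ = self.memory(r)
        h = m[:, -1]  # last time step as the agent's state
        return self.classify(h), self.regress(h), torch.sigmoid(self.binary(h))

agent = ModularBrainAgentSketch()
logits, value, prob = agent(torch.randn(2, 784), torch.randn(2, 128), torch.randn(2, 8))
```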
## Features
- Multi-modal input (images, text, numerics)
- Hebbian + STDP local plasticity
- Spiking simulation via surrogate gradients (a sketch of the spike function and a Hebbian update follows this list)
- Biologically inspired synaptic dynamics
- Curriculum and lifelong learning capability
- Fully modular: plug-and-play cortical units
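Two of the features above, surrogate-gradient spiking and local plasticity, can be illustrated with standard textbook formulations. The snippet below is not taken from the released code: it shows a hard-threshold spike whose backward pass uses a fast-sigmoid surrogate, plus a plain Hebbian outer-product update; the constants (threshold 1.0, slope 10, learning rate 1e-3) are assumptions.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane, threshold=1.0):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        # Hard spike in the forward pass: 1 if the membrane crosses the threshold
        return (membrane >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: smooth, nonzero gradient around the threshold
        surrogate = 1.0 / (1.0 + 10.0 * (membrane - ctx.threshold).abs()) ** 2
        return grad_output * surrogate, None

def hebbian_update(weight, pre, post, lr=1e-3):
    # Plain Hebbian rule: strengthen weights where pre- and post-synaptic
    # activity co-occur; weight has shape (out_features, in_features)
    return weight + lr * torch.einsum("bi,bj->ji", pre, post)

v = torch.randn(4, 16, requires_grad=True)   # membrane potentials
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()                      # gradients flow via the surrogate

w = torch.zeros(16, 8)
w = hebbian_update(w, pre=torch.rand(4, 8), post=torch.rand(4, 16))
```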
## Performance Summary
Note: the metrics below are illustrative and come from synthetic and internal tests.
| Task | Dataset | Metric | Result |
|---|---|---|---|
| Digit Recognition | MNIST | Accuracy | 0.98 |
| Sentiment Analysis | IMDb | Accuracy | 0.91 |
| Exploration Task | Gridworld Simulation | Cumulative Reward | 112.5 |
## Training Data
- MNIST: Handwritten digit classification
- IMDb: Sentiment classification from text
- Synthetic Environment: Grid-based exploration with feedback (an illustrative sketch follows this list)
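The synthetic environment itself is not published with this card, so the following is only an illustrative guess at what "grid-based exploration with feedback" could look like: an agent moving on a small grid receives a novelty bonus for visiting new cells, standing in for a curiosity-driven reward. Names such as `ToyGridworld` are hypothetical.

```python
import numpy as np

class ToyGridworld:
    def __init__(self, size=8, seed=0):
        self.size = size
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.pos = np.array([0, 0])
        self.visited = {tuple(self.pos)}
        return self.pos.copy()

    def step(self, action):
        # Actions: 0=up, 1=down, 2=left, 3=right
        moves = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])
        self.pos = np.clip(self.pos + moves[action], 0, self.size - 1)
        novel = tuple(self.pos) not in self.visited
        self.visited.add(tuple(self.pos))
        reward = 1.0 if novel else -0.01  # novelty bonus as a stand-in for curiosity
        return self.pos.copy(), reward

env = ToyGridworld()
obs = env.reset()
total = 0.0
for _ in range(100):
    obs, r = env.step(env.rng.integers(4))
    total += r
print(f"cumulative reward over 100 random steps: {total:.2f}")
```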
## Intended Uses
| Use Case | Description |
|---|---|
| Neuroscience AI Research | Simulating cortical modules and spiking dynamics |
| Cognitive Simulation | Experimenting with memory, attention, and decision systems |
| Multi-task Agents | One-shot learning across vision + language + control |
| Education + Demos | Accessible tool for learning about bio-inspired AI |
## Limitations
- Early-stage prototype architecture
- Unsupervised/local learning only (no gradient-based fine-tuning yet)
- Evaluated only on small-scale and synthetic data so far
- Reported metrics not benchmarked on large-scale public test sets
## Credits
Built by Aliyu Lawan Halliru, an independent AI researcher from Nigeria.
SynCo was created to demonstrate that anyone, anywhere, can build synthetic intelligence.
## License
MIT License © 2025 Aliyu Lawan Halliru
Use freely. Cite or reference when possible.