Upload README.md with huggingface_hub

README.md
---
license: apache-2.0
tags:
- zen-research
- zen-ai
- hypermodal
- text-to-video
language:
- en
library_name: transformers
pipeline_tag: text-to-video
---

# zen-director

5B parameter text/image-to-video generation model for professional video synthesis.

## Model Details

- **Developed by**: Zen Research Authors
- **Organization**: Zen Research DAO under [Zoo Labs Inc](https://github.com/zenlm) (501(c)(3) Non-Profit)
- **Location**: San Francisco, California, USA
- **Model type**: text-to-video
- **Architecture**: Diffusion Transformer (5B)
- **Parameters**: 5B
- **License**: Apache 2.0
- **Training**: Trained with [Zen Gym](https://github.com/zenlm/zen-gym)
- **Inference**: Optimized for [Zen Engine](https://github.com/zenlm/zen-engine)

## 🌟 Zen AI Ecosystem

This model is part of the **Zen Research** hypermodal AI family, a comprehensive open-source ecosystem spanning language, 3D, video, and audio models.

### Complete Model Family

**Language Models:**
- [zen-nano-0.6b](https://huggingface.co/zenlm/zen-nano-0.6b) - 0.6B edge model (44K tokens/sec)
- [zen-eco-4b-instruct](https://huggingface.co/zenlm/zen-eco-4b-instruct) - 4B instruction model
- [zen-eco-4b-thinking](https://huggingface.co/zenlm/zen-eco-4b-thinking) - 4B reasoning model
- [zen-agent-4b](https://huggingface.co/zenlm/zen-agent-4b) - 4B tool-calling agent

**3D & World Generation:**
- [zen-3d](https://huggingface.co/zenlm/zen-3d) - Controllable 3D asset generation
- [zen-voyager](https://huggingface.co/zenlm/zen-voyager) - Camera-controlled world exploration
- [zen-world](https://huggingface.co/zenlm/zen-world) - Large-scale world simulation

**Video Generation:**
- [zen-director](https://huggingface.co/zenlm/zen-director) - Text/image-to-video (5B)
- [zen-video](https://huggingface.co/zenlm/zen-video) - Professional video synthesis
- [zen-video-i2v](https://huggingface.co/zenlm/zen-video-i2v) - Image-to-video animation

**Audio Generation:**
- [zen-musician](https://huggingface.co/zenlm/zen-musician) - Music generation (7B)
- [zen-foley](https://huggingface.co/zenlm/zen-foley) - Video-to-audio Foley effects

**Infrastructure:**
- [Zen Gym](https://github.com/zenlm/zen-gym) - Unified training platform
- [Zen Engine](https://github.com/zenlm/zen-engine) - High-performance inference

## Usage

### Quick Start

```python
from zen_director import ZenDirectorPipeline

# zen-director is a diffusion transformer, so it is loaded through its
# pipeline class rather than transformers' AutoModelForCausalLM.
pipeline = ZenDirectorPipeline.from_pretrained("zenlm/zen-director")

video = pipeline(
    prompt="A cinematic shot of a sunset over mountains",
    num_frames=120,
    fps=24,
    resolution=(1280, 720)
)
video.save("output.mp4")
```
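
zen-director also accepts a conditioning image for image-to-video generation. A minimal sketch, assuming the pipeline takes an `image` keyword alongside the text prompt (the parameter name is an assumption carried over from an earlier revision of this card; check the `zen_director` API for the exact signature):

```python
from PIL import Image
from zen_director import ZenDirectorPipeline

pipeline = ZenDirectorPipeline.from_pretrained("zenlm/zen-director")

# Hypothetical image-to-video call: the `image` keyword mirrors common
# i2v pipeline APIs and is assumed here, not a confirmed signature.
image = Image.open("input.jpg")
video = pipeline(
    prompt="Animate this image with gentle camera movement",
    image=image,
    num_frames=120,
    fps=24
)
video.save("animated.mp4")
```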

### With Zen Engine

```bash
# Serve zen-director behind an OpenAI-compatible endpoint
zen-engine serve --model zenlm/zen-director --port 3690
```

```python
# OpenAI-compatible API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3690/v1")
response = client.chat.completions.create(
    model="zenlm/zen-director",
    messages=[{"role": "user", "content": "A cinematic shot of a sunset over mountains"}]
)
```

## Training

Fine-tune with [Zen Gym](https://github.com/zenlm/zen-gym):

```bash
git clone https://github.com/zenlm/zen-gym
cd zen-gym

# LoRA fine-tuning
llamafactory-cli train --config configs/zen_lora.yaml \
    --model_name_or_path zenlm/zen-director

# GRPO reinforcement learning (40-60% memory reduction)
llamafactory-cli train --config configs/zen_grpo.yaml \
    --model_name_or_path zenlm/zen-director
```

Supported methods: LoRA, QLoRA, DoRA, GRPO, GSPO, DPO, PPO, KTO, ORPO, SimPO, Unsloth
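
After a run completes, the adapter has to be attached back to the base pipeline for inference. A minimal sketch, assuming Zen Gym writes a standard LoRA adapter directory and that `ZenDirectorPipeline` exposes a diffusers-style `load_lora_weights` helper (the output path and the helper name are both assumptions, not confirmed API):

```python
from zen_director import ZenDirectorPipeline

pipeline = ZenDirectorPipeline.from_pretrained("zenlm/zen-director")

# Hypothetical: attach the LoRA adapter produced by the Zen Gym run above.
# `load_lora_weights` mirrors the diffusers convention; verify the actual
# method name in the zen_director package before relying on it.
pipeline.load_lora_weights("./saves/zen-director-lora")
```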

## Performance

- **Speed**: ~60s for 5-second video (RTX 4090)
- **Resolution**: Up to 1280x720, 24 FPS
- **Duration**: Up to 10 seconds
- **Quality**: Professional-grade video synthesis
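
On consumer GPUs, loading the pipeline in half precision roughly halves memory use; an earlier revision of this card recommended FP16 and noted that full-resolution generation can need 24GB+ VRAM. A minimal sketch, assuming `from_pretrained` accepts a `torch_dtype` keyword as it did in that revision:

```python
import torch
from zen_director import ZenDirectorPipeline

# FP16 load for consumer GPUs; the torch_dtype keyword follows the
# diffusers convention and is assumed rather than confirmed for this fork.
pipeline = ZenDirectorPipeline.from_pretrained(
    "zenlm/zen-director",
    torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")

video = pipeline(
    prompt="A serene sunset over a calm ocean",
    num_frames=120,
    fps=24,
    resolution=(1280, 720)
)
video.save("sunset.mp4")
```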

## Ethical Considerations

- **Open Research**: Released under Apache 2.0 for maximum accessibility
- **Environmental Impact**: Optimized for eco-friendly deployment
- **Transparency**: Full training details and model architecture disclosed
- **Safety**: Comprehensive testing and evaluation
- **Non-Profit**: Developed by Zoo Labs Inc (501(c)(3)) for public benefit

## Citation

```bibtex
@misc{zendirector2025,
  title={zen-director: 5B parameter text/image-to-video generation model for professional video synthesis},
  author={Zen Research Authors},
  year={2025},
  publisher={Zoo Labs Inc},
  organization={Zen Research DAO},
  url={https://huggingface.co/zenlm/zen-director}
}
```

## Links

- **Organization**: [github.com/zenlm](https://github.com/zenlm) • [huggingface.co/zenlm](https://huggingface.co/zenlm)
- **Training Platform**: [Zen Gym](https://github.com/zenlm/zen-gym)
- **Inference Engine**: [Zen Engine](https://github.com/zenlm/zen-engine)
- **Parent Org**: [Zoo Labs Inc](https://github.com/zenlm) (501(c)(3) Non-Profit, San Francisco)
- **Contact**: [email protected] • +1 (913) 777-4443

## License

Apache License 2.0

Copyright 2025 Zen Research Authors

---

**Zen Research** - Building open, eco-friendly AI for everyone 🌱