# Installation Guide - Fashion Inpainting System
## System Requirements
### Hardware Requirements
**Minimum Configuration:**
- **GPU**: 8GB VRAM (RTX 3070, RTX 4060 Ti, or equivalent)
- **RAM**: 16GB system memory
- **Storage**: 20GB free space (for models and cache)
- **OS**: Windows 10/11, Ubuntu 18.04+, macOS 10.15+

**Recommended Configuration:**
- **GPU**: 12GB+ VRAM (RTX 3080, RTX 4070, RTX 4080, or better)
- **RAM**: 32GB system memory
- **Storage**: SSD with 30GB+ free space
- **OS**: Ubuntu 20.04+ or Windows 11
### Software Prerequisites
- **Python**: 3.8, 3.9, 3.10, or 3.11
- **CUDA**: 11.8 or 12.1 (for GPU acceleration; matches the PyTorch builds installed below)
- **Git**: For repository management
- **Git LFS**: For large model files
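Before installing, you can run a quick pre-flight check. This is a minimal sketch (not part of the repository) that only confirms the interpreter version and that `git`, `git-lfs`, and an NVIDIA driver are visible on `PATH`; skip the `nvidia-smi` line on CPU-only or macOS machines.
```python
# Pre-flight check: a minimal sketch, not shipped with the repository
import shutil
import sys

assert (3, 8) <= sys.version_info[:2] <= (3, 11), "Python 3.8-3.11 required"
for tool in ("git", "git-lfs", "nvidia-smi"):
    status = "found" if shutil.which(tool) else "MISSING"
    print(f"{tool}: {status}")
```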
## Installation Methods
### Method 1: Quick Install (Recommended)
#### Step 1: Clone Repository
```bash
# Clone the repository
git clone https://huggingface.co/mlworks90/fashion-inpainting-system
cd fashion-inpainting-system

# Ensure Git LFS is initialized
git lfs install
git lfs pull
```
#### Step 2: Create Python Environment
```bash
# Using conda (recommended)
conda create -n fashion-inpainting python=3.10
conda activate fashion-inpainting

# Or using venv
python -m venv fashion-inpainting-env
source fashion-inpainting-env/bin/activate  # Linux/Mac
# fashion-inpainting-env\Scripts\activate   # Windows
```
#### Step 3: Install Dependencies
```bash
# Install PyTorch with CUDA support first
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# Install remaining dependencies
pip install -r requirements.txt

# Optional: Install xformers for memory efficiency
pip install xformers
```
#### Step 4: Verify Installation
```bash
python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA available: {torch.cuda.is_available()}')"
python -c "from controlnet_aux import OpenposeDetector; print('OpenPose: OK')"
python -c "from diffusers import StableDiffusionControlNetInpaintPipeline; print('Diffusers: OK')"
```
### Method 2: Development Install
For contributors and developers who want to modify the system:
```bash
# Clone with development tools
git clone https://huggingface.co/mlworks90/fashion-inpainting-system
cd fashion-inpainting-system

# Install in development mode
pip install -e .

# Install development dependencies
pip install -r requirements-dev.txt

# Install pre-commit hooks
pre-commit install
```
### Method 3: Docker Install
For containerized deployment:
```bash
# Build Docker image
docker build -t fashion-inpainting .

# Run with GPU support
docker run --gpus all -p 7860:7860 fashion-inpainting
```
## Configuration
### Model Download
On first run, the system will automatically download the required models:
```python
from src.fashion_inpainting import FashionInpaintingSystem

# This will download models automatically (~10GB)
system = FashionInpaintingSystem()
```
**Downloaded models include:**
- Stable Diffusion 1.5 base model (~4GB)
- ControlNet OpenPose model (~1.4GB)
- OpenPose detector weights (~200MB)
- VAE model (~300MB)
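If you prefer to fetch the weights ahead of time (for example on a machine with better connectivity), a minimal pre-fetch sketch is shown below. The repository IDs are assumptions based on the standard public checkpoints listed above; substitute whichever checkpoints your configuration actually references.
```python
# Optional pre-fetch sketch; repository IDs are assumed, not confirmed by the project
from huggingface_hub import snapshot_download

for repo_id in (
    "runwayml/stable-diffusion-v1-5",         # Stable Diffusion 1.5 base (assumed ID)
    "lllyasviel/control_v11p_sd15_openpose",  # ControlNet OpenPose (assumed ID)
    "lllyasviel/Annotators",                  # OpenPose detector weights
):
    snapshot_download(repo_id)
```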
### Custom Model Directory
```bash
# Set custom model cache directory
export HF_HOME="/path/to/your/model/cache"
```
```python
# Or set in Python, before any Hugging Face library is imported
import os
os.environ['HF_HOME'] = '/path/to/your/model/cache'
```
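To confirm the override is picked up, recent versions of `huggingface_hub` expose the resolved cache path; a quick check (the exact constant name may differ in older releases):
```python
# Check where models will be cached; HF_HOME must be set before this import
import os
os.environ["HF_HOME"] = "/path/to/your/model/cache"

from huggingface_hub import constants
print(constants.HF_HUB_CACHE)  # should point inside your custom directory
```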
### Safety Level Configuration
```python
# Configure default safety level
system = FashionInpaintingSystem(
    safety_level='fashion_moderate',  # Recommended default
    device='cuda'
)
```
## Troubleshooting
### Common Issues
#### Issue 1: CUDA Out of Memory
```bash
# Symptoms
RuntimeError: CUDA out of memory

# Solutions
1. Reduce batch size
2. Enable CPU offload:
   system.pipeline.enable_model_cpu_offload()
3. Use lower precision:
   torch_dtype=torch.float16
4. Close other GPU applications
```
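For solution 3, half-precision weights roughly halve VRAM use. The sketch below builds the underlying diffusers pipeline directly in `float16`; the checkpoint IDs are assumptions based on the models listed under Configuration, and the system's own constructor may expose an equivalent option.
```python
# Hedged sketch: loading the underlying diffusers pipeline in half precision
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose",  # assumed checkpoint ID
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint ID
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # keeps only the active sub-model on the GPU (solution 2)
```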
#### Issue 2: OpenPose Download Fails
```bash
# Symptoms
Error downloading OpenPose models

# Solutions
1. Check internet connection
2. Manual download:
   from controlnet_aux import OpenposeDetector
   detector = OpenposeDetector.from_pretrained('lllyasviel/Annotators')
3. Use proxy if behind firewall
```
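For solution 3, the Hugging Face download machinery goes through `requests`, which honours the standard proxy environment variables. A minimal sketch; the proxy URL is a placeholder for your own:
```python
# Route model downloads through a corporate proxy (placeholder URL)
import os
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"  # hypothetical proxy address

from controlnet_aux import OpenposeDetector
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
```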
#### Issue 3: Import Errors
```bash
# Symptoms
ModuleNotFoundError: No module named 'controlnet_aux'

# Solutions
1. Ensure virtual environment is activated
2. Reinstall dependencies:
   pip install -r requirements.txt --force-reinstall
3. Check Python version compatibility
```
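A quick way to verify point 1 is to check which interpreter is actually running; the path printed below should sit inside your `fashion-inpainting` environment:
```python
# Confirm you are running the interpreter from the expected environment
import sys

print(sys.executable)  # should point inside your conda env or venv
print(sys.version)     # should be 3.8-3.11
```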
#### Issue 4: Slow Generation
```bash
# Symptoms
Very slow image generation (>5 minutes)

# Solutions
1. Verify CUDA is working:
   torch.cuda.is_available()
2. Enable xformers:
   pip install xformers
   system.pipeline.enable_xformers_memory_efficient_attention()
3. Use optimized settings:
   num_inference_steps=30  # Reduce from 50
```
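For point 1, the quickest confirmation is that CUDA is visible and the pipeline actually lives on the GPU (`system.pipeline` is the attribute used elsewhere in this guide):
```python
# Confirm generation runs on the GPU rather than the CPU
# `system` is the FashionInpaintingSystem instance created earlier
import torch

print(torch.cuda.is_available())  # should print True
print(system.pipeline.device)     # should print cuda:0, not cpu
```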
### Performance Optimization
#### Memory Optimization
```python
# Enable memory-efficient features
system.pipeline.enable_model_cpu_offload()
system.pipeline.enable_attention_slicing()

# For very low VRAM (~6GB), use sequential offload instead of model offload
system.pipeline.enable_sequential_cpu_offload()
```
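Two further standard diffusers memory levers that may help at higher resolutions (availability depends on your diffusers version):
```python
# Additional diffusers options; `system` is the instance created earlier
system.pipeline.enable_vae_slicing()  # decode the VAE output in slices
system.pipeline.enable_vae_tiling()   # tile VAE decoding for large images
```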
#### Speed Optimization
```python
# Use compiled model (PyTorch 2.0+)
system.pipeline.unet = torch.compile(system.pipeline.unet)

# Reduce inference steps for speed
result = system.transform_outfit(
    source_image="input.jpg",
    target_prompt="red dress",
    num_inference_steps=30  # Faster than 50
)
```
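Note that the first call after `torch.compile` is noticeably slower while the model is traced and compiled; the speed-up only shows on subsequent generations, so compilation mainly pays off for batch or long-running server workloads.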
## Verification Tests
### Basic Functionality Test
```python
# test_basic.py
from src.fashion_inpainting import FashionInpaintingSystem
from PIL import Image

# Initialize system
system = FashionInpaintingSystem(safety_level='fashion_moderate')

# Load test image
test_image = Image.open('examples/test_input.jpg')

# Test pose extraction
pose_image = system.extract_pose(test_image)
print("✓ Pose extraction working")

# Test basic generation
result = system.transform_outfit(
    source_image=test_image,
    target_prompt="blue jeans and white t-shirt",
    num_inference_steps=20  # Quick test
)
print("✓ Generation working")

result.save('test_output.jpg')
print("✓ Installation verified successfully!")
```
### Run Verification
```bash
python test_basic.py
```
## Next Steps
After successful installation:
1. **Review Documentation**: Read the [API Reference](api_reference.md)
2. **Check Examples**: Explore the `examples/` directory
3. **Review Safety**: Read the [Safety Guidelines](../SAFETY_GUIDELINES.md)
4. **Test with Your Images**: Try the system with your own photos
5. **Explore Advanced Features**: Learn about custom checkpoints and parameters
## Getting Help
- **Issues**: Report problems on [GitHub Issues](https://github.com/your-org/fashion-inpainting-system/issues)
- **Discussions**: Ask questions in [Discussions](https://github.com/your-org/fashion-inpainting-system/discussions)
- **Commercial Support**: Contact [[email protected]](mailto:[email protected])
## System Information
For support requests, please include:
```bash
# Generate system information
python -c "
import torch, sys, platform
from diffusers import __version__ as diffusers_version
from controlnet_aux import __version__ as controlnet_aux_version
print(f'Python: {sys.version}')
print(f'Platform: {platform.platform()}')
print(f'PyTorch: {torch.__version__}')
print(f'CUDA Available: {torch.cuda.is_available()}')
if torch.cuda.is_available():
    print(f'GPU: {torch.cuda.get_device_name(0)}')
    print(f'VRAM: {torch.cuda.get_device_properties(0).total_memory // 1024**3}GB')
print(f'Diffusers: {diffusers_version}')
print(f'ControlNet-AUX: {controlnet_aux_version}')
"
```