PERFORM Models
This repository contains code and models implemented during the PERFORM project for automated detection and segmentation of photovoltaic panels, rooftops, and trees in aerial imagery, using deep learning models deployed with NVIDIA Triton Inference Server.
Prerequisites
- Docker and Docker Compose
- NVIDIA GPU with Docker GPU support (recommended)
- At least 32GB RAM for Triton server
- Input imagery in GeoTIFF format
Installation and Usage
1. Clone the repository
```bash
git clone <repository-url>
cd perform-models
```
2. Set up environment
```bash
# Set your user and group IDs for proper file permissions
export UID=$(id -u)
export GID=$(id -g)
```
3. Prepare your data
- Place input GeoTIFF files in the `inputs/` directory
- Results will be saved to the `outputs/` directory
4. Run inference
PV Panels Detection
```bash
# Basic usage
docker-compose run cli panels --model_name perform_pv_panels

# With custom resolution (default: 25cm/pixel)
docker-compose run cli panels --model_name perform_pv_panels --target_resolution 0.3

# Custom input/output paths
docker-compose run cli panels \
  --model_name perform_pv_panels \
  --input_path /app/inputs \
  --output_path /app/outputs
```
Rooftop Detection
```bash
# Basic usage
docker-compose run cli rooftop --model_name perform_rooftop

# With custom resolution (default: 25cm/pixel)
docker-compose run cli rooftop --model_name perform_rooftop --target_resolution 0.3

# Custom input/output paths
docker-compose run cli rooftop \
  --model_name perform_rooftop \
  --input_path /app/inputs \
  --output_path /app/outputs
```
Tree Segmentation
```bash
# Basic usage
docker-compose run cli trees

# With custom model
docker-compose run cli trees --model_name custom_tree_model

# Custom paths
docker-compose run cli trees \
  --input_path /app/inputs \
  --output_path /app/outputs
```
5. View help
```bash
# General help
docker-compose run cli --help

# Subcommand help
docker-compose run cli panels --help
docker-compose run cli rooftop --help
docker-compose run cli trees --help
```
Project Structure
```
├── docker/                    # Docker configuration files
│   ├── python.dockerfile      # Python CLI container
│   └── triton.dockerfile      # Triton server container
├── inputs/                    # Input GeoTIFF files (mount point)
├── outputs/                   # Output results (mount point)
├── model_repository/          # Triton model repository
│   └── ...
├── src/perform/               # Python package source
├── docker-compose.yml         # Main compose configuration
├── docker-compose.gpu.yml     # GPU-specific overrides
├── pyproject.toml             # Python package configuration
└── requirements.txt           # Python dependencies
```
Models
PV Panels Detection
- Input: RGB aerial imagery (8-10 cm/pixel, automatically resampled to 25 cm/pixel)
- Outputs:
  - Segmentation masks (GeoTIFF)
  - Vectorized polygons with building regularization (Parquet)
- Model: `perform_pv_panels`
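The automatic resampling described above amounts to scaling the raster by the ratio of source to target resolution. A minimal sketch of that arithmetic (`resampled_shape` is an illustrative helper of ours, not part of the project):

```python
def resampled_shape(width, height, src_res_m, target_res_m=0.25):
    """Raster dimensions after resampling from src_res_m to target_res_m (meters/pixel).

    Going from a finer to a coarser resolution shrinks the pixel grid
    by the ratio src_res_m / target_res_m.
    """
    scale = src_res_m / target_res_m
    return round(width * scale), round(height * scale)

# A 10000 x 10000 tile at 10 cm/pixel becomes 4000 x 4000 at 25 cm/pixel.
print(resampled_shape(10000, 10000, 0.10))  # (4000, 4000)
```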
Rooftop Detection
- Input: RGB aerial imagery (8-10 cm/pixel, automatically resampled to 25 cm/pixel)
- Outputs:
  - Segmentation masks (GeoTIFF) with roof orientation:
    - Background
    - Flat
    - North
    - East
    - South
    - West
  - Binary segmentation masks (GeoTIFF) with roof edges
- Model: `perform_rooftop`
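If the orientation mask encodes the classes above as consecutive pixel values, decoding it is a table lookup. Note this encoding (0-5, in the listed order) is an assumption for illustration, not documented behavior of `perform_rooftop`:

```python
# Assumed class encoding for the orientation mask; the actual pixel
# values written by perform_rooftop may differ.
ORIENTATION_CLASSES = ("Background", "Flat", "North", "East", "South", "West")

def decode_orientation(pixel_value: int) -> str:
    """Map an orientation-mask pixel value to its class label."""
    return ORIENTATION_CLASSES[pixel_value]

print(decode_orientation(0))  # Background
```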
Tree Segmentation
- Input: RGB aerial imagery (TIFF format)
- Output: Tree polygons (Parquet format)
- Model: `perform_tree_crown`
Configuration
Environment Variables
Create a `.env` file:
```
UID=1000
GID=1000
```
GPU Support
For GPU acceleration, use the GPU compose override:
```bash
docker-compose -f docker-compose.yml -f docker-compose.gpu.yml run cli panels --model_name perform_pv_panels
```
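A typical GPU override in Compose looks like the following (an illustrative sketch; the service name `triton` and the exact contents of the repository's `docker-compose.gpu.yml` are assumptions):

```yaml
services:
  triton:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```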
Server Configuration
- Triton server runs on ports 8000 (HTTP), 8001 (gRPC), 8002 (metrics)
- Health check endpoint: `http://localhost:8000/v2/health/ready`
- The CLI automatically waits for Triton to be ready before starting inference
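The wait-for-ready behavior can be sketched as a polling loop against the health endpoint. `wait_for_triton` and its injectable `probe` hook are illustrative helpers of ours, not the project's actual implementation:

```python
import time
import urllib.request

def wait_for_triton(url="http://localhost:8000/v2/health/ready",
                    timeout_s=60.0, interval_s=2.0, probe=None):
    """Poll Triton's readiness endpoint until it answers HTTP 200 or we time out.

    `probe` is injectable for testing; by default it issues a real HTTP GET.
    Returns True once the server reports ready, False on timeout.
    """
    def default_probe():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    probe = probe or default_probe
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False
```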
Output Formats
- Rasters: GeoTIFF format with preserved georeferencing
- Vectors: Parquet format with geometry and attributes
- Naming: Output files use the same name as input files with appropriate extensions
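The naming convention can be sketched as a small path helper. Only "same stem, appropriate extension" comes from the text above; `output_paths` and the exact suffixes are illustrative assumptions:

```python
from pathlib import Path

def output_paths(input_tif, output_dir="/app/outputs"):
    """Derive output file paths from an input GeoTIFF name.

    Follows the stated convention: outputs reuse the input file's
    name, with an extension matching the output format.
    """
    stem = Path(input_tif).stem
    out = Path(output_dir)
    return {
        "mask": out / f"{stem}.tif",          # raster mask (GeoTIFF)
        "polygons": out / f"{stem}.parquet",  # vectorized polygons (Parquet)
    }

print(output_paths("/app/inputs/tile_001.tif")["polygons"])  # /app/outputs/tile_001.parquet
```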
Development
Local Installation
```bash
# Install in development mode
pip install -e .

# Run locally (requires a running Triton server)
perform panels --model_name perform_pv_panels --server_url localhost:8000
```
Adding New Models
1. Add the model to `model_repository/` following the Triton model repository format
2. Implement inference logic in `src/perform/cli.py`
3. Add a new subcommand to the CLI
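For reference, Triton expects each model in the repository to follow its standard layout (a generic sketch; `my_new_model` and `model.onnx` are placeholders, and the backend each PERFORM model uses is not specified here):

```
model_repository/
└── my_new_model/           # placeholder model name
    ├── config.pbtxt        # model configuration (name, backend, inputs, outputs)
    └── 1/                  # version directory
        └── model.onnx      # model file; filename depends on the backend
```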
License
This project is licensed under CC BY-NC-SA 4.0 - see the LICENSE file for details.