PERFORM Models

This repository contains the code and models developed during the PERFORM project for automated detection and segmentation of photovoltaic panels, rooftops, and trees in aerial imagery, using deep learning models served with NVIDIA Triton Inference Server.

Prerequisites

  • Docker and Docker Compose
  • NVIDIA GPU with Docker GPU support (recommended)
  • At least 32GB RAM for Triton server
  • Input imagery in GeoTIFF format

Installation and Usage

1. Clone the repository

git clone <repository-url>
cd perform-models

2. Set up environment

# Set your user and group IDs for proper file permissions
# (note: UID is read-only in bash; if the export fails, set these
# values in a .env file instead, as shown under Configuration)
export UID=$(id -u)
export GID=$(id -g)

3. Prepare your data

  • Place input GeoTIFF files in the inputs/ directory
  • Results will be saved to the outputs/ directory

4. Run inference

PV Panels Detection

# Basic usage
docker-compose run cli panels --model_name perform_pv_panels

# With custom resolution (default: 25cm/pixel)
docker-compose run cli panels --model_name perform_pv_panels --target_resolution 0.3

# Custom input/output paths
docker-compose run cli panels \
  --model_name perform_pv_panels \
  --input_path /app/inputs \
  --output_path /app/outputs
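The `--target_resolution` value is in metres per pixel, so `0.3` means 30 cm/pixel. As a rough sketch of what resampling to a coarser resolution implies for raster dimensions (the actual resampling happens inside the pipeline; `resample_shape` is an illustrative helper, not part of this CLI):

```python
# Illustrative only: how a change of ground resolution scales raster dimensions.
def resample_shape(width, height, src_res, target_res):
    """Return the (width, height) a raster would have after resampling
    from src_res to target_res (both in metres per pixel)."""
    scale = src_res / target_res
    return round(width * scale), round(height * scale)

# A 10000x10000 tile at 10 cm/pixel resampled to the default 25 cm/pixel:
print(resample_shape(10000, 10000, 0.10, 0.25))  # -> (4000, 4000)
```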

Rooftop Detection

# Basic usage
docker-compose run cli rooftop --model_name perform_rooftop

# With custom resolution (default: 25cm/pixel)
docker-compose run cli rooftop --model_name perform_rooftop --target_resolution 0.3

# Custom input/output paths
docker-compose run cli rooftop \
  --model_name perform_rooftop \
  --input_path /app/inputs \
  --output_path /app/outputs

Tree Segmentation

# Basic usage
docker-compose run cli trees

# With custom model
docker-compose run cli trees --model_name custom_tree_model

# Custom paths
docker-compose run cli trees \
  --input_path /app/inputs \
  --output_path /app/outputs

5. View help

# General help
docker-compose run cli --help

# Subcommand help
docker-compose run cli panels --help
docker-compose run cli trees --help

Project Structure

β”œβ”€β”€ docker/                    # Docker configuration files
β”‚   β”œβ”€β”€ python.dockerfile     # Python CLI container
β”‚   └── triton.dockerfile     # Triton server container
β”œβ”€β”€ inputs/                   # Input GeoTIFF files (mount point)
β”œβ”€β”€ outputs/                  # Output results (mount point)
β”œβ”€β”€ model_repository/         # Triton model repository
β”‚   └── ...
β”œβ”€β”€ src/perform/             # Python package source
β”œβ”€β”€ docker-compose.yml      # Main compose configuration
β”œβ”€β”€ docker-compose.gpu.yml  # GPU-specific overrides
β”œβ”€β”€ pyproject.toml          # Python package configuration
└── requirements.txt        # Python dependencies

Models

PV Panels Detection

  • Input: RGB aerial imagery (8-10cm/pixel, automatically resampled to 25cm/pixel)
  • Outputs:
    • Segmentation masks (GeoTIFF)
    • Vectorized polygons with building regularization (Parquet)
  • Model: perform_pv_panels

Rooftop Detection

  • Input: RGB aerial imagery (8-10cm/pixel, automatically resampled to 25cm/pixel)
  • Outputs:
    • Segmentation masks (GeoTIFF) with roof orientation
      1. Background
      2. Flat
      3. North
      4. East
      5. South
      6. West
    • Binary segmentation masks (GeoTIFF) with roof edges
  • Model: perform_rooftop

Tree Segmentation

  • Input: RGB aerial imagery (TIFF format)
  • Output: Tree polygons (Parquet format)
  • Model: perform_tree_crown

Configuration

Environment Variables

Create a .env file:

UID=1000
GID=1000

GPU Support

For GPU acceleration, use the GPU compose override:

docker-compose -f docker-compose.yml -f docker-compose.gpu.yml run cli panels --model_name perform_pv_panels
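The contents of docker-compose.gpu.yml are specific to this repository; a typical Compose GPU override for an NVIDIA runtime looks like the following (illustrative sketch, including the `triton` service name, not the file's actual contents):

```yaml
# Hypothetical GPU override; the real docker-compose.gpu.yml may differ.
services:
  triton:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```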

Server Configuration

  • Triton server runs on ports 8000 (HTTP), 8001 (gRPC), 8002 (metrics)
  • Health check endpoint: http://localhost:8000/v2/health/ready
  • The CLI automatically waits for Triton to be ready before starting inference
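The readiness check can also be done by hand against the health endpoint above. A minimal sketch using only the standard library, assuming the default HTTP port (`is_ready` is an illustrative helper, not part of this package):

```python
import urllib.error
import urllib.request

def is_ready(url="http://localhost:8000/v2/health/ready", timeout=2.0):
    """Return True if the Triton readiness endpoint answers 200 OK."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# False while the server is still starting (or not running), True once ready.
print(is_ready())
```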

Output Formats

  • Rasters: GeoTIFF format with preserved georeferencing
  • Vectors: Parquet format with geometry and attributes
  • Naming: Output files use the same name as input files with appropriate extensions
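Since outputs reuse the input file name with a different extension, mapping an input tile to its expected output paths can be sketched as follows (the `.tif`/`.parquet` pairing mirrors the raster/vector formats above; the exact suffixes the CLI emits are an assumption here):

```python
from pathlib import Path

def expected_outputs(input_tif, output_dir="/app/outputs"):
    """Map an input GeoTIFF to the output paths this naming scheme implies.
    Suffixes are illustrative; the CLI's actual names may differ."""
    stem = Path(input_tif).stem
    out = Path(output_dir)
    return {"raster": out / f"{stem}.tif", "vector": out / f"{stem}.parquet"}

print(expected_outputs("/app/inputs/tile_001.tif"))
```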

Development

Local Installation

# Install in development mode
pip install -e .

# Run locally (requires Triton server)
perform panels --model_name perform_pv_panels --server_url localhost:8000

Adding New Models

  1. Add model to model_repository/ following Triton format
  2. Implement inference logic in src/perform/cli.py
  3. Add new subcommand to CLI
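For step 1, each model in a Triton model repository lives in its own directory with versioned subfolders and a `config.pbtxt`. A minimal sketch for a hypothetical ONNX segmentation model (the model name, backend, shapes, and dtypes are illustrative assumptions):

```
model_repository/
└── my_new_model/
    β”œβ”€β”€ config.pbtxt
    └── 1/
        └── model.onnx

# config.pbtxt (illustrative)
name: "my_new_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [ { name: "input", data_type: TYPE_FP32, dims: [3, 512, 512] } ]
output [ { name: "output", data_type: TYPE_FP32, dims: [1, 512, 512] } ]
```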

License

This project is licensed under CC BY-NC-SA 4.0 - see the LICENSE file for details.
