
vLLM Inference Scripts

Ready-to-run UV scripts for GPU-accelerated inference using vLLM.

These scripts declare their dependencies with UV's inline script metadata - just run them with uv run and everything installs automatically!

📋 Available Scripts

classify-dataset.py

Batch text classification using BERT-style encoder models with vLLM's optimized inference engine.

Note: This script is specifically for encoder-only classification models, not generative LLMs.

Features:

  • 🚀 High-throughput batch processing
  • 🏷️ Automatic label mapping from model config
  • 📊 Confidence scores for predictions
  • 🤗 Direct integration with Hugging Face Hub
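
Under the hood, the flow is roughly: load the dataset, read the label names from the model config, run vLLM in classification mode, and push the predictions back to the Hub. The sketch below is a simplified illustration rather than the script itself; it assumes vLLM's classify task and hard-codes a text input column and label/score output columns:

# Simplified sketch of the classification flow (not the exact script).
from datasets import load_dataset
from transformers import AutoConfig
from vllm import LLM

model_id = "davanstrien/ModernBERT-base-is-new-arxiv-dataset"  # any encoder classifier
ds = load_dataset("username/input-dataset", split="train")

# Label mapping comes straight from the model's config (id2label).
id2label = AutoConfig.from_pretrained(model_id).id2label

llm = LLM(model=model_id, task="classify")
outputs = llm.classify(ds["text"])  # batched, high-throughput inference

# Each output carries one probability per label; keep the argmax and its score.
labels, scores = [], []
for out in outputs:
    probs = out.outputs.probs
    best = max(range(len(probs)), key=probs.__getitem__)
    labels.append(id2label[best])
    scores.append(probs[best])

ds = ds.add_column("label", labels).add_column("score", scores)
ds.push_to_hub("username/output-dataset")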

Usage:

# Local execution (requires GPU)
uv run classify-dataset.py \
    davanstrien/ModernBERT-base-is-new-arxiv-dataset \
    username/input-dataset \
    username/output-dataset \
    --inference-column text \
    --batch-size 10000

HF Jobs execution:

hfjobs run \
    --flavor l4x1 \
    --secret HF_TOKEN=$(python -c "from huggingface_hub import HfFolder; print(HfFolder.get_token())") \
    vllm/vllm-openai:latest \
    /bin/bash -c '
        uv run https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
            davanstrien/ModernBERT-base-is-new-arxiv-dataset \
            username/input-dataset \
            username/output-dataset \
            --inference-column text \
            --batch-size 100000
    ' \
    --project vllm-classify \
    --name my-classification-job
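
In either mode, the enriched dataset ends up in the output repo on the Hub and can be pulled back down with datasets. Column names below are illustrative; check the script's output for the exact schema:

# Load the pushed results to inspect predictions and confidence scores.
from datasets import load_dataset

preds = load_dataset("username/output-dataset", split="train")
print(preds[0])  # e.g. {"text": ..., "label": ..., "score": ...}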

🎯 Requirements

All scripts in this collection require:

  • NVIDIA GPU with CUDA support
  • Python 3.10+
  • UV package manager (install UV)
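
A quick way to confirm the GPU requirement locally is a one-off torch probe (torch is already pulled in by the scripts' dependencies):

# Confirm a CUDA-capable NVIDIA GPU is visible before launching a script.
import torch

assert torch.cuda.is_available(), "No CUDA device found; these scripts need an NVIDIA GPU"
print(torch.cuda.get_device_name(0))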

🚀 Performance Tips

GPU Selection

  • L4 GPU (--flavor l4x1): Best value for classification tasks
  • A10 GPU (--flavor a10): Higher memory for larger models
  • Adjust batch size based on GPU memory

Batch Sizes

  • Local GPUs: Start with 10,000 and adjust based on memory
  • HF Jobs: Can use larger batches (50,000-100,000) with cloud GPUs
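
The --batch-size flag presumably just bounds how many rows are handed to vLLM per call, so a chunked loop like the sketch below keeps peak memory in check (illustrative only; the actual script's batching may differ):

# Illustrative batching helper; batch_size caps how many texts reach vLLM at once.
def iter_batches(texts, batch_size=10_000):
    for start in range(0, len(texts), batch_size):
        yield texts[start:start + batch_size]

# Hypothetical usage, reusing the llm and ds objects from the earlier sketch:
# results = []
# for batch in iter_batches(ds["text"], batch_size=10_000):
#     results.extend(llm.classify(batch))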

📚 About vLLM

vLLM is a high-throughput inference engine optimized for:

  • Fast model serving with PagedAttention
  • Efficient batch processing
  • Support for various model architectures
  • Seamless integration with Hugging Face models
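
For reference, vLLM's offline batch API is only a few lines. The snippet below is a generic generation example, not one of the scripts in this repo, and the model ID is just an illustration:

# Generic vLLM offline batch generation with a Hub model (illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # any Hub model ID works
params = SamplingParams(temperature=0.0, max_tokens=64)

prompts = ["Summarize: vLLM is a high-throughput inference engine."]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)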

🔧 Technical Details

UV Script Benefits

  • Zero setup: Dependencies install automatically on first run
  • Reproducible: Locked dependencies ensure consistent behavior
  • Self-contained: Everything needed is in the script file
  • Direct execution: Run from local files or URLs

Dependencies

Scripts use UV's inline metadata with custom package indexes for vLLM's optimized builds:

# /// script
# requires-python = ">=3.10"
# dependencies = ["vllm", "datasets", "torch", ...]
# 
# [[tool.uv.index]]
# url = "https://flashinfer.ai/whl/cu126/torch2.6"
# 
# [[tool.uv.index]]
# url = "https://wheels.vllm.ai/nightly"
# ///

Docker Image

For HF Jobs, we use the official vLLM Docker image: vllm/vllm-openai:latest

This image includes:

  • Pre-installed CUDA libraries
  • vLLM and all dependencies
  • UV package manager
  • Optimized for GPU inference

πŸ“ Contributing

Have a vLLM script to share? We welcome contributions that:

  • Solve real inference problems
  • Include clear documentation
  • Follow UV script best practices
  • Include HF Jobs examples

🔗 Resources
