🔥 InternVL3-8B-FP8-Dynamic: Optimized Vision-Language Model 🔥

This is an FP8 dynamic quantized version of OpenGVLab/InternVL3-8B, optimized for high-performance inference with vLLM. Dynamic FP8 quantization computes activation scales at runtime and requires no calibration data, making the model easy to produce and deploy while achieving significant speedup with minimal accuracy degradation on vision-language tasks.

🔧 Usage

With vLLM (Recommended)

from vllm import LLM, SamplingParams
from PIL import Image

# Load the quantized model
model = LLM(
    model="brandonbeiler/InternVL3-8B-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=8192,
    tensor_parallel_size=1,  # Adjust based on your GPU setup
)

# Pair the prompt with an actual image; the <image> token marks where it is inserted.
# "example.jpg" is a placeholder path. For best results, format the prompt with the
# model's chat template (or use LLM.chat, shown below).
image = Image.open("example.jpg")
sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
response = model.generate(
    {"prompt": "<image>\nDescribe this image.", "multi_modal_data": {"image": image}},
    sampling_params,
)
print(response[0].outputs[0].text)
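
Alternatively, LLM.chat applies the model's chat template for you and accepts OpenAI-style multimodal messages. A minimal sketch (the image URL is a placeholder):

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
response = model.chat(messages, sampling_params)
print(response[0].outputs[0].text)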

🚀 Key Features

  • Vision-Language Optimized: Specialized quantization recipe that preserves visual understanding
  • vLLM Ready: Seamless integration with vLLM for production deployment
  • Memory Efficient: ~50% weight-memory reduction compared to the BF16 original
  • Performance Boost: Significantly faster inference on H100/L40S GPUs
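
The ~50% figure is back-of-envelope arithmetic: weights drop from two bytes per parameter (BF16) to roughly one (FP8), which is also where the ~7.8 GB requirement below comes from. A quick sanity check:

params = 7.95e9                 # parameter count reported for this checkpoint
bf16_gb = params * 2 / 1024**3  # 2 bytes per parameter in BF16
fp8_gb = params * 1 / 1024**3   # ~1 byte per parameter in FP8
# Actual usage is slightly higher than the FP8 estimate because the
# vision tower and other preserved components stay in BF16.
print(f"BF16 weights: ~{bf16_gb:.1f} GiB, FP8 weights: ~{fp8_gb:.1f} GiB")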

📊 Model Details

  • Original Model: OpenGVLab/InternVL3-8B
  • Quantized Model: brandonbeiler/InternVL3-8B-FP8-Dynamic
  • Model Size: 7.95B parameters (BF16 + FP8 E4M3 tensors)
  • Quantization Method: FP8 Dynamic (W8A8)
  • Quantization Library: LLM Compressor v0.5.2.dev112+g6800f811
  • Quantized by: brandonbeiler

πŸ—οΈ Technical Specifications

Hardware Requirements

  • Inference: ~7.8 GB VRAM for weights, plus context (KV cache)
  • Supported GPUs: H100, L40S, A100 (80GB), RTX 4090 (2x for tensor parallelism)
  • GPU Architecture: Ada Lovelace, Hopper (for optimal FP8 performance)
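
Native FP8 tensor-core kernels require compute capability 8.9 (Ada Lovelace) or 9.0 (Hopper). A small sketch for checking your GPU:

import torch

major, minor = torch.cuda.get_device_capability()
print(f"compute capability: {major}.{minor}")
# 8.9+ gets native FP8 kernels; on older GPUs such as the A100 (8.0),
# vLLM can still load the checkpoint but falls back to non-native paths.
print("native FP8 support:", (major, minor) >= (8, 9))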

Quantization Details

  • Weights: FP8 E4M3 with static per-channel scales
  • Activations: FP8 E4M3 with dynamic per-token scales
  • Preserved Components: vision tower, embeddings, normalization layers, mlp1 (kept in original precision; see the recipe sketch below)
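
For reference, a checkpoint like this is typically produced with LLM Compressor's FP8_DYNAMIC scheme. The sketch below is illustrative: the ignore patterns are assumptions inferred from the preserved components listed above, not the author's verbatim recipe.

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL3-8B", torch_dtype="auto", trust_remote_code=True
)

# FP8_DYNAMIC = static per-channel FP8 weights + dynamic per-token FP8 activations;
# no calibration data is required. Ignore patterns here are illustrative only.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["re:.*vision_model.*", "re:.*mlp1.*", "lm_head"],
)

oneshot(model=model, recipe=recipe)
model.save_pretrained("InternVL3-8B-FP8-Dynamic", save_compressed=True)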

🔬 Package Versions

This model was created using:

llmcompressor==0.5.2.dev112+g6800f811
compressed-tensors==latest
transformers==4.52.4
torch==2.7.0
vllm==0.9.1

Quantized with ❤️ using LLM Compressor for the open-source community
