# Coda-Robotics/OpenVLA-ER-Select-Book-LoRA
## Model Description

This repository contains LoRA adapter weights only (the base OpenVLA model is required) for OpenVLA, fine-tuned on the select_book dataset.
## Training Details

- Dataset: select_book
- Number of Episodes: 479
- Batch Size: 8
- Training Steps: 20000
- Learning Rate: 2e-5
- LoRA Configuration:
  - Rank: 32
  - Dropout: 0.0
  - Target Modules: all-linear
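
For reference, the LoRA settings above correspond roughly to the following PEFT configuration (a minimal sketch; `lora_alpha` is not stated in this card, so the value shown is an assumed placeholder, and the training loop itself is omitted):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForVision2Seq

# Base model (trust_remote_code is required for OpenVLA's custom model classes)
base_model = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b", trust_remote_code=True
)

# LoRA settings from this card: rank 32, dropout 0.0, all linear layers targeted
lora_config = LoraConfig(
    r=32,
    lora_dropout=0.0,
    target_modules="all-linear",
    lora_alpha=16,  # not listed in the card; assumed placeholder
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```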
## Usage

```python
from transformers import AutoProcessor, AutoModelForVision2Seq

# Load the model and processor (trust_remote_code is required for OpenVLA's custom classes)
processor = AutoProcessor.from_pretrained(
    "Coda-Robotics/OpenVLA-ER-Select-Book-LoRA", trust_remote_code=True
)
model = AutoModelForVision2Seq.from_pretrained(
    "Coda-Robotics/OpenVLA-ER-Select-Book-LoRA", trust_remote_code=True
)

# Process an image together with a task prompt (OpenVLA expects both text and image)
image = ...  # Load your image
prompt = "In: What action should the robot take to select the book?\nOut:"
inputs = processor(prompt, image, return_tensors="pt")
outputs = model.generate(**inputs)
text = processor.decode(outputs[0], skip_special_tokens=True)
```
## Using with PEFT

To use this adapter with the base OpenVLA model:

```python
from transformers import AutoModelForVision2Seq
from peft import PeftModel

# Load the base model (trust_remote_code is required for OpenVLA's custom classes)
base_model = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b", trust_remote_code=True
)

# Load the LoRA adapter
adapter_model = PeftModel.from_pretrained(
    base_model, "Coda-Robotics/OpenVLA-ER-Select-Book-LoRA"
)

# Merge the adapter weights into the base model for faster inference (optional)
merged_model = adapter_model.merge_and_unload()
```
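
After merging, the checkpoint behaves like a regular OpenVLA model. Below is a minimal inference sketch assuming the standard OpenVLA prompt format and the `predict_action` interface provided by the base model's custom code; the instruction text, image path, and the `select_book` unnorm_key are illustrative assumptions that depend on how the fine-tuned normalization statistics were registered:

```python
import torch
from PIL import Image
from transformers import AutoProcessor

# Processor comes from the base model (trust_remote_code for OpenVLA's custom processor)
processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)

merged_model = merged_model.to("cuda:0", dtype=torch.bfloat16)

# Standard OpenVLA prompt format; the instruction below is only an example for this task
image = Image.open("frame.png")  # placeholder path
prompt = "In: What action should the robot take to select the book?\nOut:"

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)

# unnorm_key is an assumption about how the select_book statistics are stored
action = merged_model.predict_action(**inputs, unnorm_key="select_book", do_sample=False)
```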