# Coda-Robotics/OpenVLA-ER-Select-Book-LoRA
## Model Description
This repository contains LoRA adapter weights only (the base OpenVLA model is required) for OpenVLA, fine-tuned on the select_book dataset.
## Training Details
- **Dataset:** select_book
- **Number of Episodes:** 479
- **Batch Size:** 8
- **Training Steps:** 20000
- **Learning Rate:** 2e-5
- **LoRA Configuration:**
  - Rank: 32
  - Dropout: 0.0
  - Target Modules: all-linear
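
The LoRA hyperparameters above correspond roughly to the following `peft` `LoraConfig` (a sketch, not the exact training config; `lora_alpha` is not stated in this card and is left at the library default):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                         # LoRA rank, as listed above
    lora_dropout=0.0,             # no LoRA dropout during fine-tuning
    target_modules="all-linear",  # adapt all linear layers
)
```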
## Usage
```python
from transformers import AutoProcessor, AutoModelForVision2Seq

# Load the model and processor (OpenVLA ships custom model code,
# so trust_remote_code=True is required)
processor = AutoProcessor.from_pretrained(
    "Coda-Robotics/OpenVLA-ER-Select-Book-LoRA", trust_remote_code=True
)
model = AutoModelForVision2Seq.from_pretrained(
    "Coda-Robotics/OpenVLA-ER-Select-Book-LoRA", trust_remote_code=True
)

# Process an image and generate
image = ...  # Load your image
inputs = processor(images=image, return_tensors="pt")
outputs = model.generate(**inputs)
text = processor.decode(outputs[0], skip_special_tokens=True)
```
## Using with PEFT
To use this adapter with the base OpenVLA model:
```python
from transformers import AutoModelForVision2Seq
from peft import PeftModel

# Load the base model (OpenVLA ships custom model code,
# so trust_remote_code=True is required)
base_model = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b", trust_remote_code=True
)

# Load the LoRA adapter
adapter_model = PeftModel.from_pretrained(
    base_model, "Coda-Robotics/OpenVLA-ER-Select-Book-LoRA"
)

# Merge weights for faster inference (optional)
merged_model = adapter_model.merge_and_unload()
```