Image-Text-to-Text
Transformers
English

Retrospective Learning from Interactions

This repository contains the lil-lab/respect model, based on the ACL paper Retrospective Learning from Interactions. For more resources, please see https://lil-lab.github.io/respect and https://github.com/lil-lab/respect.

Sample Usage

To get started with the model, follow these steps:

1. Set Up the Environment

Prepare your conda environment:

conda create -n respect python=3.9.18
conda activate respect
pip install -r requirements.txt  # run from the root of the cloned lil-lab/respect repository
pip install -e .

2. Download Data

from datasets import load_dataset

ds = load_dataset("lil-lab/respect", name="turn", split="train")
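
As a quick sanity check, you can inspect the loaded split; the snippet below uses only generic datasets operations, and the column names are whatever the "turn" configuration defines:

# Inspect the loaded split
print(len(ds))            # number of examples in the train split
print(ds.column_names)    # schema of the "turn" configuration
print(ds[0])              # one example turn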

3. Load Model Checkpoints

Download checkpoints and load the model using transformers and peft:

import torch
from transformers import Idefics2ForConditionalGeneration
from peft import PeftModel

checkpoint = "HuggingFaceM4/idefics2-8b"
model_id = 'lil-lab/respect'

model = Idefics2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16)  # base IDEFICS2-8B weights
peft_model = PeftModel.from_pretrained(
    model, model_id, adapter_name="r6_bp", revision="r6_bp")  # attach the LoRA adapter from lil-lab/respect
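
Generation then follows the standard Idefics2 chat workflow. The snippet below is a minimal sketch, not the exact prompt format used in the paper: the image path and the question text are placeholders you should replace with your own inputs.

from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(checkpoint)
peft_model.eval()

# Placeholder image and question; the paper's reference game uses its own prompt format
image = Image.open("example.png")
messages = [
    {"role": "user",
     "content": [{"type": "image"},
                 {"type": "text", "text": "Which tangram does the description refer to?"}]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

with torch.no_grad():
    generated_ids = peft_model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])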

Reproducibility

To reproduce the plots from the paper, run analysis/plots.ipynb in the GitHub repository.
