
Tarsier Model Card

Model details

Model type: Tarsier-7b is a member of the Tarsier family, a set of open-source large-scale video-language models designed to generate high-quality video descriptions while retaining strong general video understanding capabilities (Tarsier-34b achieves SOTA results on 6 open benchmarks).

Base LLM: liuhaotian/llava-v1.6-vicuna-7b

Model date: Tarsier-7b was trained in June 2024.

Paper or resources for more information: https://github.com/bytedance/tarsier

License

lmsys/vicuna-7b-v1.5 license.

Where to send questions or comments about the model: https://github.com/bytedance/tarsier/issues

Intended use

Primary intended uses: The primary use of Tarsier is research on large multimodal models, especially video description.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

Training dataset

Tarsier adopts a two-stage training strategy:

  • Stage-1: Multi-task Pre-training on 13M samples
  • Stage-2: Multi-grained Instruction Tuning on 500K samples

In both stages, we freeze the ViT and train all parameters of the projection layer and the LLM.
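
As an illustration of this scheme, below is a minimal PyTorch sketch of freezing a vision encoder while keeping the projection layer and LLM trainable. The "vision_tower" parameter prefix is an assumption for illustration; Tarsier's actual module names may differ.

```python
import torch.nn as nn

def configure_trainable_params(model: nn.Module) -> None:
    """Freeze the ViT; leave the projection layer and LLM trainable."""
    for name, param in model.named_parameters():
        # "vision_tower" is a hypothetical prefix for the ViT submodule;
        # everything else (projection layer, LLM) stays trainable.
        param.requires_grad = not name.startswith("vision_tower")
```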

Evaluation dataset

How to Use

See the usage instructions at https://github.com/bytedance/tarsier?tab=readme-ov-file#usage
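
The linked repository is the authoritative reference. For orientation only, here is a hedged sketch of loading the checkpoint with the Hugging Face transformers library; the checkpoint id and the use of the generic Auto classes are assumptions, and the repository's own inference scripts may rely on custom model and processor code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

# Assumed checkpoint id; verify against the repository's README.
model_id = "omni-research/Tarsier-7b"

# FP16 matches the tensor type of the released weights;
# trust_remote_code is needed if the checkpoint ships custom model code.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
```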
