MoMA Model Card

Model details

Model type: MoMA is an open-source image personalization model. It introduces new attention layers and a multimodal large language model (MLLM) fine-tuned from LLaVA-7B.

Paper or resources for more information:

Where to send questions or comments about the model: https://github.com/bytedance/MoMA/tree/main

Intended use

Primary intended uses: The primary use is research on personalized image generation tasks.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
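For researchers who want to experiment locally, the checkpoint can be fetched from the Hugging Face Hub. The repo id `KunpengSong/MoMA_llava_7b` is taken from this card; the local download directory is an assumption, and the actual inference entry points are documented in the GitHub repository, not sketched here.

```python
# Sketch: downloading the MoMA checkpoint from the Hugging Face Hub.
# Repo id "KunpengSong/MoMA_llava_7b" comes from this card; the
# local_dir default is a hypothetical choice -- adjust as needed.
from huggingface_hub import snapshot_download


def fetch_moma(local_dir: str = "./MoMA_llava_7b") -> str:
    """Download the MoMA weights and return the local path."""
    return snapshot_download(
        repo_id="KunpengSong/MoMA_llava_7b",
        local_dir=local_dir,
    )


if __name__ == "__main__":
    print(fetch_moma())
```

After downloading, follow the setup instructions in the GitHub repository linked above to run personalized generation.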
