Splitting the ONNX vision and text models
#19
by hawkeye217 · opened
For v1, the provided ONNX models were split into two: one for the vision encoder and one for the text embeddings (here: https://huggingface.co/jinaai/jina-clip-v1/tree/main/onnx). Are there any plans to do the same for v2?
Hey @hawkeye217, thanks for reaching out! There is no plan to proceed with this at the moment. You can pass zero-sized tensors to disable either the image or the text encoder; see https://huggingface.co/jinaai/jina-clip-v2/discussions/12#67445e1ae8ad555f8d307322
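The zero-sized-tensor trick mentioned above can be sketched roughly as follows. Note this is an assumption-laden illustration, not code from the thread: the input names (`input_ids`, `pixel_values`) and shapes are guesses, so check the model's real inputs with `session.get_inputs()` before relying on them.

```python
import numpy as np

# Text-only inference: a normal token batch plus an EMPTY image batch.
# Shapes and input names below are assumptions for illustration.
input_ids = np.zeros((1, 77), dtype=np.int64)                # assumed token shape
pixel_values = np.zeros((0, 3, 512, 512), dtype=np.float32)  # zero-sized batch

# With onnxruntime, the call would look something like:
# import onnxruntime as ort
# session = ort.InferenceSession("jina-clip-v2.onnx")
# outputs = session.run(None, {
#     "input_ids": input_ids,
#     "pixel_values": pixel_values,
# })

# The image branch receives an empty batch and effectively does no work.
print(pixel_values.shape)
```

The same idea works in reverse for image-only inference: feed a zero-sized `input_ids` batch alongside real pixel data.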
Perfect, thanks!
hawkeye217 changed discussion status to closed