How to output in a structured format with a Pydantic model from a Transformers model?
#10 opened about 12 hours ago by Vishva007
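A minimal sketch of one common approach to this question, not taken from the thread: prompt the model with the JSON schema emitted by Pydantic's `model_json_schema()` and validate the reply with `model_validate_json()`. The model name, the `Person` schema, and the prompt wording are illustrative assumptions; a constrained-decoding library (e.g. outlines) would be needed to actually guarantee schema-valid output.

```python
# Sketch: plain prompting + Pydantic validation (no constrained decoding).
# "Qwen/Qwen2.5-7B-Instruct", the Person schema, and the prompts are assumptions.
from pydantic import BaseModel
from transformers import pipeline


class Person(BaseModel):  # hypothetical target schema
    name: str
    age: int


generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

messages = [
    {"role": "system",
     "content": "Answer with JSON only, matching this schema: "
                + str(Person.model_json_schema())},
    {"role": "user", "content": "Alice is 30 years old."},
]
out = generator(messages, max_new_tokens=128)
reply = out[0]["generated_text"][-1]["content"]  # assistant turn of the chat output

person = Person.model_validate_json(reply)  # raises ValidationError on bad output
print(person)
```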
Merging "Qwen2.5-7B-Instruct" text adapter into "Qwen2.5-VL-7B-Instruct" model
#9 opened about 13 hours ago
by
rameshch
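One possible way to attempt this, sketched under assumptions: the adapter path and output directory below are placeholders, and whether a LoRA adapter trained on the text-only model maps cleanly onto the VL model's language tower is exactly what the thread is asking, not something this snippet settles.

```python
# Sketch: attach a LoRA adapter to the VL model and fold it into the weights.
# Paths are placeholders; adapter/key compatibility is assumed, not verified.
from peft import PeftModel
from transformers import Qwen2_5_VLForConditionalGeneration

base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, "path/to/text-adapter")  # placeholder path
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("qwen2.5-vl-7b-merged")  # placeholder output dir
```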
Video Inference - TypeError: process_vision_info() got an unexpected keyword argument 'return_video_kwargs'
#8 opened 1 day ago by rameshch
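For context on the error above: this TypeError typically means the installed qwen-vl-utils predates the `return_video_kwargs` parameter, so upgrading the package (`pip install -U qwen-vl-utils`) is usually enough. The shim below is an illustrative workaround under that assumption, not an official fix.

```python
# Sketch: tolerate both old and new qwen-vl-utils signatures.
# Assumption: newer releases return (images, videos, video_kwargs) when
# return_video_kwargs=True; older releases accept only (messages).
from qwen_vl_utils import process_vision_info


def vision_info_compat(messages):
    try:
        images, videos, video_kwargs = process_vision_info(
            messages, return_video_kwargs=True
        )
    except TypeError:
        # Older qwen-vl-utils: no return_video_kwargs and no third return value.
        images, videos = process_vision_info(messages)
        video_kwargs = {}
    return images, videos, video_kwargs
```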
Base models?
#7 opened 2 days ago by dbasu
Ollama support?
#6 opened 2 days ago by YarvixPA
Some scores for MiniCPM-o 2.6 are not quite correct
#5 opened 2 days ago by yuzaa
Provided code snippet not working?
#4 opened 3 days ago by alexpong
Why are the 14B-32B models ignored again?
#3 opened 3 days ago by win10
Exception: Could not find the transformer layer class to wrap in the model.
#2 opened 3 days ago by atishay-scribe
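For reference, the exception above is usually raised by FSDP auto-wrapping when it cannot locate the decoder-layer class by name, and a common remedy is to name that class explicitly. The sketch below assumes the Hugging Face Trainer is in use and that the layer class is `Qwen2_5_VLDecoderLayer`; check the installed transformers source for the exact name.

```python
# Sketch: tell FSDP which transformer layer class to auto-wrap.
# "Qwen2_5_VLDecoderLayer" is an assumption; verify it in modeling_qwen2_5_vl.py.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard auto_wrap",
    fsdp_config={"transformer_layer_cls_to_wrap": ["Qwen2_5_VLDecoderLayer"]},
)
```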
remove base_model
#1 opened 3 days ago by victor