Fix code typo for Simple and Video Inference examples

#9 by kubistmi - opened

Hello 👋
When trying to reproduce the examples from the model card, I got an error when calling model.generate(...):
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (CUDABFloat16Type) should be the same

I fixed it by adding the dtype=torch.bfloat16 parameter when moving the processed inputs to CUDA (this is already done in the last example, "Multi-image Interleaved Inference", but not in the first two).
This keeps the input tensors in the same dtype as the model weights and avoids the mismatch; a sketch of the adjusted call is below.
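For reference, here is a minimal sketch of the adjusted input handling, assuming the model is loaded in bfloat16 via AutoModelForImageTextToText as in the card's examples; the checkpoint name, prompt, and image URL are placeholders:

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # placeholder checkpoint
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
    model_path, torch_dtype=torch.bfloat16
).to("cuda")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/image.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# The fix: cast the processed inputs to bfloat16 so they match the model weights.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)

generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Note that the dtype argument only casts floating-point tensors (e.g. pixel values), so integer input_ids are left untouched while the image features end up in the same bfloat16 dtype as the weights.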

Cheers,
Michal

mfarre changed pull request status to merged
