How was LFM2 converted to ONNX? Is custom configuration needed for fine-tuned models?

#14
by AmanPriyanshu - opened

Hi LiquidAI team,

I'm not sure whether this is a better place to open this discussion than the ONNX-Community repo: https://huggingface.co/onnx-community/LFM2-1.2B-ONNX/discussions/1

I'm trying to convert a fine-tuned LFM2-1.2B model to ONNX, but I'm running into issues with the hybrid architecture (conv + attention layers).

My questions:

  1. How did you successfully convert the base LFM2 models to ONNX?
  2. Can you share the custom ONNX configuration you used?
  3. Is it possible to convert fine-tuned LFM2 models using the same approach?
  4. Is there a notebook I can follow along with?

Use case: I have a fine-tuned LFM2-1.2B model that I need to deploy in ONNX format for edge inference using transformers.js.
Any guidance would be much appreciated! Thanks!

This discussion was continued and resolved in the ONNX-Community thread: https://huggingface.co/onnx-community/LFM2-1.2B-ONNX/discussions/1

AmanPriyanshu changed discussion status to closed
