How was LFM2 converted to ONNX? Is custom configuration needed for fine-tuned models
#14 opened by AmanPriyanshu
Hi LiquidAI team,
Not sure if this is a better place to open this discussion than the ONNX Community repo: https://huggingface.co/onnx-community/LFM2-1.2B-ONNX/discussions/1
I'm trying to convert a fine-tuned LFM2-1.2B model to ONNX, but I'm running into issues with its hybrid architecture (conv + attention layers).
My questions:
- How did you successfully convert the base LFM2 models to ONNX?
- Can you share the custom ONNX configuration you used?
- Is it possible to convert fine-tuned LFM2 models using the same approach?
- Is there a notebook I can follow along with?
Use case: I have a fine-tuned LFM2-1.2B model that I need to deploy in ONNX format for edge inference with transformers.js.
Any guidance would be much appreciated! Thanks!
This discussion was continued and resolved in the ONNX Community thread: https://huggingface.co/onnx-community/LFM2-1.2B-ONNX/discussions/1
AmanPriyanshu changed discussion status to closed