How was LFM2 converted to ONNX? Is custom configuration needed for fine-tuned models?

#1
by AmanPriyanshu - opened

Hi ONNX community team,

I'm trying to convert a fine-tuned LFM2-1.2B model to ONNX, but I'm running into issues with the hybrid architecture (conv + attention layers).

My questions:

  1. How did you successfully convert the base LFM2 models to ONNX?
  2. Can you share the custom ONNX configuration you used?
  3. Is it possible to convert fine-tuned LFM2 models using the same approach?
  4. Is there a notebook I can follow along with?

Use case: I have a fine-tuned LFM2-1.2B model that I need to deploy in ONNX format for edge inference using transformers.js.

Any guidance would be much appreciated! Thanks!

ONNX Community org

Hi there πŸ‘‹ I wrote a custom conversion script for it, which I aim to release soon. In the meantime, if you have the model saved on the HF hub, I'm more than happy to open a PR for it.

@Xenova Thanks so much for the offer! That's really kind of you. Big fan of transformers.js, it's been incredible for browser ML.

While I really appreciate the offer to open a PR, I think I'll wait for the official script release, if that's cool. I'd love to see how you approached it, and I'm sure it'll help others too.

Really appreciate you taking the time to help! Looking forward to the release πŸ™

AmanPriyanshu changed discussion status to closed
