Liquid AI

LFM2-1.2B-RAG

Based on LFM2-1.2B, LFM2-1.2B-RAG is specialized in answering questions based on provided contextual documents, for use in RAG (Retrieval-Augmented Generation) systems.

Use cases:

  • A chatbot for answering questions about a particular product's documentation.
  • Customer support with an internal knowledge base to provide grounded answers.
  • An academic research assistant for multi-turn conversations about research papers and course materials.

You can find more information about other task-specific models in this blog post.

πŸ“„ Model details

Generation parameters: We recommend greedy decoding (temperature=0).

System prompt: The system prompt is optional. You can force the output's language, for example, using "Always respond in English, regardless of the user's input language." By default, the output's language follows the user prompt's language.

Supported languages: English, Arabic, Chinese, French, German, Japanese, Korean, Portuguese, and Spanish.


Training approach: We fine-tuned LFM2-1.2B-RAG on a dataset of over one million samples, combining multi-turn interactions and multi-document examples drawn from a mix of curated open-source documents and synthetically generated ones.

Chat template: LFM2 uses a ChatML-like chat template as follows:

<|startoftext|><|im_start|>user
Use the following context to answer questions:
Beach soccer differs significantly from its grass-rooted counterpart. [...]<|im_end|>
<|im_start|>assistant
Each team in a beach soccer match consists of five players, including a goalkeeper.<|im_end|>

You can apply it automatically using the dedicated .apply_chat_template() method from Hugging Face transformers.
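For illustration, here is a minimal, dependency-free sketch of what the ChatML-like template above renders to. The function name and fallback logic are our own; in practice you should rely on tokenizer.apply_chat_template() rather than formatting strings by hand.

```python
def render_chat(messages, add_generation_prompt=True):
    """Render messages in the ChatML-like format shown above.

    Illustrative only: mirrors the tags in the example template
    (<|startoftext|>, <|im_start|>, <|im_end|>); not Liquid AI's
    reference implementation.
    """
    parts = ["<|startoftext|>"]
    for message in messages:
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)
```

Passing a single user message produces a prompt that starts with `<|startoftext|><|im_start|>user` and ends with an open `<|im_start|>assistant` turn, matching the example template above.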

⚠️ The model supports both single-turn and multi-turn conversations.

RAG systems allow LLM responses to incorporate new, up-to-date, and potentially proprietary information that was not present in the training data. When a user asks a question, the retrieval component locates related documents in a knowledge base, and the generator model then answers the question based on facts from those contextual documents.

πŸƒ How to run

πŸ“¬ Contact

If you are interested in custom solutions with edge deployment, please contact our sales team.

Model size: 1.17B parameters (BF16, safetensors)