Trouble with Data Extraction using Custom Schema
Hello Liquid AI team,
I've tried both this model and LFM2-1.2B-Extract for extracting structured data from text.
However, I find that these models are very prone to errors and hallucinations when I specify a custom schema. For example:
system:
```
Identify and extract information matching the following schema. Return data as a JSON object. Missing data should be omitted.

Schema:
- discovery: "what was being discovered/invented"
- person: "the person who made the discovery"
- where: "the location where the discovery was made"
- when: "the time when the discovery was made"
- doi: "doi of the source"
```
user input:
```
The lightning rod was invented by Benjamin Franklin in the early 1750s.
```
model response:
```json
{
  "discovery": "lightning rod",
  "person": "Benjamin Franklin",
  "where": "early 1750s",
  "when": "early 1750s",
  "doi": "10.1007/978-3-319-25588-9"
}
```
Clearly, "early 1750s" is not a location, and the model hallucinated a DOI when the source text contained none.
If I remove the custom schema from the system prompt, the model is correct far more often. However, this produces inconsistent JSON schemas that may include missing or extraneous fields, which are hard to ingest (see the validation sketch below).
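For context, this is roughly how I ingest the output, so any schema drift breaks the pipeline. A minimal sketch with pydantic; the field names mirror my schema above, everything else is illustrative:
```python
import json
from typing import Optional
from pydantic import BaseModel, ValidationError

# Fixed target schema: extra keys are rejected, missing keys are allowed
# (the prompt says missing data should be omitted).
class Discovery(BaseModel, extra="forbid"):
    discovery: Optional[str] = None
    person: Optional[str] = None
    where: Optional[str] = None
    when: Optional[str] = None
    doi: Optional[str] = None

def ingest(raw: str) -> Optional[Discovery]:
    try:
        return Discovery(**json.loads(raw))
    except (json.JSONDecodeError, ValidationError):
        return None  # malformed JSON or schema drift

# Conforming output parses; extraneous keys are rejected:
print(ingest('{"discovery": "lightning rod", "person": "Benjamin Franklin"}'))
print(ingest('{"discovery": "lightning rod", "source_text": "..."}'))  # -> None
```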
Temperature is set to 0; my generation setup is sketched below. Am I doing something wrong? I would love to use your very speedy and lightweight models otherwise, but currently the error rates make them a pass.
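For reference, this is roughly how I'm calling the model. A minimal sketch with transformers; the prompt is the one from the example above, and the decoding kwargs are just an approximation of my setup (some parameters omitted here):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B-Extract"  # I see the same behavior with both models
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

system_prompt = (
    "Identify and extract information matching the following schema. "
    "Return data as a JSON object. Missing data should be omitted.\n"
    "Schema:\n"
    '- discovery: "what was being discovered/invented"\n'
    '- person: "the person who made the discovery"\n'
    '- where: "the location where the discovery was made"\n'
    '- when: "the time when the discovery was made"\n'
    '- doi: "doi of the source"'
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "The lightning rod was invented by "
                                "Benjamin Franklin in the early 1750s."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
# temperature 0 -> greedy decoding; other generation kwargs omitted here
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```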
Hey @Purplys, sorry for the late reply. I tried reproducing what you described, but I get pretty good results with your example (see this chat with the playground). Can you double-check your generation parameters?
Hi, thanks for your response!
I double-checked my generation parameters, and it turns out I had repetition_penalty set too high. Lowering it to your recommended settings largely fixed the issues; roughly what I use now is sketched below.
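For anyone else who hits this, my working settings now look roughly like the following. The exact values here are illustrative; the model card lists the recommended ones:
```python
from transformers import GenerationConfig

# Illustrative values only; see the model card for the recommended settings.
gen_config = GenerationConfig(
    do_sample=False,          # temperature 0 -> greedy decoding
    max_new_tokens=256,
    repetition_penalty=1.05,  # my earlier runs used a much higher value
)
# Reusing model / input_ids from the snippet in my first post:
# output = model.generate(input_ids, generation_config=gen_config)
```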
By the way, if I may ask, do you have any plans to release extraction finetunes of LFM2.5, along the lines of the Liquid Nano series for LFM2?
Great! This is something we're exploring at the moment, so that's good feedback. Thanks!