πŸ–₯️ API / Platform   |   πŸ“‘ Blog   |   πŸ—£οΈ Discord   |   πŸ”— GitHub

NuExtract 2.0 8B GGUF by NuMind πŸ”₯

NuExtract 2.0 is a family of models trained specifically for structured information extraction tasks. It supports multimodal inputs (text and images) and is multilingual.

We provide several versions of different sizes, all based on pre-trained models from the QwenVL family.

| Model Size | Model Name             | Base Model             | License               | Huggingface Link           |
|------------|------------------------|------------------------|-----------------------|----------------------------|
| 2B         | NuExtract-2.0-2B       | Qwen2-VL-2B-Instruct   | MIT                   | πŸ€— NuExtract-2.0-2B        |
| 2B         | NuExtract-2.0-2B-GGUF  | Qwen2-VL-2B-Instruct   | MIT                   | πŸ€— NuExtract-2.0-2B-GGUF   |
| 4B         | NuExtract-2.0-4B       | Qwen2.5-VL-3B-Instruct | Qwen Research License | πŸ€— NuExtract-2.0-4B        |
| 4B         | NuExtract-2.0-4B-GGUF  | Qwen2.5-VL-3B-Instruct | Qwen Research License | πŸ€— NuExtract-2.0-4B-GGUF   |
| 8B         | NuExtract-2.0-8B       | Qwen2.5-VL-7B-Instruct | MIT                   | πŸ€— NuExtract-2.0-8B        |
| 8B         | NuExtract-2.0-8B-GGUF  | Qwen2.5-VL-7B-Instruct | MIT                   | πŸ€— NuExtract-2.0-8B-GGUF   |

❗️Note: NuExtract-2.0-2B is based on Qwen2-VL rather than Qwen2.5-VL because the smallest Qwen2.5-VL model (3B) has a more restrictive, non-commercial license. We therefore include NuExtract-2.0-2B as a small model option that can be used commercially.

Benchmark

Performance on a collection of ~1,000 diverse extraction examples containing both text and image inputs.

Overview

To use the model, provide an input text/image and a JSON template describing the information you need to extract. The template should be a JSON object, specifying field names and their expected type.

Supported types include:

  • verbatim-string - instructs the model to extract text that is present verbatim in the input.
  • string - a generic string field that can incorporate paraphrasing/abstraction.
  • integer - a whole number.
  • number - a whole or decimal number.
  • date-time - ISO formatted date.
  • Array of any of the above types (e.g. ["string"])
  • enum - a choice from set of possible answers (represented in template as an array of options, e.g. ["yes", "no", "maybe"]).
  • multi-label - an enum that can have multiple possible answers (represented in template as a double-wrapped array, e.g. [["A", "B", "C"]]).

If the model does not identify relevant information for a field, it will return null or [] (for arrays and multi-labels).

The following is an example template:

{
  "first_name": "verbatim-string",
  "last_name": "verbatim-string",
  "description": "string",
  "age": "integer",
  "gpa": "number",
  "birth_date": "date-time",
  "nationality": ["France", "England", "Japan", "USA", "China"],
  "languages_spoken": [["English", "French", "Japanese", "Mandarin", "Spanish"]]
}

An example output:

{
  "first_name": "Susan",
  "last_name": "Smith",
  "description": "A student studying computer science.",
  "age": 20,
  "gpa": 3.7,
  "birth_date": "2005-03-01",
  "nationality": "England",
  "languages_spoken": ["English", "French"]
}
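
As noted above, fields the model cannot fill come back as null (or [] for array and multi-label fields), so downstream code can flag missing information uniformly. A minimal Python sketch (the literal JSON string here is illustrative only):

import json

extraction = json.loads('{"first_name": "Susan", "last_name": null, "languages_spoken": []}')
# Collect every field the model left empty
missing = [field for field, value in extraction.items() if value is None or value == []]
print(missing)  # ['last_name', 'languages_spoken']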

⚠️ We recommend using NuExtract with a temperature at or very close to 0. Some inference frameworks, such as Ollama, use a default of 0.7, which is not well suited to many extraction tasks.
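
If you do run NuExtract through Ollama, the default temperature can be overridden when importing the model (a sketch; the base model name nuextract is a placeholder for whatever name you gave the imported GGUF):

cat > Modelfile <<'EOF'
FROM nuextract
PARAMETER temperature 0
EOF
ollama create nuextract-t0 -f Modelfile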

Using NuExtract with llama.cpp

Download the model

mkdir models
hf download numind/NuExtract-2.0-8B-GGUF --local-dir ./models

Start the llama.cpp server

docker run --gpus all -it -p 8000:8080 -v ./models:/models --entrypoint /app/llama-server ghcr.io/ggml-org/llama.cpp:full-cuda -m /models/NuExtract-2.0-8B-Q8_0.gguf --mmproj /models/mmproj-BF16.gguf --host 0.0.0.0
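
Once the model has finished loading, the server can be sanity-checked before sending any requests (llama-server exposes a /health endpoint):

curl http://localhost:8000/health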

Text Extraction

The docker run command above maps port 8080 inside the llama.cpp container to port 8000 on the host, so the OpenAI-compatible client below points at http://localhost:8000.

import openai
import json

client = openai.OpenAI(
    api_key="EMPTY",
    base_url="http://localhost:8000",
)

llama.cpp does not support vLLM's chat_template_kwargs, so the NuExtract template has to be included in the prompt manually:

flight_text = """Date: Tuesday March 25th 2025
User info: Male, 32 yo

Book me a flight this Saturday morning to go to Marrakesh and come back on April 5th. I want it to be business class. Air France if possible."""
flight_template = """{
    "Destination": "verbatim-string",
    "Departure date range": {
        "beginning": "date-time",
        "end": "date-time"
    },
    "Return date range": {
        "beginning": "date-time",
        "end": "date-time"
    },
    "Requested Class": [
        "1st",
        "business",
        "economy"
    ],
    "Preferred airlines": [
        "string"
    ]
}"""

response = client.chat.completions.create(
    model="NuExtract",
    temperature=0.0,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text", 
                    "text": f"# Template:\n{json.dumps(json.loads(flight_template), indent=4)}\n{flight_text}",
                },
            ],
        },
    ],
)
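
The filled template is returned as the assistant message content. Assuming the model produced valid JSON, it can be parsed directly:

result = json.loads(response.choices[0].message.content)
print(result["Destination"])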

Image Extraction

identity_template = """{
    "Last name": "verbatim-string",
    "First names": [
        "verbatim-string"
    ],
    "Document number": "verbatim-string",
    "Date of birth": "date-time",
    "Gender": [
        "Male",
        "Female",
        "Other"
    ],
    "Expiration date": "date-time",
    "Country ISO code": "string"
}"""

response = client.chat.completions.create(
    model="NuExtract",
    temperature=0.0,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text", 
                    "text": f"# Template:\n{json.dumps(json.loads(identity_template), indent=4)}\n<image>",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Carte_identit%C3%A9_%C3%A9lectronique_fran%C3%A7aise_%282021%2C_recto%29.png/2880px-Carte_identit%C3%A9_%C3%A9lectronique_fran%C3%A7aise_%282021%2C_recto%29.png"
                    },
                },
            ],
        },
    ],
)
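
To extract from a local image instead of a remote URL, the file can be inlined as a base64 data URI (a sketch; id_card.png is a placeholder path, and it assumes the llama.cpp server accepts data URIs in the image_url field):

import base64

# Read the local image and encode it as base64
with open("id_card.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="NuExtract",
    temperature=0.0,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": f"# Template:\n{json.dumps(json.loads(identity_template), indent=4)}\n<image>",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        },
    ],
)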