
Llama3-ChatQA-1.5-8B-GGUF

This is a quantized version of nvidia/Llama3-ChatQA-1.5-8B created using llama.cpp.
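Below is a minimal sketch of loading one of the quantized GGUF files with llama-cpp-python. The file name (a 4-bit quant), local path, and generation settings are assumptions for illustration; substitute whichever quantization level you downloaded.

```python
# Minimal sketch: running a quantized GGUF file with llama-cpp-python.
# The file name and settings below are assumptions, not part of the release.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama3-ChatQA-1.5-8B.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=8192,  # context window; adjust to your hardware
)

# A prompt assembled following the Prompt Format section below.
prompt = (
    "System: This is a chat between a user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions "
    "based on the context. The assistant should also indicate when the answer cannot "
    "be found in the context.\n\n"
    "User: What does ChatQA-1.5 excel at?\n\n"
    "Assistant:"
)

output = llm(
    prompt,
    max_tokens=256,
    stop=["User:"],  # stop before the model begins a new user turn
)
print(output["choices"][0]["text"])
```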

Model Details

We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed with an improved training recipe from ChatQA (1.0) and is built on top of the Llama-3 base model. Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capabilities. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained with Megatron-LM; we converted the checkpoints to Hugging Face format.

Other Resources

Llama3-ChatQA-1.5-70B · Evaluation Data · Training Data · Retriever · Paper

Benchmark Results

Results in ChatRAG Bench are as follows:

| | ChatQA-1.0-7B | Command-R-Plus | Llama-3-instruct-70b | GPT-4-0613 | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
|---|---|---|---|---|---|---|---|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 38.9 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 51.94 | 49.6 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 69.14 | 73.28 | 83.82 |
| TopiOCQA | 45.45 | 53.77 | 49.72 | 45.09 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 31.9 | 30.1 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |

Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 used some samples from the HybriDial training dataset, so to ensure a fair comparison we also report average scores excluding HybriDial. The data and evaluation scripts for ChatRAG Bench can be found here.

Prompt Format

We highly recommend that you use the prompt format we provide, as follows:

when context is available

System: {System}

{Context}

User: {Question}

Assistant: {Response}

User: {Question}

Assistant:

when context is not available

System: {System}

User: {Question}

Assistant: {Response}

User: {Question}

Assistant:

The content of the system's turn (i.e., {System}) for both scenarios is as follows:

This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context.

Note that our ChatQA-1.5 models are optimized for answering questions with context, e.g., over documents or retrieved passages.
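As a convenience, here is one way to assemble the prompt layout above in Python. The helper name get_formatted_input and the sample question/context are illustrative assumptions, not part of the official release.

```python
# Sketch of a helper that renders a conversation into the prompt layout above.
# The function name and the sample inputs are illustrative assumptions.

SYSTEM = (
    "This is a chat between a user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions based on the context. The assistant should also indicate when "
    "the answer cannot be found in the context."
)

def get_formatted_input(messages, context=""):
    """Render [{'role': ..., 'content': ...}] turns into the ChatQA prompt format."""
    turns = []
    for msg in messages:
        prefix = "User: " if msg["role"] == "user" else "Assistant: "
        turns.append(prefix + msg["content"])
    conversation = "\n\n".join(turns) + "\n\nAssistant:"
    if context:
        # "when context is available" variant: context follows the system turn
        return "System: " + SYSTEM + "\n\n" + context + "\n\n" + conversation
    # "when context is not available" variant
    return "System: " + SYSTEM + "\n\n" + conversation

messages = [{"role": "user", "content": "What revenue figure does the report mention?"}]
context = "Quarterly report excerpt goes here ..."
prompt = get_formatted_input(messages, context)
print(prompt)
```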

License

The use of this model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.
