---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- text-classification
pretty_name: TL (Test vs Learn) chatbot prompts
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 10111
    num_examples: 87
  - name: test
    num_bytes: 8626
    num_examples: 94
  download_size: 15605
  dataset_size: 18737
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
tags:
- llms
- nlp
- chatbots
- prompts
---
This dataset contains manually labeled examples used for training and testing [reddgr/tl-test-learn-prompt-classifier](https://huggingface.co/reddgr/tl-test-learn-prompt-classifier), a fine-tuned DistilBERT model that classifies chatbot prompts as either 'test' or 'learn.'
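For reference, here is a minimal sketch of loading the dataset with the `datasets` library; the Hub repo id used below is a hypothetical placeholder for this dataset's actual identifier:

```python
from datasets import load_dataset

# Hypothetical repo id for this dataset; replace with the actual Hub id.
ds = load_dataset("reddgr/tl-test-learn-prompts")

print(ds)              # splits: train (87 examples), test (94 examples)
print(ds["train"][0])  # {'text': '...', 'label': 0 or 1}
```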
Prompts labeled as 'test' (1) are those where it can be inferred that the user is:
- Presenting a problem that requires complex reasoning or arithmetic logic to resolve.
- Intentionally 'challenging' the conversational tool with a complicated question the user might know the answer to.
- Applying prompt engineering techniques such as "chain of thought" or role play.
- Presenting a highly subjective question with the purpose of testing the tool rather than learning from it or obtaining specific, previously unknown information.
Prompts labeled as 'learn' (0) are those containing straightforward questions or requests where it can be inferred that the user expects to learn something or obtain valuable, practical information from the interaction.
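As an illustration, here is a minimal sketch of querying the fine-tuned classifier with one prompt of each kind; the example prompts are made up, and the exact label names returned (e.g. 'LABEL_0'/'LABEL_1') depend on the model configuration and are an assumption here:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="reddgr/tl-test-learn-prompt-classifier",
)

# 'learn' (0): a straightforward request for practical information.
print(classifier("How do I renew my passport online?"))
# 'test' (1): a reasoning challenge the user likely knows the answer to.
print(classifier("If a train travels at 60 mph, how long does it take to cover 180 miles?"))
```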
An alternative naming convention for the labels is 'problem' (test) vs 'instruction' (learn). The earliest versions of the reddgr/tl-test-learn-prompt-classifier model used a zero-shot classification pipeline with those two specific terms as candidate labels: instruction (0) vs problem (1).
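For context, here is a sketch of that zero-shot approach using the standard `transformers` zero-shot-classification pipeline; the NLI backbone chosen below is an assumption, not necessarily the model originally used:

```python
from transformers import pipeline

# Assumed NLI backbone; the original model choice is not documented here.
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

out = zero_shot(
    "Solve this riddle: what has keys but can't open locks?",
    candidate_labels=["instruction", "problem"],
)
print(out["labels"][0])  # expected: 'problem' (i.e. label 1, 'test')
```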
Important note about accuracy metrics: coding questions (prompts involving programming-language syntax) are a category of their own and are typically difficult to classify with this dataset.
This dataset and the model are part of a project aimed at identifying metrics to quantitatively measure the conversational quality of text generated by large language models (LLMs) and, by extension, any other type of text extracted from a conversational context (customer service chats, social media posts...).
Relevant Jupyter notebooks and Python scripts that use this dataset and related datasets and models can be found in the following GitHub repository:
[reddgr/chatbot-response-scoring-scbn-rqtl](https://github.com/reddgr/chatbot-response-scoring-scbn-rqtl)
## Labels:
- **0**: Learn (instruction)
- **1**: Test (problem) |
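A trivial mapping between the integer labels and their names, as used throughout this card:

```python
# Integer-to-name mapping for the labels described above.
id2label = {0: "learn", 1: "test"}
label2id = {name: i for i, name in id2label.items()}
```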