# Improve knights-and-knaves dataset card (#2)

Opened by nielsr (HF staff)

Files changed (1): README.md (+16 -14)
README.md CHANGED
````diff
@@ -1,9 +1,12 @@
 ---
+language:
+- en
 license: cc-by-nc-sa-4.0
+size_categories:
+- 1K<n<10K
 task_categories:
 - question-answering
-language:
-- en
+pretty_name: Knights and Knaves Logical Reasoning Benchmark
 configs:
 - config_name: train
   data_files:
@@ -54,16 +57,14 @@ configs:
 tags:
 - logical
 - reasoning
-pretty_name: K
-size_categories:
-- 1K<n<10K
+- knights-and-knaves
+- memorization
 ---
 
-
-
-# 📘 knights-and-knaves Dataset [[Project Page]](https://memkklogic.github.io/)
-
-The **knights-and-knaves dataset** serves as a logical reasoning benchmark to evaluate the reasoning capabilities of LLMs.
+# Knights and Knaves Logical Reasoning Benchmark [[Project Page]](https://memkklogic.github.io/)
+
+This dataset provides a dynamically generated benchmark for evaluating the logical reasoning capabilities of Large Language Models (LLMs), with a specific focus on quantifying memorization effects. The benchmark is based on Knights and Knaves puzzles of varying complexity, allowing for a nuanced investigation of how LLMs balance memorization and genuine reasoning. A key feature is the ability to assess generalization by introducing perturbations to the training puzzles.
+
 
 **🚀🚀 Check out the [perturbed knights-and-knaves dataset](https://huggingface.co/datasets/K-and-K/perturbed-knights-and-knaves) to evaluate the memorization of LLMs in reasoning.**
 
@@ -75,16 +76,17 @@ To load the dataset:
 from datasets import load_dataset
 data_subject = load_dataset('K-and-K/knights-and-knaves','test',split="2ppl")
 ```
-* Available subset: `test`, `train`.
-* Available split: `2ppl`,`3ppl`,`4ppl`,`5ppl`,`6ppl`,`7ppl`,`8ppl`.
+* Available subsets: `test`, `train`.
+* Available splits: `2ppl`,`3ppl`,`4ppl`,`5ppl`,`6ppl`,`7ppl`,`8ppl` (the number of people involved in the puzzle).
 
-## 🛠️ Codebase
+## 🛠️ Codebase and Paper
 
-To evaluate LLMs on our datasets, visit our [GitHub repository](https://github.com/AlphaPav/mem-kk-logic/).
+For detailed evaluation methodologies, fine-tuning procedures, and in-depth analysis, refer to our [GitHub repository](https://github.com/AlphaPav/mem-kk-logic/) and the accompanying paper:
+
+[On Memorization of Large Language Models in Logical Reasoning](https://hf.co/papers/2410.23123)
 
-## ⭐ Citing our Work
 
-If you find our codebase and datasets beneficial, kindly cite our work:
+## ⭐ Citing our Work
 
 ```bibtex
 @article{xie2024memorization,
````
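
The updated card describes Knights and Knaves puzzles without spelling out the rules: knights always tell the truth, knaves always lie, and each puzzle asks which role each person holds. As a minimal sketch of what solving one involves, a brute-force check over all truth assignments suffices for small puzzles. The sample puzzle and the `solve` helper below are illustrative assumptions, not drawn from the dataset or its codebase.

```python
from itertools import product

# Brute-force solver for a classic Knights and Knaves puzzle:
# knights always tell the truth, knaves always lie.
# Hypothetical sample puzzle (not from the dataset):
#   A says "B is a knave."  B says "A and I are the same kind."

def solve(statements, people):
    """Return all assignments (True = knight) consistent with every statement."""
    solutions = []
    for values in product([True, False], repeat=len(people)):
        world = dict(zip(people, values))
        # A speaker's statement must be true iff the speaker is a knight.
        if all(world[speaker] == claim(world) for speaker, claim in statements):
            solutions.append(world)
    return solutions

statements = [
    ("A", lambda w: not w["B"]),        # A: "B is a knave."
    ("B", lambda w: w["A"] == w["B"]),  # B: "A and I are the same kind."
]

print(solve(statements, ["A", "B"]))  # -> [{'A': True, 'B': False}]
```

For n people the search space is 2^n assignments (at most 256 for `8ppl`), which is why the `2ppl` through `8ppl` splits form a natural difficulty ladder for LLM evaluation.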