Update README.md

README.md CHANGED
@@ -84,19 +84,20 @@ The current value of Key1 is Value_N.
## Results:

LLMs **cannot reliably retrieve** Value_N. The distribution of answers spans value_1 to value_N, and **as N increases**, the **answers skew** increasingly toward **value_1**.
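Because every value in the stream is distinct and numbered, the skew toward value_1 can be quantified as an update lag. A minimal sketch, assuming values are literal strings of the form `Value_<i>` as in the template above (the helper name is illustrative, not part of the dataset's tooling):

```python
def value_distance(predicted: str, gold: str) -> int:
    """How many updates the model's answer lags behind the true last value.

    Assumes values are strings like "Value_7". A positive result means
    the model answered a stale (earlier) value; 0 means it was correct.
    """
    p = int(predicted.split("_")[1])
    g = int(gold.split("_")[1])
    return g - p

# A model that answers the first value when the key was updated 10 times:
print(value_distance("Value_1", "Value_10"))  # lag of 9 updates
```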
## Note on dataset scale:

N ranges from 1 to 400. We put up to 46 such groups (key1..key46) together and then ask the model to retrieve only the last value of each key. All values are distinct, so the model's reply tells us exactly how far its answer is from the correct one.
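The construction described above can be sketched as follows. Only the update-sentence template and the scale parameters (up to 46 keys, N up to 400) come from the dataset description; the function name, the shuffling step, and the final question wording are illustrative assumptions:

```python
import random

def build_stream(num_keys=46, updates_per_key=400, seed=0):
    """Interleave repeated key updates with globally distinct values.

    Returns the prompt text and the ground truth: the last value
    written for each key in the (shuffled) update order.
    """
    rng = random.Random(seed)
    updates, counter = [], 0
    for k in range(1, num_keys + 1):
        for _ in range(updates_per_key):
            counter += 1
            updates.append((k, f"Value_{counter}"))  # distinct across all keys
    rng.shuffle(updates)  # mix the key groups together
    lines = [f"The current value of Key{k} is {v}." for k, v in updates]
    gold = {}
    for k, v in updates:  # last write per key is the ground truth
        gold[f"Key{k}"] = v
    prompt = "\n".join(lines) + "\nWhat is the current value of each key?"
    return prompt, gold

prompt, gold = build_stream(num_keys=2, updates_per_key=3)
```

Because values never repeat, any answer the model gives maps back to exactly one update step, which is what makes the lag measurable.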
## Why this is challenging for LLMs:

- Multiple co-references to the same key cause strong interference.

1. As the number of updates per key (N) increases, LLMs **confuse earlier values** with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
2. We intentionally restrict the task to retrieving only the last value: this keeps search difficulty low and shows that all LLMs fail to keep track due to **context interference**.