# VinePPO: Accurate Credit Assignment in RL for LLM Mathematical Reasoning
**Anonymous Author(s)**
Affiliation
Address
email
**Abstract**
1 Large language models (LLMs) are increasingly required to solve complex reasoning
2 tasks, like mathematical problems, that involve multiple reasoning steps
3 before feedback is received. Effectively identifying and prioritizing key steps by
4 accurately assigning credit to these intermediate steps is essential for enhancing
5 model performance. Proximal Policy Optimization (PPO), a state-of-the-art
6 reinforcement learning algorithm for finetuning LLMs, addresses the credit assignment
7 problem by employing value networks to predict the expected cumulative rewards
8 of intermediate states. In this work, we identify significant limitations with this
9 value estimation method. To address this, we propose VinePPO that leverages
10 the flexibility of language environments to compute unbiased Monte Carlo-based
11 estimates of the intermediate values. VinePPO consistently outperforms standard
12 PPO, doing so more efficiently and with lower divergence from the reference model.
13 Our findings underscore the critical importance of accurate credit assignment in
14 LLM post-training and present a simple, yet effective solution.
15 **1** **Introduction**
16 Large language models (LLMs) are increasingly employed in tasks requiring complex reasoning,
17 such as solving mathematical problems (Trinh et al., 2024; OpenAI, 2024). In these settings, LLMs
18 often engage in extended reasoning chains and perform numerous actions. Prioritizing steps that
19 lead to correct solutions while downplaying the erroneous ones during finetuning is essential for
20 improving performance and reducing unnecessary updates that would occur if all steps were
21 treated equally. This is particularly important as most reasoning steps generated by a model often do
22 not impact its likelihood of solving the problem (Fig. 2).
23 This issue is known as the credit assignment problem in reinforcement learning (RL, Sutton and
24 Barto 1998). Proximal Policy Optimization (PPO) (Schulman et al., 2017; Ouyang et al., 2022),
25 the state-of-the-art algorithm for RL tuning of LLMs (Xu et al., 2024; Ivison et al., 2024; Shao
26 et al., 2024), is a variant of actor-critic methods that utilizes a value network (critic) to handle credit
27 assignment (Bai et al., 2022, 2023; Havrilla et al., 2024). The value network is a separate model
28 (the same size as and initialized from a pretrained checkpoint of the LLM) that learns to estimate
29 expected cumulative future reward (value) of intermediate actions during training. PPO then uses
30 the predicted values to measure the advantage of each action and update the model accordingly. For
31 example, in Fig. 2, an ideal value network would assign a low value to s0, where the model initially
32 struggles, and a higher value to s2 and beyond, where a critical action led to solving the problem.
33 Accurately predicting the rewards the model will eventually receive solely from a partial and incomplete
34 response requires the value network to grasp the space of correct solutions and predict the model's
35 future behavior, both of which are challenging. There are hints in the literature that standard PPO
36 implementations for LLM finetuning have inaccurate value estimations. Ahmadian et al. (2024) and
Submitted to 38th Conference on Neural Information Processing Systems (NeurIPS 2024). Do not distribute.
-----
[Figure 1 bar chart: Pass@1 accuracy (%) on MATH for RhoMath 1.1B (left panel; bars at 15.5, 17.3, 18.1, 23.0) and DeepSeekMath 7B (right panel; bars at 32.8, 34.9, 42.9, 46.0) across Initial SFT, RestEM, PPO, and VinePPO.]
Figure 1: VinePPO outperforms standard PPO and other baselines on the MATH dataset, while also
exhibiting scalability across different model sizes. The figure shows Pass@1 performance.
37 Luong et al. (2024) demonstrate that value networks often serve best as just a baseline in policy
38 gradient[1]. Shao et al. (2024) shows that the value network can be replaced by averaging rewards of a
39 group of responses to a given problem, without degradation in performance.
40 As estimation errors can significantly hamper model convergence and performance (Sutton et al.,
41 1999; Greensmith et al., 2001), it is crucial to ask: how accurately do value networks perform in
42 _practice during LLM finetuning? While recent studies (Hwang et al., 2024; Setlur et al., 2024) have_
43 begun to highlight the importance of identifying early reasoning errors and incorporating these as
44 training signals in “RL-free” approaches (Rafailov et al., 2023), how accurate credit assignment
45 _can fundamentally enhance RL tuning of LLMs remains an open question. To provide answers, we_
46 evaluate the standard PPO pipeline in mathematical reasoning tasks across various model sizes. Our
47 findings are striking: value networks consistently provide inaccurate estimates.
48 To address this, we propose VinePPO. Instead of relying on value networks, VinePPO computes
49 _unbiased estimates by resetting the environment to intermediate states and performing independent_
50 Monte Carlo (MC) rollouts to calculate the average return of individual steps. This approach takes
51 advantage of a special property of the language environment—the ability to easily reset to any
52 intermediate state of a trajectory (Schulman et al., 2015). Not only does it remove the need for large,
53 memory-intensive value networks, it also outperforms standard PPO and other baselines such as
54 RestEM (Singh et al., 2023) (Fig. 1). The effect of accurate credit assignment is so significant that
55 VinePPO achieves this with lower KL divergence (Fig. H.3) and less wall-clock time (Fig. 4).
56 In summary, we demonstrate that (1) Value networks struggle to estimate intermediate state values.
57 (2) The language environment enables VinePPO to provide unbiased value estimates via MC rollouts.
58 (3) VinePPO, with its accurate credit assignment, outperforms standard PPO. It also achieves PPO's
59 final accuracy in fewer iterations and less wall-clock time, while exhibiting lower KL divergence
60 from the base model. These findings highlight the importance of accurate credit assignment in RL
61 post-training and position VinePPO as an effective alternative to value networks.
62 **2** **Advantage Estimation with Monte Carlo**
63 We build on PPO (Schulman et al., 2017; Ouyang et al., 2022), for which we provide an extensive
64 background in Appendices C and J. VinePPO only modifies the way advantages are estimated. We
65 start by estimating the true value function V (st). Instead of relying on a value network, for any
66 intermediate state st, we sample K independent trajectories starting from st. The average return
67 across these trajectories serves as the value estimate:
$$\hat{V}_{\mathrm{MC}}(s_t) := \frac{1}{K} \sum_{k=1}^{K} R(\tau_k), \qquad \text{where } \tau_1, \ldots, \tau_K \sim \pi(\cdot \mid s_t). \tag{1}$$
68 where τk is an independent continuation sampled from the model, starting from st, and R(·) is
69 the return over the completed trajectory. This is an MC estimate of the value function V(st) =
70 E[R(τ) | s0 = st]. Once the values V̂MC(st) are computed, we compute the advantages with:
$$\hat{A}_{\mathrm{MC}}(s_t, a_t) := r(s_t, a_t) + \gamma \hat{V}_{\mathrm{MC}}(s_{t+1}) - \hat{V}_{\mathrm{MC}}(s_t), \tag{2}$$
1Setting the GAE (Schulman et al., 2016) parameter λ to 1.
-----
[Figure 2, right panel illustration: from state st, the value is estimated as V̂MC(st) = 1/K Σk R(τk) using K independent continuations τ′1, τ′2, . . ., τ′K.]

| Step | Text | p̂(correct\|s:t) |
|---|---|---|
| Prompt (s0) | Let a and b be nonzero real numbers such that (2 − 7i)(a + bi) is pure imaginary. Find a/b. | 0.4 |
| s1 | We can expand the left-hand side to get | 0.4 |
| s2 | (2 − 7i)(a + bi) = (2a + 7b) + (−7a + 2b)i. | **1.0** |
| s3 | This is pure imaginary if and only if the real part is 0, i.e. | 1.0 |
| s4 | 2a + 7b = 0. | 1.0 |
| s5 | Then a = −(7/2)b, so a/b = −7/2. | 1.0 |

Figure 2: (Left) A response generated by the model. The notation p̂(correct|s:t) represents the
estimated probability of successfully solving the problem at step t, based on nine model rollouts. In
this example, only step s2 is critical; after this, the model completes the solution correctly. (Right)
Illustration of estimating the value of a state within the trajectory.
71 where r(·) is the step-wise reward (in practice, equal to zero except at the final step). Note that for any
72 K ≥ 1, the policy gradient computed using the advantage estimator ÂMC is an unbiased estimate of
73 the gradient of the expected return.
74 In essence, VinePPO only alters advantage computation in the PPO pipeline, leaving the rest unchanged.
75 With this simple modification, we eliminate the need for a value network, significantly reducing the
76 memory footprint (up to 112GB for a 7B LLM) while providing unbiased estimates of advantages.
77 The parameter K offers a trade-off between computational cost (i.e., more MC samples per state)
78 and the variance of the estimator. To enhance the efficiency of ÂMC, we also group states within
79 a reasoning step and compute a single advantage, which is then assigned to all tokens in that step.
80 Since everything else in the PPO pipeline of VinePPO is unchanged, by comparing the two methods,
81 we can systematically evaluate the impact of accurate credit assignment in RL tuning of LLMs.
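To make this concrete, below is a minimal sketch of the per-step Monte Carlo value and advantage computation for a single response. The helpers `sample_continuations` and `final_reward`, as well as the pre-segmented reasoning steps, are hypothetical stand-ins for the actual pipeline components rather than the paper's implementation.

```python
# Sketch of VinePPO-style Monte Carlo advantage estimation for one response.
# `sample_continuations(prefix, k)` and `final_reward(full_text)` are hypothetical
# stand-ins for the policy's sampler and the sequence-level reward (e.g., an answer check).

def mc_value(prefix, k, sample_continuations, final_reward):
    """V_hat_MC(s_t): average return of k independent continuations from state `prefix`."""
    continuations = sample_continuations(prefix, k)                # tau_1, ..., tau_K ~ pi(.|s_t)
    return sum(final_reward(prefix + c) for c in continuations) / k

def step_advantages(prompt, steps, k, sample_continuations, final_reward, gamma=1.0):
    """A_hat_MC per reasoning step, following Eq. 2 with r_t = 0 except at the final step."""
    prefixes = [prompt]
    for step in steps:
        prefixes.append(prefixes[-1] + step)                       # s_{t+1} = s_t ; [a_t]

    values = [mc_value(p, k, sample_continuations, final_reward) for p in prefixes[:-1]]
    advantages = []
    for t in range(len(steps)):
        if t == len(steps) - 1:
            target = final_reward(prefixes[-1])                    # terminal step: r_{T-1}, V(s_T) = 0
        else:
            target = gamma * values[t + 1]                         # intermediate step: r_t = 0
        advantages.append(target - values[t])
    return advantages
```

In the paper's setup, each per-step advantage would then be broadcast to all tokens of that step before the usual PPO update.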
82 **3** **Experiments**
83 We use two strong base LLMs pretrained for mathematical reasoning: (1) DeepSeekMath 7B (Shao
84 et al., 2024) and (2) RhoMath 1.1B (Lin et al., 2024). Our focus is the MATH dataset (Hendrycks et al.,
85 2021), which contains competition-level problems. We compare three LLM reasoning finetuning
86 strategies (PPO, VinePPO, and RestEM) to the supervised finetuned (SFT) model baseline, from which
87 all methods are initialized. We tune PPO hyperparameters like KL penalty coefficient, batch size, and
88 GAE λ, applying best practices in PPO optimization. VinePPO uses the same hyperparameters as
89 PPO but modifies the advantage estimation A(st, at) to isolate the effect of accurate credit assignment.
90 We sample K = 9 trajectories in V̂MC. For RestEM, we closely follow the original setup while
91 ensuring consistency in training conditions for a fair comparison. We choose the best checkpoint
92 based on a held-out validation set for all experiments[2].
93 **4** **Results and Analysis**
94 **Task Performance** As shown in Fig. 1, VinePPO outperforms standard PPO and RestEM. The gap
95 between VinePPO and PPO is consistent throughout the training (Fig. F.1). RestEM lacks explicit
96 credit assignment and finetunes on full trajectories. Despite higher training accuracy, it underperforms
97 at test time, likely due to overfitting caused by training on disadvantageous intermediate steps. In addition,
98 Fig. 4 presents our ablation on K, showing that increasing K consistently improves accuracy.
99 **KL Divergence** The RL objective[3] aims to balance maximizing task performance while limiting
100 deviations from the reference policy π0, or original SFT, as measured by KL divergence. We track
2Refer to Appendix E for full details.
3The full definition is in Appendix C.
-----
[Figure 3 scatter panels: PPO @ Step 360, PPO @ Step 960, VinePPO @ Step 360, VinePPO @ Step 960; predicted value (y-axis) vs. ground truth value (x-axis), both on [0, 1].]
Figure 3: Distribution of predicted values for each state vs. ground truth (computed using 256 MC
samples) for DeepSeekMath 7B on MATH, highlighting the nature of errors in PPO’s value estimates.
[Figure 4 panels: (Left) MATH accuracy (%) under varying compute, with bars at 18.1, 19.9, 21.2, and 23.0 for PPO and VinePPO with K = 1, 3, 9; (Right) RhoMath 1.1B accuracy vs. wall-clock time (hours) for VinePPO and PPO, with VinePPO marked 3.7x faster.]
Figure 4: (Left) Impact of the number of sampled trajectories K when estimating V̂MC(st), evaluated
on RhoMath 1.1B models. We observe that increasing K improves task performance consistently.
(Right) Accuracy per wall-clock time for both methods. Although VinePPO spends more time on each
iteration, it reaches PPO's peak performance in fewer iterations and less wall-clock time.
101 the KL divergence KL[πθ ∥ π0] throughout training for both methods and plot task accuracy against
102 KL to assess this balance in Fig. H.3. The results show that VinePPO consistently achieves higher
103 accuracy for a given KL divergence.
104 **Computational efficiency** VinePPO and standard PPO need different kinds of resources. The
105 value network needs to be trained and, alongside its optimizer states, consumes more GPU memory. In
106 contrast, MC rollouts need fast inference, and as a result VinePPO is generally slower per iteration
107 compared to PPO. In our setup, RhoMath 1.1B and DeepSeekMath 7B are 5x and 2x slower per
108 iteration when using VinePPO. However, as shown in Fig. 4, the impact of accurate credit assignment
109 with VinePPO is substantial. VinePPO reaches the final accuracy of PPO in fewer iterations and less
110 time. Specifically, RhoMath 1.1B and DeepSeekMath 7B achieve PPO’s final test accuracy 3.7× and
111 2.3× faster in wall-clock time, and in 20× and 5× fewer gradient steps, respectively.[4]
112 **Value Prediction Accuracy** To analyze the accuracy of value prediction, we compute the ground
113 truth value of each state by taking 256 MC samples. We compare value network (from PPO)
114 predictions against VinePPO's. As shown in Fig. 3, VinePPO and PPO produce errors of very
115 different types. VinePPO's estimates are unbiased, with variance peaking for ground-truth values near 0.5 and dropping
116 to zero at 0 and 1. In contrast, the value network's estimates exhibit high bias. See Appendix I for full details.
117 **5** **Conclusion**
118 Credit assignment is a weak spot for current RL finetuning of LLMs. While value networks are
119 tasked and trained to estimate these values, they perform poorly. VinePPO simply replaces the value
120 networks with MC samples. We found that it reaches higher accuracy faster, supporting the significant
121 impact that accurate credit assignment has on RL finetuning of LLMs for reasoning. We hope our
122 work encourages researchers to look into the details of RL finetuning pipelines of LLMs and to
123 explore more computationally practical methods for accurate credit assignment.
4Note that this is despite the fact that all hyperparameter searches were tuned for PPO.
-----
124 **References**
125 Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin,
126 [Ahmet Üstün, and Sara Hooker. 2024. Back to Basics: Revisiting REINFORCE Style Optimization](https://doi.org/10.48550/ARXIV.2402.14740)
127 [for Learning from Human Feedback in LLMs. CoRR, abs/2402.14740.](https://doi.org/10.48550/ARXIV.2402.14740)
128 [Thomas Anthony, Zheng Tian, and David Barber. 2017. Thinking Fast and Slow with Deep Learning](http://arxiv.org/abs/1705.08439)
129 [and Tree Search. CoRR, abs/1705.08439.](http://arxiv.org/abs/1705.08439)
130 Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal
131 [Valko, and Rémi Munos. 2023. A General Theoretical Paradigm to Understand Learning from](https://doi.org/10.48550/ARXIV.2310.12036)
132 [Human Preferences. CoRR, abs/2310.12036.](https://doi.org/10.48550/ARXIV.2310.12036)
133 Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,
134 Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu,
135 Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan,
136 Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin
137 Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng
138 Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren
139 [Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen Technical Report. CoRR, abs/2309.16609.](https://doi.org/10.48550/ARXIV.2309.16609)
140 Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn
141 Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson
142 Kernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez,
143 Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario
144 Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, and Jared
145 [Kaplan. 2022. Training a Helpful and Harmless Assistant with Reinforcement Learning from](https://doi.org/10.48550/ARXIV.2204.05862)
146 [Human Feedback. CoRR, abs/2204.05862.](https://doi.org/10.48550/ARXIV.2204.05862)
147 Dan Biderman, Jose Javier Gonzalez Ortiz, Jacob Portes, Mansheej Paul, Philip Greengard, Connor
148 Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, and John P.
149 [Cunningham. 2024. LoRA Learns Less and Forgets Less. CoRR, abs/2405.09673.](https://doi.org/10.48550/ARXIV.2405.09673)
150 [Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. 2024. AlphaMath Almost Zero: process](https://doi.org/10.48550/ARXIV.2405.03553)
151 [Supervision without process. CoRR, abs/2405.03553.](https://doi.org/10.48550/ARXIV.2405.03553)
152 [Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. KTO:](https://doi.org/10.48550/ARXIV.2402.01306)
153 [Model Alignment as Prospect Theoretic Optimization. CoRR, abs/2402.01306.](https://doi.org/10.48550/ARXIV.2402.01306)
154 [Xidong Feng, Ziyu Wan, Muning Wen, Ying Wen, Weinan Zhang, and Jun Wang. 2023. Alphazero-](https://doi.org/10.48550/ARXIV.2309.17179)
155 [like Tree-search can Guide Large Language Model Decoding and Training. CoRR, abs/2309.17179.](https://doi.org/10.48550/ARXIV.2309.17179)
156 [Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. 2001. Variance Reduction Techniques for](https://proceedings.neurips.cc/paper/2001/hash/584b98aac2dddf59ee2cf19ca4ccb75e-Abstract.html)
157 [Gradient Estimates in Reinforcement Learning. In Advances in Neural Information Processing](https://proceedings.neurips.cc/paper/2001/hash/584b98aac2dddf59ee2cf19ca4ccb75e-Abstract.html)
158 _Systems 14 [Neural Information Processing Systems: Natural and Synthetic, NIPS 2001, December_
159 _3-8, 2001, Vancouver, British Columbia, Canada], pages 1507–1514. MIT Press._
160 [Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 2018.](http://arxiv.org/abs/1801.01290) [Soft Actor-Critic:](http://arxiv.org/abs/1801.01290)
161 [Off-policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. CoRR,](http://arxiv.org/abs/1801.01290)
162 abs/1801.01290.
163 Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu.
164 [2023. Reasoning with Language Model is Planning with World Model. CoRR, abs/2305.14992.](https://doi.org/10.48550/ARXIV.2305.14992)
165 Alex Havrilla, Yuqing Du, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu,
166 [Maksym Zhuravinskyi, Eric Hambro, Sainbayar Sukhbaatar, and Roberta Raileanu. 2024. Teaching](https://doi.org/10.48550/ARXIV.2403.04642)
167 [Large Language Models to Reason with Reinforcement Learning. CoRR, abs/2403.04642.](https://doi.org/10.48550/ARXIV.2403.04642)
168 Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
169 [and Jacob Steinhardt. 2021. Measuring Mathematical Problem Solving With the MATH Dataset.](http://arxiv.org/abs/2103.03874)
170 _CoRR, abs/2103.03874._
171 Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron C. Courville, Alessandro Sordoni, and Rishabh
172 [Agarwal. 2024. V-STaR: Training Verifiers for Self-taught Reasoners. CoRR, abs/2402.06457.](https://doi.org/10.48550/ARXIV.2402.06457)
-----
173 Shengyi Huang, Michael Noukhovitch, Arian Hosseini, Kashif Rasul, Weixun Wang, and Lewis
174 [Tunstall. 2024. The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR](https://doi.org/10.48550/ARXIV.2403.17031)
175 [Summarization. CoRR, abs/2403.17031.](https://doi.org/10.48550/ARXIV.2403.17031)
176 [Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, and Minjoon Seo. 2024. Self-](https://doi.org/10.48550/ARXIV.2404.10346)
177 [explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-](https://doi.org/10.48550/ARXIV.2404.10346)
178 [grained Rewards. CoRR, abs/2404.10346.](https://doi.org/10.48550/ARXIV.2404.10346)
179 Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A.
180 [Smith, Yejin Choi, and Hannaneh Hajishirzi. 2024. Unpacking DPO and PPO: Disentangling Best](https://doi.org/10.48550/ARXIV.2406.09279)
181 [Practices for Learning from Preference Feedback. CoRR, abs/2406.09279.](https://doi.org/10.48550/ARXIV.2406.09279)
182 Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
183 [Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient Memory Management for Large Language](https://doi.org/10.48550/ARXIV.2309.06180)
184 [Model Serving with PagedAttention. CoRR, abs/2309.06180.](https://doi.org/10.48550/ARXIV.2309.06180)
185 Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay V.
186 Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam
187 [Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving Quantitative Reasoning Problems with](https://doi.org/10.48550/ARXIV.2206.14858)
188 [Language Models. CoRR, abs/2206.14858.](https://doi.org/10.48550/ARXIV.2206.14858)
189 Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
190 [Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023a. Let’s Verify Step by Step. CoRR,](https://doi.org/10.48550/ARXIV.2305.20050)
191 abs/2305.20050.
192 Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
193 [Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023b. Let’s Verify Step by Step. CoRR,](https://doi.org/10.48550/ARXIV.2305.20050)
194 abs/2305.20050.
195 Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu
196 [Yang, Jian Jiao, Nan Duan, and Weizhu Chen. 2024. Rho-1: Not All Tokens Are What You Need.](https://doi.org/10.48550/ARXIV.2404.07965)
197 _CoRR, abs/2404.07965._
198 Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun
199 [Zhu, Lei Meng, Jiao Sun, and Abhinav Rastogi. 2024. Improve Mathematical Reasoning in](https://doi.org/10.48550/ARXIV.2406.06592)
200 [Language Models by Automated Process Supervision. CoRR, abs/2406.06592.](https://doi.org/10.48550/ARXIV.2406.06592)
201 [Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. 2024. ReFT:](https://doi.org/10.48550/ARXIV.2401.08967)
202 [Reasoning with Reinforced Fine-tuning. CoRR, abs/2401.08967.](https://doi.org/10.48550/ARXIV.2401.08967)
203 Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia Yang.
204 [2023. Let’s reward step by step: Step-level reward model as the Navigators for Reasoning. CoRR,](https://doi.org/10.48550/ARXIV.2310.10080)
205 abs/2310.10080.
206 Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap,
207 [Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous Methods for Deep](http://arxiv.org/abs/1602.01783)
208 [Reinforcement Learning. CoRR, abs/1602.01783.](http://arxiv.org/abs/1602.01783)
209 Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan
210 [Wierstra, and Martin A. Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning.](http://arxiv.org/abs/1312.5602)
211 _CoRR, abs/1312.5602._
212 [OpenAI. 2024. OpenAI o1 System Card.](https://api.semanticscholar.org/CorpusID:272684752)
213 Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong
214 Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton,
215 Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and
216 [Ryan Lowe. 2022. Training language models to follow instructions with human feedback. CoRR,](https://doi.org/10.48550/ARXIV.2203.02155)
217 abs/2203.02155.
218 [Qwen. 2024. Qwen2.5-Math: The world’s leading open-sourced mathematical LLMs. https:](https://qwenlm.github.io/blog/qwen2.5-math/)
219 `//qwenlm.github.io/blog/qwen2.5-math/. Accessed: 2024-09-23.`
-----
220 Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea
221 [Finn. 2023. Direct Preference Optimization: Your Language Model is Secretly a Reward Model.](https://doi.org/10.48550/ARXIV.2305.18290)
222 _CoRR, abs/2305.18290._
223 [John Schulman. 2020. Notes on the KL-divergence Approximation. http://joschu.net/blog/](http://joschu.net/blog/kl-approx.html)
224 `kl-approx.html. Accessed: 2024-09-23.`
225 [John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. 2015. Trust](http://arxiv.org/abs/1502.05477)
226 [Region Policy Optimization. CoRR, abs/1502.05477.](http://arxiv.org/abs/1502.05477)
227 [John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, and Pieter Abbeel. 2016. High-](http://arxiv.org/abs/1506.02438)
228 [dimensional Continuous Control Using Generalized Advantage Estimation. In 4th International](http://arxiv.org/abs/1506.02438)
229 _Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016,_
230 _Conference Track Proceedings._
231 [John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal](http://arxiv.org/abs/1707.06347)
232 [Policy Optimization Algorithms. CoRR, abs/1707.06347.](http://arxiv.org/abs/1707.06347)
233 Amrith Setlur, Saurabh Garg, Xinyang Geng, Naman Garg, Virginia Smith, and Aviral Kumar. 2024.
234 [RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-fold.](https://doi.org/10.48550/ARXIV.2406.14532)
235 _CoRR, abs/2406.14532._
236 Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li,
237 [Y. Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the Limits of Mathematical Reasoning in](https://doi.org/10.48550/ARXIV.2402.03300)
238 [Open Language Models. CoRR, abs/2402.03300.](https://doi.org/10.48550/ARXIV.2402.03300)
239 David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driess
240 che, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander
241 Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap,
242 [Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. Mastering the](https://doi.org/10.1038/NATURE16961)
243 [game of Go with deep neural networks and tree search. Nat., 529(7587):484–489.](https://doi.org/10.1038/NATURE16961)
244 David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez,
245 Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, Karen
246 [Simonyan, and Demis Hassabis. 2017. Mastering Chess and Shogi by Self-play with a General](http://arxiv.org/abs/1712.01815)
247 [Reinforcement Learning Algorithm. CoRR, abs/1712.01815.](http://arxiv.org/abs/1712.01815)
248 Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J.
249 Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, Abhishek Kumar, Alex Alemi, Alex
250 Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin F. Elsayed, Hanie Sedghi, Igor
251 Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen
252 Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp, Lechao Xiao, Maxwell L. Bileschi, Noah
253 Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi Qian, Yamini Bansal, Ethan Dyer,
254 [Behnam Neyshabur, Jascha Sohl-Dickstein, and Noah Fiedel. 2023. Beyond Human Data: Scaling](https://doi.org/10.48550/ARXIV.2312.06585)
255 [Self-training for Problem-solving with Language Models. CoRR, abs/2312.06585.](https://doi.org/10.48550/ARXIV.2312.06585)
256 [Xianghui Sun, Yunjie Ji, Baochang Ma, and Xiangang Li. 2023. A Comparative Study between](https://doi.org/10.48550/ARXIV.2304.08109)
257 [Full-parameter and LoRA-based Fine-tuning on Chinese Instruction Data for Instruction Following](https://doi.org/10.48550/ARXIV.2304.08109)
258 [Large Language Model. CoRR, abs/2304.08109.](https://doi.org/10.48550/ARXIV.2304.08109)
259 [Richard S. Sutton and Andrew G. Barto. 1998. Introduction to Reinforcement Learning. In Introduc-](https://api.semanticscholar.org/CorpusID:261579713)
260 _tion to Reinforcement Learning._
261 [Richard S. Sutton, David A. McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy Gradient](http://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation)
262 [Methods for Reinforcement Learning with Function Approximation. In Advances in Neural](http://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation)
263 _Information Processing Systems 12, [NIPS Conference, Denver, Colorado, USA, November 29 -_
264 _December 4, 1999], pages 1057–1063. The MIT Press._
265 Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
266 Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian
267 Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin
268 Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar
269 [Hosseini, Rui Hou, Hakan Inan, et al. 2023. Llama 2: Open Foundation and Fine-tuned Chat](https://doi.org/10.48550/ARXIV.2307.09288)
270 [Models. CoRR, abs/2307.09288.](https://doi.org/10.48550/ARXIV.2307.09288)
-----
271 Mark Towers, Ariel Kwiatkowski, Jordan Terry, John U Balis, Gianluca De Cola, Tristan Deleu,
272 Manuel Goulão, Andreas Kallinteris, Markus Krimmel, Arjun KG, et al. 2024. Gymnasium: A
273 standard interface for reinforcement learning environments. arXiv preprint arXiv:2407.17032.
274 [Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, and Thang Luong. 2024. Solving olympiad geometry](https://doi.org/10.1038/S41586-023-06747-5)
275 [without human demonstrations. Nat., 625(7995):476–482.](https://doi.org/10.1038/S41586-023-06747-5)
276 Jonathan Uesato, Nate Kushman, Ramana Kumar, H. Francis Song, Noah Y. Siegel, Lisa Wang,
277 [Antonia Creswell, Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems with](https://doi.org/10.48550/ARXIV.2211.14275)
278 [process- and outcome-based feedback. CoRR, abs/2211.14275.](https://doi.org/10.48550/ARXIV.2211.14275)
279 Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Y Wu, and Zhifang Sui.
280 2023. Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning. arXiv
281 _preprint arXiv:2312.08935._
282 Ziyu Wang, Nando de Freitas, and Marc Lanctot. 2015. Dueling network architectures for deep
283 reinforcement learning. CoRR abs/1511.06581 (2015). arXiv preprint arXiv:1511.06581.
284 Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P. Lillicrap, Kenji Kawaguchi,
285 [and Michael Shieh. 2024. Monte Carlo Tree Search Boosts Reasoning via Iterative Preference](https://doi.org/10.48550/ARXIV.2405.00451)
286 [Learning. CoRR, abs/2405.00451.](https://doi.org/10.48550/ARXIV.2405.00451)
287 Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu,
288 [and Yi Wu. 2024. Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study. CoRR,](https://doi.org/10.48550/ARXIV.2404.10719)
289 abs/2404.10719.
290 Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. 2023.
291 [Scaling Relationship on Learning Mathematical Reasoning with Large Language Models. CoRR,](https://doi.org/10.48550/ARXIV.2308.01825)
292 abs/2308.01825.
293 [Dan Zhang, Sining Zhoubian, Yisong Yue, Yuxiao Dong, and Jie Tang. 2024. ReST-MCTS*: LLM](https://doi.org/10.48550/ARXIV.2406.03816)
294 [Self-training via Process Reward Guided Tree Search. CoRR, abs/2406.03816.](https://doi.org/10.48550/ARXIV.2406.03816)
295 Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao,
296 [Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng. 2024. SGLang:](http://arxiv.org/abs/2312.07104)
297 [Efficient Execution of Structured Language Model Programs.](http://arxiv.org/abs/2312.07104)
298 Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin,
299 Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu,
300 Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun,
301 [Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, and Xuanjing Huang. 2023. Secrets of RLHF in Large](https://doi.org/10.48550/ARXIV.2307.04964)
302 [Language Models Part I: PPO. CoRR, abs/2307.04964.](https://doi.org/10.48550/ARXIV.2307.04964)
303 Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul F.
304 [Christiano, and Geoffrey Irving. 2019a. Fine-tuning Language Models from Human Preferences.](http://arxiv.org/abs/1909.08593)
305 _CoRR, abs/1909.08593._
306 Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul F.
307 [Christiano, and Geoffrey Irving. 2019b. Fine-tuning Language Models from Human Preferences.](http://arxiv.org/abs/1909.08593)
308 _CoRR, abs/1909.08593._
-----
309 **A** **Related Work**
310 **Credit Assignment in Post-Training of LLMs** PPO (Schulman et al., 2017), as applied in
311 Reinforcement Learning from Human Feedback (RLHF, Ouyang et al. 2022), was among the pioneering
312 approaches for RL finetuning of LLMs. While effective, PPO is known for its computational overhead
313 and sensitivity to hyperparameters. As a result, subsequent approaches have sought to simplify or
314 bypass PPO without sacrificing performance. For example, RL-free methods such as DPO (Rafailov
315 et al., 2023) and its newer variants (Azar et al., 2023; Ethayarajh et al., 2024) operate in a bandit
316 setting, where the entire response is treated as a single action, without distinguishing intermediate
317 states. Similarly, methods based on rejection sampling, like RestEM (Singh et al., 2023), finetune
318 the model on full high-reward responses. In the realm of PPO simplifications, methods like RLOO
319 (Ahmadian et al., 2024) and GRPO (Shao et al., 2024) abandon the value network of PPO. They
320 sample a group of M responses per prompt and compute the average reward (of the other M − 1
321 responses) as a policy gradient baseline for all tokens in the group, effectively treating the entire
322 response as a single action. Recent works, however, have started to emphasize the importance of
323 finer credit assignment. Works such as Hwang et al. (2024) and Setlur et al. (2024) introduce Monte
324 Carlo-based mechanisms that detect key errors in reasoning chains and use them as negative
325 samples in DPO. Unlike these approaches, which rely on ad-hoc heuristics, our work fully embraces
326 the RL training pipeline and addresses the core issue of inaccurate value estimation in PPO to unlock
327 its full potential. In parallel, there has been interest (Hosseini et al., 2024; Lightman et al., 2023a)
328 in building better verifiers and reward models that can provide per-step feedback. Although these
329 methods often require costly human annotation, recent efforts (Ma et al., 2023; Uesato et al., 2022;
330 Luo et al., 2024; Wang et al., 2023) have automated data collection using MC rollouts. VinePPO is
331 orthogonal to these approaches, as it operates within PPO-based training, optimizing a given task’s
332 reward rather than designing new reward models. Our method can further benefit from improvements
333 in reward modeling as they emerge.
334 **Value Estimation in RL and Monte Carlo Tree Search** Deep RL algorithms are categorized into
335 value-based and policy-based methods. Value-based algorithms, such as DQN and its successors
336 (Mnih et al., 2013; Wang et al., 2015), train a neural network to predict values and derive the policy
337 from the learned value function. Policy-based methods, including A2C, A3C (Mnih et al., 2016),
338 SAC (Haarnoja et al., 2018), and PPO (Schulman et al., 2017), train a policy directly and use value
339 estimates only to guide the policy updates. Typically, these methods rely on critic networks for value
340 prediction. An exception is a variant of TRPO (Schulman et al., 2015), known as the “Vine” variant,
341 where state value estimation is performed using MC samples. However, the authors note that the Vine
342 variant is limited to environments that allow easy resets to any state, which is uncommon in most
343 RL settings as the focus is on black-box engines or real-world deployment. In contrast to common
344 RL environments, language generation allows for easy resets to any intermediate state, presenting
345 unique opportunities for RL tuning of LLMs. In fact, when easy resets were available in RL (e.g., Go,
346 Chess), strong MC-based methods like AlphaGo (Silver et al., 2016) and AlphaZero (Silver et al.,
347 2017) have emerged. AlphaGo trains a policy using expert move data and self-play, alongside a
348 value network to predict the win probability from a given state. Then, during inference, it applies a
349 tree search guided by MC rollouts and the value network to find the best possible moves. AlphaZero
350 advances this approach by distilling MCTS outcomes into its policy, removing the need for expert
351 data. Recent works have adapted AlphaZero's principles and lessons to LLMs, using similar search
352 techniques during inference to improve responses and during training to find better trajectories for
353 distillation (Xie et al., 2024; Chen et al., 2024; Feng et al., 2023; Zhang et al., 2024; Hao et al., 2023).
354 While this is a promising direction, VinePPO is not an MCTS method; rather, it utilizes MC samples
355 solely for value estimation and only during PPO training to improve credit assignment. In fact,
356 inference-time search like MCTS can be layered on top of VinePPO to further enhance performance.
357 **B** **Limitations**
358 In this work, we focused on complex mathematical reasoning tasks, which provide a clear testbed for
359 evaluating the impact of accurate credit assignment. While VinePPO is a general-purpose modification
360 to PPO for LLM finetuning, its performance on more general human alignment tasks remains unclear.
361 It is plausible that the performance gap between VinePPO and PPO would be less pronounced on
-----
362 tasks where the value network can generalize more easily. For example, in tasks like detecting toxicity
363 in partial responses, the value network may perform well, reducing the advantage VinePPO offers.
364 **C** **Background**
365 We focus on the RL tuning phase in the RLHF pipeline, following Ziegler et al. (2019a); Ouyang
366 et al. (2022); Shao et al. (2024). In this section, we provide an overview of actor-critic finetuning as
367 implemented in PPO.
368 **RL Finetuning** In this setup, the policy πθ represents a language model that generates a response
369 y = [y0, . . ., yT−1] autoregressively given an input x = [x0, . . ., xM−1], such that πθ(y|x) =
370 ∏_{t=0}^{T−1} πθ(yt|x; y<t). The goal of RL finetuning is to maximize the expected undiscounted (γ = 1)
371 finite-horizon return, while incorporating a KL-divergence constraint to regularize the policy and
372 prevent it from deviating too far from a reference policy π0 (typically the initial supervised finetuned
373 (SFT) model). The objective can be written as:
$$J(\theta) = \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\left[\mathcal{R}(x; y)\right] - \beta\, \mathrm{KL}[\pi_\theta \,\|\, \pi_0], \tag{3}$$
374 where D is the dataset of prompts, R(x; y) is the complete sequence-level reward function, and β
375 controls the strength of the KL penalty. Note that the policy πθ is initialized from π0.
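As a rough illustration of this objective, the following sketch estimates Eq. 3 from a batch of sampled responses; the reward values and the sequence log-probabilities under πθ and π0 are assumed to be computed elsewhere, and the naive log-ratio KL estimate is just one possible choice.

```python
def estimate_objective(rewards, logprobs_policy, logprobs_ref, beta):
    """Monte Carlo estimate of J(theta) = E[R(x; y)] - beta * KL[pi_theta || pi_0].

    rewards:         R(x; y) for each sampled response
    logprobs_policy: log pi_theta(y | x) for each response
    logprobs_ref:    log pi_0(y | x) for the same responses
    """
    n = len(rewards)
    mean_reward = sum(rewards) / n
    # Naive KL estimate: average of log pi_theta(y|x) - log pi_0(y|x) over samples from pi_theta.
    kl_estimate = sum(lp - lr for lp, lr in zip(logprobs_policy, logprobs_ref)) / n
    return mean_reward - beta * kl_estimate
```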
376 **Language Environment as an MDP** The language generation is typically modeled as a token-level
377 Markov Decision Process (MDP) in an actor-critic setting, where each response y is an episode.
378 Specifically, the state at time step t, st, is the concatenation of the input prompt and the tokens
379 generated up to that point: st = x; y<t = [x0, . . ., xM−1, y0, . . ., yt−1] ∈ S. At each time step, the
380 action at corresponds to generating the next token yt from a fixed vocabulary. The process begins
381 with the initial state s0 = x, and after each action, the environment transitions to the next state,
382 st+1 = st; [at], by appending the action at to the current state st. In this case, since states are always
383 constructed by concatenating tokens, the environment dynamics are known and the transition function
384 is deterministic, i.e., P(st+1|st, at) = 1. During the generation process, the reward rt is set to zero
385 for all intermediate actions at's, with the sequence-level reward R(x; y) only applied at the final
386 step when the model stops generating. A trajectory τ = (s0, a0, s1, a1, . . . ) is therefore a sequence
387 of state-action pairs, starting from the input prompt until the terminal state. Finally, we define the
388 cumulative return of a trajectory τ as R(τ) = ∑_{t=0}^{T−1} rt = R(sT) = R(x; y).
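To make the dynamics concrete, here is a minimal sketch of this deterministic token-level environment; `is_terminal` and `sequence_reward` are hypothetical placeholders for the stopping condition and the sequence-level reward R(x; y).

```python
# Sketch of the token-level language MDP: states are token prefixes, transitions are
# deterministic concatenation, and the reward is zero until the episode terminates.

def env_step(state, action, is_terminal, sequence_reward):
    """One transition s_{t+1} = s_t ; [a_t] with r_t = 0 for intermediate steps."""
    next_state = state + [action]                         # P(s_{t+1} | s_t, a_t) = 1
    done = is_terminal(next_state)
    reward = sequence_reward(next_state) if done else 0.0
    return next_state, reward, done
```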
389 **Policy Gradient** Given this MDP formulation, policy gradient methods like PPO maximize Eq. 3
390 by repeatedly sampling trajectories and taking a step in the direction of the gradient gpg := ∇θJ(θ)
391 to update the parameters. Policy gradient gpg takes the following form:
$$g_{\mathrm{pg}} = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A(s_t, a_t)\right], \quad \text{where } s_t = x; y_{<t},\; a_t = y_t, \tag{4}$$
392 τ = (s0, a0, . . . ), and A(st, at) is the advantage function. The gradient gpg points towards increasing
393 the probability πθ(at|st) when A(st, at) > 0 and the opposite when A(st, at) < 0. Intuitively, the
394 advantage function A(st, at) quantifies how much better taking action at at state st is compared to
395 the average action taken in that state under the policy. Formally, it is defined as:
$$A(s_t, a_t) = Q(s_t, a_t) - V(s_t) = r_t + \gamma V(s_{t+1}) - V(s_t), \tag{5}$$
396 where Q(st, at) is the state-action value and V (st) is the per-state value function[5]. The value
397 function, V (st) : S → R, offers a long-term assessment of how desirable a particular state is under
398 the current policy. Formally, it represents the expected cumulative reward obtained from starting in
399 state st and following the policy thereafter[6]: V(st) = Eτ∼πθ [R(τ) | s0 = st]. PPO uses the same
400 advantage-weighted policy gradient as in Eq. 4, but constrains policy updates through clipping to
401 ensure stable training. For full details, see Appendix J.
5Such derivation is possible as the language environment is deterministic.
6We drop the dependency on πθ for brevity.
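For concreteness, a minimal PyTorch-style sketch of the advantage-weighted objective in Eq. 4 (without PPO's clipping, which is covered in Appendix J) might look as follows; the per-token `logprobs` and `advantages` tensors are assumed to be computed elsewhere.

```python
import torch

def policy_gradient_loss(logprobs: torch.Tensor, advantages: torch.Tensor) -> torch.Tensor:
    """Negative surrogate whose gradient is -g_pg from Eq. 4.

    logprobs:   log pi_theta(a_t | s_t) for each generated token, shape (T,)
    advantages: A(s_t, a_t) for each token, treated as constants, shape (T,)
    """
    return -(logprobs * advantages.detach()).sum()
```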
-----
402 **Estimating the Advantage via Value Networks** In practice, the advantage function A(st, at) is
403 not known a priori and is commonly estimated by first using a value network V̂ϕ to approximate the
404 true value function V(st) and then plugging the learned values into Eq. 5 or other variants such as
405 GAE (Schulman et al., 2016). The value network is parameterized by ϕ and trained alongside the
406 policy network πθ. The training objective for the value network minimizes the mean squared error
407 between the predicted value and the empirical return:
$$\mathcal{L}_V(\phi) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\frac{1}{2}\left(\hat{V}_\phi(s_t) - G_t\right)^2\right], \tag{6}$$
408 where Gt = ∑_{t′=t}^{T−1} rt′ is the empirical return from state st. PPO uses the same objective for V̂ϕ but
409 enhances stability by applying clipping during training (see Appendix J.1 for details). In RL-tuning
410 of LLMs, the value network is often initialized using the initial policy π0 (or the reward model when
411 available), with the language modeling head swapped out for a scalar output head to predict values
412 (Zheng et al., 2023). This setup leverages the prior knowledge of the pretrained model for value
413 estimation.
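As a rough sketch of Eq. 6 (omitting the value clipping mentioned above), the value network's regression loss could be written as below; `values_pred` and `returns` are assumed to be per-state tensors gathered from sampled trajectories.

```python
import torch

def value_loss(values_pred: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
    """L_V(phi) = E[ 1/2 * (V_phi(s_t) - G_t)^2 ], averaged over the sampled states."""
    return 0.5 * (values_pred - returns.detach()).pow(2).mean()
```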
414 **D** **Accurate Credit Assignment with VinePPO**
415 As outlined in Appendix C, a step in the PPO gradient update (Eq. 4) aims to increase the probability
416 of better-than-average actions while decreasing the probability of those that perform worse—a
417 process quantified by the advantage function A(st, at). However, the true advantage function is generally
418 unknown and must be estimated, typically by substituting estimates from a value network into Eq. 5.
419 As we will elaborate in Appendix I, neural networks are imperfect function approximators and can
420 result in biased value estimates. Fortunately, the language environment offers a useful property that
421 allows for deriving an unbiased estimator of value function V (st). In this section, we first describe
422 this property and then explain how VinePPO leverages it to enhance credit assignment.
423 **D.1** **Language Environment**
424 The language environment, as defined in Appendix C, possesses a unique property not commonly
425 found in traditional RL settings: the ability to reset to any point within a trajectory. Since states are
426 simply concatenated tokens, we can prompt the language model πθ to generate continuations from any
427 intermediate state. This flexibility allows us to explore alternative future paths from arbitrary points
428 in a generation. In contrast, standard RL typically collects training data through sequential rollouts, a
429 process reflected in the design of the Gym (Towers et al., 2024), the de facto RL environment API.
430 Gym environments provide two primary functions: (1) env.reset(), which resets the environment
431 to its initial state, and (2) env.step(action), which advances the environment based on the agent’s
432 action. There is no mechanism for resetting to an arbitrary intermediate state within a trajectory. This
433 design suits classic RL, where the focus is on black-box game engines or real-world deployment.
434 Moreover, recent advancements in LLM inference engines (Kwon et al., 2023; Zheng et al., 2024)
435 have dramatically increased the speed of on-the-fly response generation—for example, an LLM with
436 7B parameters can generate up to 5,000 tokens per second on a single GPU[7]. This computational
437 efficiency makes it feasible to conduct fast environment simulation, opening up unique opportunities
438 for RL training of LLMs.
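The sketch below illustrates this reset-by-prompting idea: "resetting" to an intermediate state st amounts to re-submitting the prefix x; y<t to the inference engine and sampling continuations; `engine_generate` is a hypothetical wrapper around whichever serving stack (e.g., vLLM or SGLang) is used.

```python
# "Resetting" the language environment to an intermediate state s_t is simply
# prompting the current policy with the prefix x; y_<t and sampling fresh continuations.
# `engine_generate(prompt, n)` is a hypothetical wrapper over an inference engine.

def rollout_from_state(prompt, partial_response, n, engine_generate):
    state = prompt + partial_response               # s_t = x; y_<t, rebuilt by concatenation
    continuations = engine_generate(state, n=n)     # n independent futures from s_t
    return [partial_response + c for c in continuations]
```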
439 **E** **Experimental Setup**
440 **Datasets and Pretrained LLMs** We conduct our experiments using strong LLMs specifically
441 pretrained for mathematical reasoning: (1) DeepSeekMath 7B (Shao et al., 2024) and (2) RhoMath
442 1.1B (Lin et al., 2024), both of which have been trained on diverse mathematical and natural language
443 corpora. Having models of different sizes allows for evaluating the effect of scaling. We focus
444 on the mathematical reasoning dataset MATH (Hendrycks et al., 2021), which consists of competition-
445 level mathematical problems and presents a range of difficulty levels that allow for comprehensive
446 evaluation of reasoning abilities. To ensure our setup is reproducible, we only make use of publicly
447 available data and checkpoints on Huggingface. For each dataset, we finetune the base LLMs on
7Nvidia A100 GPU with model loaded in 16bit precision.
-----
[Figure F.1 line plots: test accuracy (%) vs. training step (0–1000) on MATH for RhoMath 1.1B and DeepSeekMath 7B, comparing VinePPO and PPO.]
Figure F.1: Comparison of the training behavior between VinePPO and PPO. VinePPO demonstrates
consistently higher accuracy (as measured on the test set of MATH dataset) throughout the training.
448 their respective training sets to obtain the initial SFT models (π0). In all experiments, we employ
449 full-parameter finetuning to allow utilization of the models' full capacity (Biderman et al., 2024; Sun
450 et al., 2023).
451 **Evaluation** We evaluate model performance on the test sets of each dataset, using accuracy
452 (Pass@1) as our primary metric, which measures the correctness of the final answers produced by the
453 models. As our baseline, we adopt the standard PPO framework, as commonly implemented for LLM
454 finetuning (Ouyang et al., 2022; Touvron et al., 2023; Huang et al., 2024). Additionally, we compare
455 our proposed method against RestEM (Singh et al., 2023), which applies Expert Iteration, a form
456 of Iterative Rejection Finetuning (Yuan et al., 2023; Anthony et al., 2017) with measures to prevent
457 overfitting. All methods are initialized from the same SFT checkpoint π0 to ensure a fair comparison.
458 **Hyperparameters and Training Details** To ensure standard PPO (and its value network) has a
459 healthy training and our evaluation reflects its full potential, we first focus our hyperparameter search
460 on PPO parameters (such as KL penalty coefficient, batch size, minibatch size, GAE λ, number of
461 epochs per iteration) and apply all well-known techniques and best practices (Huang et al., 2024;
462 Ivison et al., 2024) in PPO tuning (Refer to Appendix K.2 for the full list). VinePPO borrows
463 the exact same hyperparameters from PPO and only modifies the advantage A(st, at) estimation,
464 keeping the rest of the pipeline unchanged. This allows us to isolate the effect of accurate credit
465 assignment. We found that sampling K = 9 trajectories in V̂MC performs well; the effect of varying
466 K is fully analyzed in Fig. 4. For the other baseline, we closely follow the original setup while
467 ensuring consistency in training conditions for a fair comparison. We choose the best checkpoint
468 based on a held-out validation set for all experiments. Full implementation details, including all
469 hyperparameters and training procedures, are provided in Appendix K.5.
470 **F** **Training Plots**
-----
[Figure H.3 line plots: accuracy (%) vs. KL(πθ ∥ π0) on MATH for RhoMath 1.1B and DeepSeekMath 7B, comparing VinePPO and PPO.]
Figure H.3: Comparing task accuracy and KL divergence during training on the MATH dataset.
VinePPO consistently achieves higher accuracy at similar KL levels, reflecting its more efficient
credit assignment and focused updates.
471 **G** **Temperature Tolerance**
472 Sampling temperature is a critical hyperparameter that controls
473 the randomness of trajectories generated by the model.
474 At higher temperatures, the model generates more diverse
475 trajectories, encouraging exploration that can accelerate training,
476 especially during the early stages. However, increased diversity
477 in the trajectories also presents a challenge: the value network in
478 PPO must generalize over a wider range of states, complicating
479 value estimation. To evaluate the effect of temperature on
480 performance, we compared VinePPO and PPO runs using different
481 temperatures T ∈ {0.6, 0.8, 1.0} over 360 training steps and
482 analyzed their training dynamics. As shown in Fig. G.2, VinePPO
483 consistently benefits from higher temperatures, achieving faster
484 convergence and higher accuracy. In contrast, PPO not only
485 fails to benefit from increased temperature, but also diverges
486 when the temperature is set to its highest value, T = 1.0, where
487 the trajectories are most diverse.
488 These findings raise concerns about the scalability of PPO,
489 particularly in real-world scenarios involving large and diverse
490 datasets, in contrast to VinePPO which maintains robust value
491 estimation regardless of the diversity in the trajectories.
492 **H** **KL Divergence**
[Figure G.2 line plot: accuracy (%) vs. training step (0–300), with the Initial SFT level marked; methods VinePPO and PPO at temperatures 0.6, 0.8, 1.0.]
Figure G.2: Test set accuracy during training at higher temperatures, shown for DeepSeekMath
7B on the MATH dataset. VinePPO can tolerate higher temperatures.
493 **I** **Value Prediction Analysis**
494 Both PPO and VinePPO estimate values as a means of credit assignment, one employing a value
495 network and the other using MC samples. More accurate value estimates lead to more precise
496 advantage computations, resulting in more effective policy updates. As shown in Section 4, VinePPO
497 consistently outperforms PPO. In this section, we explore the underlying reasons for this performance
498 gap by closely analyzing the value prediction of both methods. To assess the accuracy of value
499 predictions, we first establish a “ground truth” value for each state within trajectories, denoted as
500 V̂*(st), by running multiple MC rollouts (256 in our case) and averaging the returns. This provides a
501 low-variance reference value. We then compare the value predictions in both methods against this
502 ground truth on the DeepSeekMath 7B and MATH datasets.
-----
[Figure I.4 scatter panels at Steps 60, 360, 540, and 960: predicted value (y-axis) vs. ground truth value (x-axis), top row PPO, bottom row VinePPO.]
Figure I.4: Distribution of predicted values for each state vs. ground truth (computed using 256 MC
samples) during training for DeepSeekMath 7B on MATH dataset, highlighting the nature of errors.
VinePPO achieves much lower Mean Absolute Error (MAE).
[Figure I.5 panels: PPO @ Step 420, PPO @ Step 960, VinePPO @ Step 420, VinePPO @ Step 960; value prediction error (y-axis) vs. normalized reasoning step (x-axis).]
Figure I.5: Visualizing the Mean Absolute Error (MAE) of the value predictions in different point of
reasoning chain. Value Network in PPO fails to generalize as the reasoning chain progresses, while
VinePPO’s value estimates become more accurate as the model become more deterministic.
**Accuracy** Fig. I.4 presents the distribution of value predictions during training. The errors produced by VinePPO and PPO differ significantly in their nature. VinePPO’s estimates are unbiased, with variance peaking at 0.5 and dropping to zero at 0 and 1. In contrast, the value network used in PPO exhibits high bias, often misclassifying bad states ($\hat{V}^{*}(s_t) = 0$) as good and vice versa. To further visualize accuracy, we consider a value prediction as “correct” if it falls within 0.05 of the ground truth. The accuracy of this classification formulation is shown in Fig. I.6. The value network starts with low accuracy, improving gradually to a peak of 65%. In contrast, VinePPO consistently achieves an accuracy of 70–90% throughout the training process, pointing to its more reliable approach.
**Error Per Reasoning Step** To gain insight into the mechanisms behind value prediction, we analyze the prediction error at each reasoning step within a trajectory. As illustrated in Fig. I.5, PPO’s value estimation error tends to increase as the reasoning chain progresses. We hypothesize this is because, at earlier steps, partial trajectories more closely resemble the training data, allowing the value network to rely on memorization. However, as reasoning progresses and the states become unfamiliar, the value network needs to generalize, where it tends to fail. In contrast, VinePPO exhibits the opposite trend: its value prediction error decreases as reasoning advances. We attribute this to the increasing determinism of later reasoning steps, which condition on prior actions. This determinism allows the same number of MC samples to provide more accurate estimates.
**J** **Reviewing PPO**

PPO, as used in RL tuning of LLMs, formulates language generation as a token-level MDP (Appendix C), where each response $\mathbf{y}$ is an episode. The state at time step $t$, $s_t \in \mathcal{S}$, is the concatenation of the prompt and the tokens generated so far: $s_t = \mathbf{x};\mathbf{y}_{<t} = [x_0, \ldots, x_{M-1}, y_0, \ldots, y_{t-1}]$.
Figure I.6: Value prediction accuracy formulated as a classification problem, where a prediction is considered correct if it falls within 0.05 of the ground truth.
The action $a_t$ corresponds to generating the next token $y_t$ from the model’s vocabulary. Given a prompt $\mathbf{x}$, an episode of this MDP starts from the initial state $s_0 = \mathbf{x}$, and with each action taken, the environment moves to a subsequent state, $s_{t+1} = s_t;[a_t]$, by appending the action $a_t$ to the existing state $s_t$. In the language environment, because states are always formed by concatenating tokens, the environment dynamics are fully known and the transition function is deterministic, meaning $P(s_{t+1} \mid s_t, a_t) = 1$. Throughout the generation process, the reward $r_t$ is set to zero for all intermediate actions $a_t$, with the sequence-level reward $\mathcal{R}(\mathbf{x};\mathbf{y})$ applied only at the final step, when the model stops generating. That is:
$$
r_t = r(s_t, a_t) =
\begin{cases}
\mathcal{R}(\mathbf{x};\mathbf{y}) & \text{if } t = T-1, \text{ where } s_{t+1} \text{ is terminal},\\
0 & \text{otherwise.}
\end{cases}
\qquad (7)
$$
A trajectory $\tau = (s_0, a_0, s_1, a_1, \ldots)$ thus represents a sequence of state–action pairs that begins at the input prompt and continues until reaching the terminal state. Finally, the cumulative return of a trajectory $\tau$ is defined as $R(\tau) = \sum_{t=0}^{T-1} r_t = r_{T-1} = \mathcal{R}(\mathbf{x};\mathbf{y})$.
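With the binary correctness reward used in our experiments, Eq. 7 amounts to the following minimal sketch; the `answers_match` helper is a placeholder for the answer-extraction and comparison scripts described in Appendix K.1:

```python
def step_reward(t: int, T: int, full_response: str, reference_answer: str,
                answers_match) -> float:
    """r_t is zero for every intermediate step; the sequence-level reward
    R(x; y) in {0, 1} is applied only at the final step t = T - 1."""
    if t < T - 1:
        return 0.0
    return 1.0 if answers_match(full_response, reference_answer) else 0.0
```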
The goal of RL tuning is to maximize the expected return of the model’s responses to prompts in the dataset, as defined by the reward function $\mathcal{R}$ (Eq. 3). PPO, similar to other policy gradient methods, achieves this goal by repeatedly sampling trajectories for a batch of prompts sampled from $\mathcal{D}$ and taking multiple optimization steps in the direction of the gradient $g_{\mathrm{ppo}}$ to update the parameters. The PPO gradient $g_{\mathrm{ppo}}$ is defined as the gradient of the following loss:
$$
\mathcal{L}_{\mathrm{ppo}}(\theta) = \mathbb{E}_{\tau \sim \pi_{\theta_k}}\!\left[ \sum_{t=0}^{T-1} \min\!\left( \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_k}(a_t \mid s_t)}\, A_t^{\theta_k},\; \mathrm{clip}(\theta)\, A_t^{\theta_k} \right) - \beta\, \mathrm{KL}[\pi_\theta \,\|\, \pi_0] \right]
\qquad (8)
$$
where $\pi_{\theta_k}$ is the policy at the previous iteration, $\epsilon$ is the clipping parameter, $\beta$ is the KL penalty coefficient, $A_t^{\theta_k} = A^{\theta_k}(s_t, a_t)$ is the advantage estimate for policy $\pi_{\theta_k}$, and the $\mathrm{clip}(\theta)$ function is:
$$
\mathrm{clip}(\theta) = \mathrm{clip}\!\left( \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_k}(a_t \mid s_t)},\; 1-\epsilon,\; 1+\epsilon \right).
\qquad (9)
$$
Note that the KL penalty could also be added to the reward function $\mathcal{R}$. We follow the more recent implementations (Shao et al., 2024; Qwen, 2024), where it is added to the loss function. The KL term can be computed using the following unbiased estimator (Schulman, 2020):
$$
\widehat{\mathrm{KL}}(\theta) = \frac{\pi_0(a_t \mid s_t)}{\pi_\theta(a_t \mid s_t)} - \log \frac{\pi_0(a_t \mid s_t)}{\pi_\theta(a_t \mid s_t)} - 1,
\qquad (10)
$$
where $\pi_0$ denotes the reference model (initial SFT).
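A minimal PyTorch-style sketch of this clipped surrogate with the per-token KL estimator of Eq. 10 is given below; tensor shapes and masking are simplified, so this illustrates Eqs. 8–10 rather than reproducing our exact training code:

```python
import torch

def ppo_loss(logp_new, logp_old, logp_ref, advantages, clip_eps=0.2, kl_beta=1e-4):
    """logp_*: per-token log-probs under the current policy, the behaviour policy
    (theta_k), and the reference policy (pi_0); advantages: per-token estimates."""
    ratio = torch.exp(logp_new - logp_old)                      # pi_theta / pi_theta_k
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_term = torch.min(unclipped, clipped)
    # Unbiased estimator of KL[pi_theta || pi_0] (Schulman, 2020), Eq. 10.
    log_ratio_ref = logp_ref - logp_new
    kl = torch.exp(log_ratio_ref) - log_ratio_ref - 1
    # Maximize the clipped objective minus the KL penalty, i.e. minimize its negative.
    return -(policy_term - kl_beta * kl).mean()
```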
**J.1** **Value Network**

In addition to the policy $\pi_\theta$, PPO also trains a separate value network $\hat{V}_\phi$ to obtain an estimate of the true values $V(s_t)$ of states $s_t$. Parameterized by $\phi$, the value network is trained alongside the policy network $\pi_\theta$ using the following loss:
$$
\mathcal{L}_{\mathrm{ValNet}}(\phi) = \frac{1}{2}\, \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \frac{1}{T} \sum_{t=0}^{T-1} \max\!\left( \big(\hat{V}_\phi(s_t) - G_t\big)^2,\; \big(\mathrm{clip}(\phi) - G_t\big)^2 \right) \right]
\qquad (11)
$$
where $\hat{V}_{\phi_k}$ is the value network at the previous iteration, $G_t = \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'}$ is the empirical return from state $s_t$, $\epsilon'$ is a value clipping parameter, and $\mathrm{clip}(\phi)$ is defined as:
$$
\mathrm{clip}(\phi) = \mathrm{clip}\!\left( \hat{V}_\phi(s_t),\; \hat{V}_{\phi_k}(s_t) - \epsilon',\; \hat{V}_{\phi_k}(s_t) + \epsilon' \right).
\qquad (12)
$$
In RL tuning of LLMs, the value network is typically initialized from the initial policy $\pi_0$ (or the reward model, if available), replacing the language modeling head with a scalar output head to predict values (Zheng et al., 2023). This approach takes advantage of the base model’s prior knowledge for value estimation.
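The clipped value loss of Eqs. 11–12 can be sketched as follows; again this is a simplified PyTorch illustration rather than the exact implementation:

```python
import torch

def value_loss(values, old_values, returns, clip_eps=0.2):
    """values: V_phi(s_t) from the current value network; old_values: V_phi_k(s_t)
    from the previous iteration; returns: empirical returns G_t (all same shape)."""
    # Clip the new prediction to stay within eps' of the previous iteration's value.
    clipped_values = old_values + torch.clamp(values - old_values, -clip_eps, clip_eps)
    loss_unclipped = (values - returns) ** 2
    loss_clipped = (clipped_values - returns) ** 2
    # Pessimistic (max) combination of the two squared errors, as in Eq. 11.
    return 0.5 * torch.max(loss_unclipped, loss_clipped).mean()
```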
**Advantage Estimation** Once the estimated values $\hat{V}_\phi(s_t)$ are obtained, the advantages $A(s_t, a_t)$ are computed using GAE (Schulman et al., 2016):
$$
\begin{aligned}
A(s_t, a_t) \approx \hat{A}^{\mathrm{GAE}}(s_t, a_t)
&= (1-\lambda)\left( \hat{A}_t^{(1)} + \lambda \hat{A}_t^{(2)} + \lambda^2 \hat{A}_t^{(3)} + \ldots \right) \\
&= \sum_{l=0}^{\infty} (\gamma\lambda)^l\, \delta_{t+l} \\
&= \sum_{l=0}^{\infty} (\gamma\lambda)^l \left[ r_{t+l} + \gamma \hat{V}_\phi(s_{t+l+1}) - \hat{V}_\phi(s_{t+l}) \right]
\end{aligned}
\qquad (13\text{–}16)
$$
where $\delta_t = r_t + \gamma \hat{V}_\phi(s_{t+1}) - \hat{V}_\phi(s_t)$ is the temporal difference error, $\lambda$ is the GAE parameter, and $\gamma$ is the discount factor. Also, we have:
$$
\hat{A}_t^{(k)} := \sum_{l=0}^{k-1} \gamma^l \delta_{t+l} = r_t + \gamma r_{t+1} + \cdots + \gamma^{k-1} r_{t+k-1} + \gamma^k \hat{V}_\phi(s_{t+k}) - \hat{V}_\phi(s_t).
\qquad (17)
$$
Adjusting the GAE parameter $\lambda$ allows for a trade-off between bias and variance in the advantage estimates. However, as we discuss in Appendix K.5, we found that $\lambda = 1$ works best in our experiments (similar to the findings of Luong et al. (2024) and Ahmadian et al. (2024)). In this case, GAE simplifies to the following form (assuming $\gamma = 1$): $\hat{A}^{\mathrm{GAE}}(s_t, a_t) = \sum_{t'=t}^{T-1} r_{t'} - \hat{V}_\phi(s_t)$.
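For reference, a straightforward implementation of the GAE recursion (Eqs. 13–16) over a single trajectory might look like the sketch below; with λ = γ = 1 it reduces to the return-minus-baseline form used in our experiments:

```python
def compute_gae(rewards, values, gamma=1.0, lam=1.0):
    """rewards: [r_0, ..., r_{T-1}]; values: [V(s_0), ..., V(s_{T-1}), V(s_T)]
    with V(s_T) = 0 for the terminal state. Returns one advantage per step."""
    T = len(rewards)
    advantages = [0.0] * T
    gae = 0.0
    # Backward recursion: A_t = delta_t + gamma * lam * A_{t+1}.
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages
```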
**K** **Experimental Details**
568 **K.1** **Datasets**
569 We focus on mathematical reasoning datasets that require step-by-step solutions and are widely used
570 to evaluate the reasoning capabilities of LLMs. Below is a brief overview of the datasets used in our
571 experiments:
572 **MATH (Hendrycks et al., 2021)** The MATH dataset contains problems from high school math
573 competitions, covering a wide range of topics such as algebra, geometry, and probability. For our
574 experiments, we use the OpenAI split provided by Lightman et al. (2023b), which consists of 500
575 problems for testing and 12,500 problems for training. We further divide the training set into 11,500
576 problems for training and 500 problems for validation. Each problem includes a step-by-step solution,
577 ending in a final answer marked by \boxed{} in the solution (e.g., “..so the smallest possible value
578 _of c is π ”). This marking allows for verification of the correctness of model-generated responses_
579 by comparing the final answer to the ground truth. We use the scripts provided by Lewkowycz et al.
580 (2022), Lightman et al. (2023b), and Shao et al. (2024) to extract and compare the final answers to
581 the ground truth.
Table 1: Summary of PPO hyperparameters used in the experiments.

| Parameter | Value |
|---|---|
| **Training** | |
| Optimizer | AdamW |
| Adam parameters (β1, β2) | (0.9, 0.999) |
| Learning rate | 1 × 10⁻⁶ |
| Weight decay | 0.0 |
| Max global gradient norm for clipping | 1.0 |
| Learning rate scheduler | Polynomial |
| Warm up | 3% of training steps |
| # Train steps for MATH dataset | 1000 steps (around 8 dataset epochs) |
| **General** | |
| Maximum response length | 1024 tokens |
| Maximum sequence length for RhoMath 1.1B | 2048 tokens |
| Maximum sequence length for DeepSeekMath 7B | 2500 tokens |
| **PPO** | |
| # Responses per prompt | 8 (search space: {8, 16, 32}) |
| # Episodes per PPO step | 512 (search space: {256, 512}) |
| # Prompts per PPO step | 512/8 = 64 |
| Mini-batch size | 64 |
| # Inner epochs per PPO step | 2 (search space: {1, 2}) |
| Sampling temperature | 0.6 (search space: {0.6, 0.8, 1.0}) |
| Discount factor γ | 1.0 |
| GAE parameter λ | 1.0 (search space: [0.95, 1.0]) |
| KL penalty coefficient β | 1e-4 (search space: {1e-1, 1e-2, 3e-3, 1e-4}) |
| Policy clipping parameter ϵ | 0.2 |
| Value clipping parameter ϵ′ | 0.2 |
Table 2: Summary of RestEM hyperparameters used in the experiments.

| Parameter | Value |
|---|---|
| **Training** | |
| Optimizer | AdamW |
| Adam parameters (β1, β2) | (0.9, 0.999) |
| Learning rate | 1 × 10⁻⁶ |
| Weight decay | 0.0 |
| Max global gradient norm for clipping | 1.0 |
| Learning rate scheduler | Polynomial |
| Warm up | 3% of training steps |
| **RestEM** | |
| # Iterations | 10 |
| # Sampled responses per prompt | 8 (search space: {8, 32}) |
| Sampling temperature | 0.6 (search space: {0.6, 0.8, 1.0}) |
| Checkpoints every # iterations | 500 steps |
| Checkpoint selection | until validation improves (search space: {until validation improves, best validation}) |
582 **K.2** **PPO Implementation**
583 To ensure our PPO implementation is robust, and our evaluation reflects its full potential, we have
584 applied a set of well-established techniques and best practices from the literature (Huang et al., 2024;
585 Ivison et al., 2024; Zheng et al., 2023). Below, we outline the key implementation details that were
586 most effective in our experiments:
- Advantage Normalization: After calculating the advantages, we normalize them to have zero mean and unit variance, not only across the batch but also across data-parallel ranks (see the sketch after this list). This normalization step is applied consistently in both our PPO and VinePPO implementations.
- Reward Normalization: We follow Ivison et al. (2024) and do not normalize the rewards, as the reward structure in our task is already well-defined within the range of [0, 1]. Specifically, correct responses are assigned a reward of 1, while incorrect responses receive 0.
- End-of-Sequence (EOS) Trick: As detailed in Appendix J, rewards are only applied at the final token of a response, which corresponds to the EOS token when the response is complete. For responses that exceed the maximum length, we truncate the response to the maximum length and apply the reward to the last token of the truncated sequence. We also experimented with penalizing truncated responses by assigning a negative reward (-1), but this did not lead to performance improvements.
- Dropout Disabling: During the RL tuning phase, we disable dropout across all models. This ensures that the log probabilities remain consistent between different forward passes, thereby avoiding stochastic effects that could hurt training stability.
- Fixed KL Coefficient: We use a constant coefficient for the KL penalty. Although the original PPO implementation for finetuning language models (Ziegler et al., 2019b) utilized an adaptive KL controller, more recent implementations typically do not use this approach (Ouyang et al., 2022; Touvron et al., 2023; Xu et al., 2024).
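A minimal sketch of the advantage-normalization step is given below; the distributed all-reduce is shown only schematically, and the function name is illustrative rather than taken from our codebase:

```python
import torch
import torch.distributed as dist

def normalize_advantages(advantages: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Whiten advantages to zero mean / unit variance over the global batch.
    If torch.distributed is initialized, the statistics are aggregated across
    all data-parallel ranks before normalizing."""
    count = torch.tensor([advantages.numel()], dtype=torch.float32, device=advantages.device)
    total = advantages.sum().reshape(1)
    total_sq = (advantages ** 2).sum().reshape(1)
    if dist.is_available() and dist.is_initialized():
        for t in (count, total, total_sq):
            dist.all_reduce(t, op=dist.ReduceOp.SUM)
    mean = total / count
    var = total_sq / count - mean ** 2
    return (advantages - mean) / torch.sqrt(var.clamp_min(0.0) + eps)
```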
**K.3** **SFT Models**

To ensure a systematic and reproducible evaluation, we create our SFT models $\pi_{\mathrm{ref}}$ by finetuning the _base_ pretrained LLMs (as opposed to their “Instruct” versions) on the training splits of the respective datasets. Specifically, we produce a distinct SFT model for each base LLM (DeepSeekMath 7B and RhoMath 1.1B) on MATH. The base models are finetuned using the Adam optimizer without weight decay. We employ a learning rate warm-up over 6% of the total training steps. Each model is trained for three epochs with a batch size of 64, and the best checkpoint is selected based on validation accuracy. For each SFT model, we conduct a hyperparameter sweep over learning rates in the range {1 × 10⁻⁷, 3 × 10⁻⁷, 1 × 10⁻⁶, 3 × 10⁻⁶, 1 × 10⁻⁵, 3 × 10⁻⁵, 8 × 10⁻⁵, 1 × 10⁻⁴} to ensure optimal performance. We then use these SFT models as the initial checkpoints for training the methods discussed in our paper.
617 **K.4** **Evaluation**
618 We evaluate each method’s performance on the test sets of each dataset. For example, when we
619 report that PPO achieves 42.8% accuracy on the MATH dataset for the DeepSeekMath 7B model,
620 this means the PPO training was initialized with the SFT model specific to DeepSeekMath 7B on the
621 MATH dataset, and accuracy was measured on the MATH test set. Our primary evaluation metric is
622 accuracy, specifically Pass@1, which reflects the percentage of correct responses on the first attempt.
623 This metric is crucial because it represents a realistic user interaction, where the model is expected to
624 deliver a correct answer without the need for multiple tries. For each evaluation, we sample a response
625 from the model for a given prompt, using a maximum token length of 1024 and a temperature of
626 0.35. A response is considered correct if its final answer matches the ground truth final answer, as
627 detailed in Appendix K.1. Furthermore, each accuracy score is averaged over 16 evaluation rounds,
each conducted with different random seeds. This ensures a robust, low-variance assessment of model performance.
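The evaluation loop amounts to the following minimal sketch; `generate_answer` and `answers_match` are hypothetical placeholders for the sampling call (temperature 0.35 in our setup) and the final-answer comparison described in Appendix K.1:

```python
def pass_at_1(problems, generate_answer, answers_match, num_rounds=16, seed0=0):
    """Average Pass@1 accuracy over several sampling rounds with different seeds."""
    round_accs = []
    for r in range(num_rounds):
        correct = 0
        for p in problems:
            answer = generate_answer(p["question"], seed=seed0 + r)
            correct += int(answers_match(answer, p["answer"]))
        round_accs.append(correct / len(problems))
    return sum(round_accs) / num_rounds
```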
630 **K.5** **Hyperparameters**
631 In this section, we present a comprehensive overview of the hyperparameters used in our experiments.
632 **PPO** Finetuning LLMs using PPO is known to be sensitive to hyperparameter selection, and finding
633 the optimal settings is critical for achieving strong performance. To ensure the robustness of our
634 study, we explored hyperparameter values reported in recent studies (Shao et al., 2024; Zheng et al.,
635 2023; Ivison et al., 2024; Huang et al., 2024) and conducted various sweeps across a wide range of
636 values to identify the best configuration for our tasks and models. The full set of hyperparameters,
637 along with their respective search spaces, is detailed in Table 1.
Table 3: Average time spent per training step for different methods and models, measured on the MATH dataset.

| Method | Model | Hardware | Average Training Step Time (s) |
|---|---|---|---|
| PPO | RhoMath 1.1B | 4 × Nvidia A100 80GB | 80 |
| VinePPO | RhoMath 1.1B | 4 × Nvidia A100 80GB | 380 |
| PPO | DeepSeekMath 7B | 8 × Nvidia H100 80GB | 312 |
| VinePPO | DeepSeekMath 7B | 8 × Nvidia H100 80GB | 583 |
638 **VinePPO** We utilized the same hyperparameter setup as in the PPO implementation (Table 1) for
639 VinePPO. The number of MC samples, K, was set to 9 for all experiments.
**RestEM** To ensure a fair comparison, we equalize the number of sampled responses for training between our RestEM run and our PPO runs. Therefore, in each RestEM iteration we sample 8 responses per prompt and train for 8 epochs on the correct responses. To boost RestEM’s performance, we also ran a sweep over other sensible parameters but observed no improvement (Table 2).
645 **K.6** **Compute**
646 All experiments were conducted using multi-GPU training to efficiently handle the computational
647 demands of large-scale models. For the RhoMath 1.1B model, we utilized a node with 4 × Nvidia
648 A100 80GB GPUs to train both PPO and VinePPO. For the larger DeepSeekMath 7B model, we
649 employed a more powerful setup, using a node with 8 × Nvidia H100 80GB GPUs. Additionally,
650 for training DeepSeekMath 7B models with the RestEM approach, we used a node with 4 × Nvidia
651 A100 80GB GPUs. The average training step time for each method on the MATH dataset is presented
652 in Table 3.
653 **K.7** **Software Stack**
654 Both PPO and VinePPO require a robust and efficient implementation. For model implementation,
655 we utilize the Huggingface library. Training is carried out using the DeepSpeed distributed training
656 library, which offers efficient multi-GPU support. Specifically, we employ DeepSpeed ZeRO stage 0
657 (vanilla data parallelism) for RhoMath 1.1B and ZeRO stage 2 (shared optimizer states and gradients
658 across GPUs) for DeepSeekMath 7B . For trajectory sampling during RL training, we rely on the
659 vLLM library (Kwon et al., 2023), which provides optimized inference for LLMs. Additionally,
660 VinePPO leverages vLLM to generate Monte Carlo samples for value estimation. This software
661 stack ensures that our experiments are both efficient and reproducible. For instance, during VinePPO
662 training, we achieve an inference speed of up to 30K tokens per second using 8 × Nvidia H100 GPUs
663 with the DeepSeekMath 7B model.
664 **K.8** **Reproducibility**
665 In this study, all experiments were conducted using open-source libraries, publicly available datasets,
666 and open-weight LLMs. To ensure full reproducibility, we will release both Singularity and Docker
667 containers, pre-configured with all dependencies and libraries, enabling our experiments to be run on
668 any machine equipped with NVIDIA GPUs, now or in the future. Additionally, we will make our
669 [codebase publicly available on GitHub at https://www.omitted.link.](https://www.omitted.link)
# VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment
**Amirhossein Kazemnejad[∗]** [1], Milad Aghajohari[∗] [1], Eva Portelance[1][,][6],
**Alessandro Sordoni[1][,][2], Siva Reddy[1][,][3][,][4], Aaron Courville[†][ 1][,][4][,][5], Nicolas Le Roux[†][ 1][,][4]**
1Mila 2Microsoft Research 3McGill University
4Canada CIFAR AI Chair 5Université de Montréal 6HEC Montréal
_{amirhossein.kazemnejad,aghajohm}@mila.quebec_
ABSTRACT
Large language models (LLMs) are increasingly applied to complex reasoning
tasks that require executing several complex steps before receiving any reward.
Properly assigning credit to these steps is essential for enhancing model performance. Proximal Policy Optimization (PPO), a state-of-the-art reinforcement
learning (RL) algorithm used for LLM finetuning, employs value networks to
tackle credit assignment. However, value networks face challenges in predicting the expected cumulative rewards accurately in complex reasoning tasks, often
leading to high-variance updates and suboptimal performance. In this work, we
systematically evaluate the efficacy of value networks and reveal their significant
shortcomings in reasoning-heavy LLM tasks, showing that they barely outperform
a random baseline when comparing alternative steps. To address this, we propose
VinePPO, a straightforward approach that leverages the flexibility of language
environments to compute unbiased Monte Carlo-based estimates, bypassing the
need for large value networks. Our method consistently outperforms PPO and
other RL-free baselines across MATH and GSM8K datasets with fewer gradient
updates (up to 9x), less wall-clock time (up to 3.0x). These results emphasize the
importance of accurate credit assignment in RL finetuning of LLM and demonstrate VinePPO’s potential as a superior alternative[1].
1 INTRODUCTION
Large language models (LLMs) are increasingly used for tasks requiring complex reasoning, such as
solving mathematical problems (OpenAI, 2024), navigating the web (Zhou et al., 2024), or editing
large codebases (Jimenez et al., 2024). In these settings, LLMs often engage in extended reasoning steps, executing multiple actions to arrive at a solution. However, not all steps are equally
important—some contribute significantly, while others are irrelevant or detrimental. For example,
in Figure 1.a, only step s2 provides a key insight. Indeed, most reasoning steps generated by a
model do not affect the chance of it solving the problem (Figure 1.b). Identifying the contribution
of each action is crucial for improving model performance. However, this is inherently difficult due
to the significant delay between actions and their eventual effect. This issue, known as the credit
_assignment problem, is a core challenge in reinforcement learning (RL, Sutton and Barto 1998)._
Proximal Policy Optimization (PPO, Schulman et al. 2017; Ouyang et al. 2022), a state-of-the-art
algorithm for RL-based finetuning of LLMs (Xu et al., 2024; Ivison et al., 2024), tackles credit
assignment using a value network (or critic). This network, typically a separate model initialized
from a pretrained checkpoint, is trained during PPO finetuning to estimate the expected cumulative
rewards (or value) of an intermediate action. In Figure 1.b, an ideal value network would assign
high value to step s2 and subsequent steps, where the model predicted a critical action. PPO uses
these value estimates to measure the advantage of each action and update the model accordingly.
_∗Equal contribution._ _† Equal advising._
[1Code available at https://github.com/McGill-NLP/VinePPO](https://github.com/McGill-NLP/VinePPO)
Prompt (s0): Let a and b be nonzero real numbers such that (2 − 7i)(a + bi) is pure imaginary. Find a/b. [p̂(correct|s:t) = 0.4]
Response:
s1: We can expand the left-hand side to get [0.4]
s2: (2 − 7i)(a + bi) = (2a + 7b) + (−7a + 2b)i. [**1.0**]
s3: This is pure imaginary if and only if the real part is 0, i.e. [1.0]
s4: 2a + 7b = 0. [1.0]
s5: Then a = −(7/2)b, so a/b = −7/2. [1.0]

Figure 1: (Left) A response generated by the model. The notation p̂(correct|s:t) represents the estimated probability of successfully solving the problem at step t. Here, only step s2 is critical; after this, the model completes the solution correctly. (Right) The delta in probability of successful completion between response steps, shown for DeepSeekMath 7B and RhoMath 1.1B. Most steps show little or no advantage over the preceding step.
Accurately modeling value—predicting future rewards from an incomplete response—requires the
value network to understand both the space of correct solutions (the very task the policy model is
trying to learn) and predict the model’s future behavior, both of which are inherently challenging. In
fact, there are hints in the literature that standard PPO implementations for LLMs have inaccurate
value estimations. Ahmadian et al. (2024) and Trung et al. (2024) find that value networks often
serve best as just a baseline in policy gradient[2]. Shao et al. (2024) show that the value network can
be replaced by averaging rewards of responses to a given problem without degradation in performance. Since errors in value estimation can lead to poor credit assignment and negatively impact
convergence and performance (Greensmith et al., 2001), a natural question to ask is: how accurately
_do value networks actually perform during LLM finetuning? If we could improve credit assignment,_
to what extent would it enhance LLM performance? While recent studies (Hwang et al., 2024; Setlur
et al., 2024) have begun to highlight the importance of identifying incorrect reasoning steps and incorporating them via ad-hoc mechanisms in “RL-free” methods (Rafailov et al., 2023), the broader
question of how improving credit assignment might boost RL fine-tuning for LLMs remains open.
In this work, we evaluate the standard PPO pipeline in mathematical reasoning tasks across various
model sizes. We find that value networks consistently provide inaccurate estimates and struggle to
rank alternative steps correctly, suggesting that current PPO finetuning approaches for LLMs operate
without effective credit assignment. To address this issue and illustrate the effect of accurate credit
assignment, we propose VinePPO (Figure 2). Instead of relying on value networks, VinePPO computes unbiased value estimates of intermediate states by using independent Monte Carlo (MC) samples and averaging their respective return. This straightforward modification to PPO takes advantage
of a special property of the language environment: the ability to easily reset to any intermediate state
along the trajectory.
VinePPO consistently outperforms standard PPO and “RL-free” baselines, especially on more challenging datasets. Despite its slower per-iteration speed, it reaches and surpasses PPO’s peak performance with fewer gradient updates, resulting in less wall-clock time and lower KL divergence from
the base model. Our findings highlight the importance of precise credit assignment in LLM finetuning and establishes VinePPO as a straightforward alternative to value network-based approaches.
Our contributions are as follows:
- We demonstrate the suboptimal credit assignment in standard PPO finetuning by analyzing the
value network, showing that it provides inaccurate estimates of intermediate state values and
barely outperforms a random baseline when ranking alternative steps (see Section 7 for details).
2setting the Generalized Advantage Estimation (GAE, Schulman et al. 2016) parameter λ = 1
Figure 2: (a) PPO finetunes the model by adjusting action probabilities based on their advantage, which is primarily guided by the value network’s value estimates, V̂ϕ(x; y<t) = ValNet(x; y<t). (b) VinePPO modifies standard PPO and obtains value estimates by simply resetting to intermediate states and using MC samples, V̂MC(x; y<t) = (1/K) Σk R(τ′k).
- We propose VinePPO, introduced in Section 4, which takes advantage of the flexibility of language as an RL environment to compute unbiased value estimates, eliminating the need for large
value networks and reducing memory requirements (up to 112GB for a 7B LLM).
- VinePPO highlights the significance of credit assignment: It outperforms PPO and other baselines,
especially on more challenging datasets. It achieves PPO’s peak performance with fewer iterations
(up to 9x), less wall-clock time (up to 3.0x), and better KL-divergence trade-off. See Section 6.
2 RELATED WORK
**Credit Assignment in Post-Training of LLM** PPO, as applied in RL from Human Feedback
(RLHF, Ouyang et al. 2022), pioneered RL finetuning of LLMs. However, its computational overhead and hyperparameter sensitivity led to the development of simpler alternatives. RL-free methods
such as DPO (Rafailov et al., 2023) operate in a bandit setting, treating the entire response as a single
action. Similarly, rejection sampling methods like RestEM (Singh et al., 2024) finetune on full high-reward responses. RLOO (Ahmadian et al., 2024) and GRPO (Shao et al., 2024) abandon PPO's
value network, instead using average reward from multiple samples as a baseline. Recent work has
emphasized finer credit assignment, with Hwang et al. (2024) and Setlur et al. (2024) introducing
MC-based methods to detect key errors in reasoning chains for use as ad-hoc mechanisms in DPO.
Our work, by contrast, fully embraces the RL training, with the target of unlocking PPO’s potential. Parallel efforts have also focused on building better verifiers and reward models for per-step
feedback, with recent attempts to automate their data collection using MC rollouts (Ma et al., 2023;
Uesato et al., 2022; Luo et al., 2024; Wang et al., 2024). Our method is orthogonal to these methods,
operating within PPO-based training to optimize a given reward, instead of designing new ones.
**Value Estimation in RL and Monte Carlo Tree Search (MCTS)** Deep RL algorithms are typically categorized into value-based and policy-based methods. Policy-based methods like PPO usually employ critic networks for value prediction. An exception is the “Vine” variant of TRPO
(Schulman et al., 2015), which uses MC samples for state value estimation. The authors, however,
note that the Vine variant is limited to environments that allow intermediate state resets, rare in
typical RL settings[3]. However, language generation – when formulated as RL environment – enables such intermediate reset capabilities. In domains with similar reset capabilities, such as Go and
Chess, MC-heavy methods like AlphaGo (Silver et al., 2016) and AlphaZero (Silver et al., 2017)
have emerged. AlphaGo’s architecture includes a policy, trained using expert moves and self-play,
and a value network that predicts game outcomes. At inference, it employs tree search guided by MC
rollouts and value network to select optimal moves. AlphaZero advances this approach by distilling
MCTS outcomes into the policy. Recent works have adapted AlphaZero’s principles to LLMs, employing similar search techniques during inference to improve responses and during training to find
better trajectories for distillation (Xie et al., 2024; Chen et al., 2024; Wan et al., 2024; Zhang et al.,
3This is reflected in the design of Gym (Towers et al., 2024), which only allows resets to the initial state.
Figure 3: VinePPO outperforms standard PPO and other RL-free baselines on Pass@1 performance on MATH and GSM8K datasets, while also exhibiting scalability across different model sizes.
2024; Hao et al., 2023). While this is a promising direction, our method is not an MCTS approach;
it uses MC samples solely for value estimation during PPO training to improve credit assignment.
3 BACKGROUND
We focus on the RL tuning phase in the RLHF pipeline, following Ouyang et al. (2022); Shao et al.
(2024). In this section, we provide an overview of actor-critic finetuning as implemented in PPO.
**RL Finetuning** In this setup, the policy $\pi_\theta$ represents a language model that generates a response $\mathbf{y} = [y_0, \ldots, y_{T-1}]$ autoregressively given an input $\mathbf{x} = [x_0, \ldots, x_{M-1}]$. The goal of RL finetuning is to maximize the expected undiscounted ($\gamma = 1$) finite-horizon return, while incorporating a KL-divergence constraint to regularize the policy and prevent it from deviating too far from a reference policy $\pi_{\mathrm{ref}}$ (typically the initial supervised finetuned, SFT, model). The objective can be written as:
$$
J(\theta) = \mathbb{E}_{\mathbf{x} \sim \mathcal{D},\, \mathbf{y} \sim \pi_\theta(\cdot \mid \mathbf{x})}\left[ \mathcal{R}(\mathbf{x};\mathbf{y}) \right] - \beta\, \mathrm{KL}[\pi_\theta \,\|\, \pi_{\mathrm{ref}}],
\qquad (1)
$$
where $\mathcal{D}$ is the dataset of prompts, $\mathcal{R}(\mathbf{x};\mathbf{y})$ is the complete sequence-level reward function, and $\beta$ controls the strength of the KL penalty. Note that the policy $\pi_\theta$ is initialized from $\pi_{\mathrm{ref}}$.

**Language Environment as an MDP** Language generation is typically modeled as a token-level Markov Decision Process (MDP) in an actor-critic setting, where each response $\mathbf{y}$ is an episode. The state at time step $t$, $s_t \in \mathcal{S}$, is the concatenation of the input prompt and the tokens generated up to that point: $s_t = \mathbf{x};\mathbf{y}_{<t} = [x_0, \ldots, x_{M-1}, y_0, \ldots, y_{t-1}]$. At each time step, the action $a_t$ corresponds to generating the next token $y_t$ from a fixed vocabulary. The process begins with the initial state $s_0 = \mathbf{x}$, and after each action, the environment transitions to the next state, $s_{t+1} = s_t;[a_t]$, by appending the action $a_t$ to the current state $s_t$. In this case, since states are always constructed by concatenating tokens, the environment dynamics are known and the transition function is _deterministic_, i.e., $P(s_{t+1} \mid s_t, a_t) = 1$. During the generation process, the reward $r_t$ is set to zero for all intermediate actions $a_t$, with the sequence-level reward $\mathcal{R}(\mathbf{x};\mathbf{y})$ only applied at the final step when the model stops generating. A trajectory $\tau = (s_0, a_0, s_1, a_1, \ldots)$ is therefore a sequence of state–action pairs, starting from the input prompt until the terminal state. Finally, we define the cumulative return of a trajectory $\tau$ as $R(\tau) = \sum_{t=0}^{T-1} r_t = r_{T-1} = \mathcal{R}(\mathbf{x};\mathbf{y})$.
Figure 4: Impact of the number of sampled trajectories K for estimating V̂MC(st), evaluated on RhoMath 1.1B models. Increasing the number of rollouts improves task performance consistently.
**Policy Gradient** Given this MDP formulation, policy gradient methods like PPO maximize Equation 1 by repeatedly sampling trajectories and taking a step in the direction of the gradient $g_{\mathrm{pg}} := \nabla_\theta J(\theta)$ to update the parameters. The policy gradient $g_{\mathrm{pg}}$ takes the following form:
$$
g_{\mathrm{pg}} = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A(s_t, a_t) \right], \quad \text{where } s_t = \mathbf{x};\mathbf{y}_{<t},\; a_t = y_t,
\qquad (2)
$$
where $A(s_t, a_t)$ is the advantage function. If $A(s_t, a_t) > 0$, $g_{\mathrm{pg}}$ will increase the probability of action $a_t$ in state $s_t$, and decrease it when $A(s_t, a_t) < 0$. Intuitively, the advantage function quantifies how much better action $a_t$ is compared to average actions taken in state $s_t$ under the policy. Formally, it is defined as:
$$
A(s_t, a_t) = Q(s_t, a_t) - V(s_t) = r_t + \gamma V(s_{t+1}) - V(s_t),
\qquad (3)
$$
where $Q(s_t, a_t)$ is the state–action value and $V(s_t)$ is the per-state value function[4]. The value function, $V(s_t) : \mathcal{S} \to \mathbb{R}$, offers a long-term assessment of how desirable a particular state is under the current policy. Formally, it represents the expected cumulative reward obtained from starting in state $s_t$ and following the policy thereafter[5]: $V(s_t) = \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \mid s_0 = s_t \right]$. PPO uses the same advantage-weighted policy gradient as in Equation 2, but constrains policy updates through clipping to ensure stable training. For full details, see Appendix A.
**Estimating Advantage via Value Networks** In practice, the advantage $A(s_t, a_t)$ is not known beforehand and is typically estimated by first using a value network $\hat{V}_\phi$ to approximate the true _value function_ $V(s_t)$, then substituting the learned values into Equation 3 or alternative methods like GAE (Schulman et al., 2016). The value network is parameterized by $\phi$ and trained alongside the policy network $\pi_\theta$. The training objective for the value network minimizes the mean squared error between the predicted value and the empirical return:
$$
\mathcal{L}_V(\phi) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \tfrac{1}{2}\big(\hat{V}_\phi(s_t) - G_t\big)^2 \right],
\qquad (4)
$$
where $G_t = \sum_{t'=t}^{T-1} r_{t'}$ is the empirical return from state $s_t$. PPO uses the same objective for $\hat{V}_\phi$ but enhances stability by applying clipping during training (see Appendix A.1 for details). In RL-tuning of LLMs, the value network is often initialized using the initial SFT policy $\pi_{\mathrm{ref}}$ (or the reward model when available), with the language modeling head swapped out for a scalar head to predict values (Zheng et al., 2023). This setup leverages the prior knowledge of the pretrained model.
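As an illustration of this design, a value network can be obtained from a pretrained causal LM by swapping the LM head for a scalar head, roughly as sketched below using the Hugging Face `transformers` API; the exact wiring in RLHF libraries differs in detail, so this is a minimal sketch rather than our implementation:

```python
import torch
from transformers import AutoModel

class ValueNetwork(torch.nn.Module):
    """Transformer backbone initialized from the SFT checkpoint,
    with the LM head replaced by a scalar value head."""
    def __init__(self, model_name_or_path: str):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name_or_path)
        hidden_size = self.backbone.config.hidden_size
        self.value_head = torch.nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.value_head(hidden).squeeze(-1)  # one value per token position
```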
4 ACCURATE CREDIT ASSIGNMENT WITH VINEPPO
As outlined in Section 3, a step in the PPO gradient update aims to increase the probability of
better-than-average actions while decreasing the probability of those that perform worse—a process
quantified by the advantage A(st, at). However, the true advantage is generally unknown and must
be estimated, typically by substituting estimates from a value network into Equation 3. As we will
4Such derivation is possible as the language environment is deterministic.
5We drop the dependency on πθ for brevity.
Figure 5: Comparison of the training behavior between VinePPO and PPO. VinePPO demonstrates consistently higher accuracy (as measured on the test set of the MATH dataset) throughout training. Refer to Appendix D for more detailed plots.
elaborate in Section 7, value networks are often inaccurate and result in biased value computation.
Fortunately, the language environment as an MDP (Section 3) offers a useful property that allows for unbiased estimation of $V(s_t)$. Since states are simply concatenated tokens, we can prompt the language model $\pi_\theta$ to generate continuations from any intermediate state. This flexibility allows us to explore alternative future paths from arbitrary points in a generation. Moreover, recent advancements in LLM inference engines (Kwon et al., 2023; Zheng et al., 2024) have dramatically increased the speed of on-the-fly response generation[6]. This computational efficiency makes it feasible to conduct fast environment simulation, opening up unique opportunities for RL training of LLMs. VinePPO uses this property and estimates advantages via MC sampling. It only modifies the way advantages are estimated, leaving the rest of the standard PPO pipeline intact (Figure 2).

We start by estimating the true value $V(s_t)$. Instead of relying on a value network, for any intermediate state $s_t$, we sample $K$ independent trajectories $\tau^k$. The average return across these trajectories serves as the value estimate:
$$
\hat{V}_{\mathrm{MC}}(s_t) := \frac{1}{K} \sum_{k=1}^{K} R(\tau^k), \qquad \text{where } \tau^1, \ldots, \tau^K \sim \pi_\theta(\cdot \mid s_t).
\qquad (5)
$$
This is an MC estimate of $V(s_t) = \mathbb{E}\left[ R(\tau) \mid s_0 = s_t \right]$. Note that these trajectories are not trained on. Once the value $\hat{V}_{\mathrm{MC}}(s_t)$ is computed, we estimate the advantage of each action using Equation 3:
$$
\hat{A}_{\mathrm{MC}}(s_t, a_t) := r(s_t, a_t) + \gamma \hat{V}_{\mathrm{MC}}(s_{t+1}) - \hat{V}_{\mathrm{MC}}(s_t).
\qquad (6)
$$
For any K ≥ 1, the policy gradient computed using the advantage estimator _A[ˆ]MC is an unbiased_
estimate of the gradient of expected return gpg. To enhance the efficiency of _A[ˆ]MC, we group states_
within a reasoning step and compute a single advantage, which is assigned to all tokens in that step
(examples in Appendix B). This trades off granularity for efficiency, allowing finer resolution with
more compute, or coarser estimates with limited resources. The parameter K also offers another
trade-off between computational cost (i.e. more MC samples per state) and the variance of the
estimator. As shown in Section 6.1, even K = 1 performs well.
In essence, VinePPO is a straightforward modification to the PPO pipeline, altering only the advantage computation. This minimal adjustment allows us to leverage PPO’s benefits while enabling a
systematic evaluation of the effect of unbiased advantage estimation and improved credit assignment. In the following sections, we compare various aspects such as task performance, computational efficiency, KL divergence, and robustness to shed light on the nature of these approaches.
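Putting Equations 5 and 6 together, the per-step advantage computation can be sketched as below; the `sample_completions` and `reward` callables are placeholders for the rollout engine and the task reward, and step boundaries are assumed to be given, so this is an illustrative sketch rather than our exact pipeline code:

```python
from typing import Callable, List

def vineppo_advantages(
    prompt: str,
    steps: List[str],                       # the response grouped into reasoning steps
    sample_completions: Callable[[str, int], List[str]],
    reward: Callable[[str], float],         # R(x; y), applied to a full response
    K: int = 9,
) -> List[float]:
    """Monte Carlo advantage per reasoning step, with intermediate rewards of zero
    and V_hat estimated from K rollouts (Eqs. 5-6)."""
    def v_hat(prefix: str) -> float:
        rollouts = sample_completions(prefix, K)
        return sum(reward(prefix + r) for r in rollouts) / K

    prefixes = [prompt]
    for step in steps:
        prefixes.append(prefixes[-1] + step)

    values = [v_hat(p) for p in prefixes[:-1]]
    # Fold the terminal reward into the last entry so the final advantage
    # equals R(x; y) - V_hat(s_{T-1}), since V(s_T) = 0 at the terminal state.
    values.append(reward(prefixes[-1]))
    return [values[t + 1] - values[t] for t in range(len(steps))]
```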
5 EXPERIMENTAL SETUP
**Datasets and Pretrained LLMs** We conduct our experiments using LLMs that show strong performance on mathematical reasoning: DeepSeekMath 7B (Shao et al., 2024) and RhoMath 1.1B
6up to 5K tokens/second on a single Nvidia A100 GPU for a 7B LLM loaded in bfloat16.
Figure 6: Task accuracy as a function of KL divergence during training on the MATH dataset. VinePPO achieves higher accuracy, reflecting more efficient credit assignment and focused updates.
(Lin et al., 2024), both of which have been trained on diverse mathematical and natural language
corpora. Having different sized models allows evaluating the effect of scaling. We focus on mathematical reasoning datasets MATH (Hendrycks et al., 2021), consisting of competition-level mathematical problems, and GSM8K (Cobbe et al., 2021), containing simpler grade-school level math
word problems. Both datasets are well-established and present a range of difficulty levels that allow
for comprehensive evaluation. For each dataset, we finetune the base LLMs on its respective training
sets to obtain the initial SFT policy (πref ). In all experiments, we employ full-parameter finetuning
to allow utilization of models’ full capacity (Sun et al., 2023; Biderman et al., 2024).
**Evaluation** We evaluate model performance on the test sets of each dataset, using accuracy
(Pass@1) as our primary metric, which measures the correctness of the final answers produced by
the models. As our baseline, we adopt the standard PPO framework, as commonly implemented for
LLM finetuning (Ouyang et al., 2022; Huang et al., 2024). Additionally, we compare them against
RL-free methods that doesn’t have explicit credit assignment mechanisms: RestEM (Singh et al.,
2024), a form of Iterative Rejection Finetuning (Yuan et al., 2023; Anthony et al., 2017) and DPO[+]
(Pal et al., 2024), variant of DPO with strong performance on reasoning tasks. All methods are
initialized from the same SFT checkpoint to ensure a fair comparison.
**Training Details and Hyperparameters** To ensure standard PPO (and its value network) has a
healthy training and our evaluation reflects its full potential, we first focus our hyperparameter search
on PPO parameters (such as KL penalty coefficient, batch size, minibatch size, GAE λ, number of
epochs per iteration) and apply all well-known techniques and best practices (Huang et al., 2024;
Ivison et al., 2024) in PPO tuning (Refer to Appendix C.2 for the full list). Following previous work
(Pal et al., 2024; Singh et al., 2024), we set the task reward R to be a binary function that only checks
final answer against the ground truth. VinePPO borrows the exact same hyperparameters from PPO
and only modifies the advantage A(st, at) estimation, keeping the rest of the pipeline unchanged.
This allows us to isolate the effect of accurate credit assignment. We found that sampling K = 9
trajectories in _V[ˆ]MC performs well; the effect of varying K is fully analyzed in Section 6.1. For the_
other baseline, we closely follow the original setup while ensuring consistency in training conditions
for a fair comparison. We choose the best checkpoint based on a held-out validation set for all
experiments. Full implementation details, including all hyperparameters and training procedures,
are provided in Appendix C.6.
6 RESULTS
We evaluate the effect of accurate credit assignment on four key measures of model finetuning efficiency and success: task performance, KL divergence, temperature tolerance, and computational
efficiency. Our experimental setup is designed to control for and isolate the impact of credit assignment on each of these measures.
Figure 7: Accuracy vs. wall-clock time for both methods measured on the same hardware (shown only up to PPO’s final performance). Despite VinePPO taking longer per iteration (up to 2x for the 7B and 5x for the 1.1B model), it passes PPO’s peak performance in fewer iterations and less overall time.
6.1 TASK PERFORMANCE
VinePPO consistently outperforms standard PPO throughout training (Figure 5) and other baselines
(Figure 3). More importantly, its performance gap widens on MATH, which is a much more challenging reasoning task. Unlike VinePPO and PPO, DPO+ and RestEM lack any explicit mechanism for credit assignment, opting instead to finetune the model on the full trajectory. Our experiments show that these RL-free methods lag behind both PPO-based methods. For RestEM, the absence of targeted credit assignment likely leads to overfitting (Appendix C.5).
To assess the impact of K, the number of MC samples used to estimate the value, we run an ablation
on RhoMath 1.1B, varying K from 1 to 3 and then to 9. As shown in Figure 4, VinePPO demonstrates improved performance with higher K values, as more MC samples reduce the variance of
the _A[ˆ]MC estimator. Notably, increasing K provides a reliable approach to leveraging additional_
computational resources for better performance.
6.2 KL DIVERGENCE
The RL objective (Equation 1) balances maximizing task performance against constraining deviations from the initial policy $\pi_{\mathrm{ref}}$, measured by KL divergence. We analyze how VinePPO and PPO navigate this trade-off by plotting task accuracy against KL divergence $\mathrm{KL}[\pi_\theta \,\|\, \pi_{\mathrm{ref}}]$ throughout training (Figure 6). Results show VinePPO consistently achieves higher accuracy at the same KL divergence, indicating more efficient use of the “KL budget.” This efficiency stems from VinePPO’s more precise credit assignment. As shown in Figure 1, many advantages are zero, and VinePPO excludes these steps from the loss. By avoiding unnecessary updates on non-contributing tokens, VinePPO reduces non-essential parameter adjustments that would inflate KL. See Appendix D.1 for full results.
6.3 TEMPERATURE TOLERANCE
Figure 8: Test set accuracy during training with higher temperature, presented for DeepSeekMath 7B and the MATH dataset. VinePPO can tolerate higher temperatures.

Sampling temperature is a critical hyperparameter controlling the randomness of sampled trajectories. At higher temperatures the model generates more diverse trajectories, accelerating early training through increased exploration. However, this diversity challenges PPO’s value network, requiring generalization over a wider range of states. We compared VinePPO and PPO using temperatures T ∈ {0.6, 0.8, 1.0} over the initial third of training steps. Figure 8 shows VinePPO consistently benefits from higher temperatures, achieving faster convergence. Conversely, PPO fails to benefit from increased exploration and even diverges at T = 1.0, where trajectories are most diverse.
Figure 9: Distribution of predicted values for each state vs. ground truth (computed using 256 MC samples) during training for DeepSeekMath 7B on the MATH dataset, highlighting the nature of errors. VinePPO achieves much lower Mean Absolute Error (MAE).
6.4 COMPUTATIONAL EFFICIENCY
VinePPO and PPO require different resources: PPO uses a separate value network, requiring twice as much GPU memory (up to 112GB for a 7B LLM, considering both the model and its optimizer); VinePPO, conversely, relies on MC samples. This avoids the value network's memory requirements, but
shifts the computational burden to increased LLM inferences, making VinePPO generally slower
per iteration (up to 5x for RhoMath 1.1B and 2x for DeepSeekMath 7B). However, the effect
of VinePPO’s accurate credit assignment is substantial. Although slower per iteration, VinePPO
achieves PPO’s peak accuracy in fewer gradient steps and less wall-clock time. Figure 7 shows
RhoMath 1.1B and DeepSeekMath 7B require about 3.0x and 1.51x less time and 9x and 2.8x fewer
steps. This improvement occurs despite all hyperparameters being tuned for PPO. Therefore, switching to VinePPO offers a way to enhance performance within the same compute budget and serves as
the only option when memory is constrained.
7 VALUE PREDICTION ANALYSIS
In this section, we explore the underlying reasons for the performance gap between PPO and
VinePPO by closely analyzing the value prediction of both methods. First, we establish a “ground
_truth” value at each reasoning step within trajectories by running many MC samples (256 in our_
case) and averaging the returns. This provides a low-variance reference value. We then compare the
value predictions in both methods against this ground truth. We present the results for DeepSeekMath 7B on the MATH dataset (full analysis with other models and datasets in Appendix D.2).
**Accuracy** Figure 9 presents the distribution of value predictions at each reasoning step. The errors produced by VinePPO and PPO differ significantly. VinePPO’s estimates are unbiased, with
variance peaking at 0.5 and dropping to zero at 0 and 1. PPO’s value network shows high bias, often
misclassifying bad states (ground truth near 0) as good and vice versa. To further visualize accuracy,
we classify a value prediction as “correct” if it falls within 0.05 of the ground truth. The accuracy of
this formulation is shown in Figure 11.a. PPO’s value network starts with low accuracy, gradually
improving to 65%. VinePPO, however, consistently achieves 70-90% accuracy throughout training.
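The accuracy metric reported in Figure 11.a can be computed as in the short sketch below; it is a small illustration of the ±0.05 tolerance rather than a fragment of our analysis code:

```python
def value_prediction_accuracy(predicted, ground_truth, tol=0.05):
    """Fraction of states whose predicted value lies within `tol`
    of the 256-rollout ground-truth value."""
    assert len(predicted) == len(ground_truth)
    hits = sum(abs(p - g) <= tol for p, g in zip(predicted, ground_truth))
    return hits / len(predicted)
```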
**Top Action Identification** In value-based RL, ranking actions correctly is more crucial than absolute value accuracy. While PPO, as a policy gradient method, requires accurate value estimates
to compute meaningful advantages, it is still a compelling question whether PPO’s value network,
despite its bias, can maintain correct action ranking. To investigate, we sample five new next steps
Figure 10: Mean Absolute Error (MAE) of the value predictions at different points of the reasoning chain. The value network in PPO fails to generalize as the reasoning chain progresses, while VinePPO’s value estimates become more accurate as the model becomes more deterministic.
Figure 11: (a) Value prediction accuracy formulated as a classification problem, where a prediction is considered correct if it falls within 0.05 of the ground truth. (b) Accuracy of identifying the top action in a set of five possible next states. VinePPO consistently outperforms the value network.
from the same initial state and evaluate if the method correctly identifies the resulting next state
with the highest ground truth value. As shown in Figure 11.b, PPO’s value network performs near
chance levels for much of the training, with slight improvements over time. In contrast, VinePPO
consistently identifies the top action with high accuracy throughout training.
**Error Per Reasoning Step** To understand value computation mechanisms, we visualize the prediction error at each reasoning step within a trajectory. As shown in Figure 10, PPO’s estimation
error increases as reasoning progresses. We hypothesize this occurs because early steps have lower
diversity and resemble training data more, allowing the value network to rely on memorization.
Later, as the space of states becomes much larger, states become unfamiliar and the network struggles to generalize. VinePPO's prediction error decreases as reasoning progresses. We attribute this to the model becoming more deterministic in later steps as it conditions on a longer context.
This determinism enables more accurate estimates from the same number of MC samples.
8 DISCUSSION
Accurate credit assignment has profound implications for the performance of RL tuning of LLMs.
As we’ve demonstrated, standard PPO, despite outperforming most RL-free baselines, suffers from
suboptimal value estimation. More importantly, its scaling behavior is concerning; PPO struggles
with increasingly diverse trajectories and tends to perform worse as tasks become more complex.
VinePPO, on the other hand, is a viable alternative. As shown in Section 6.4, it offers lower memory requirements and better performance with the same computational budget. VinePPO could also
be a particularly attractive option for frontier LLMs as even doubling the post-training compute is
negligible compared to their pre-training costs (Ouyang et al., 2022)[7]. Given the major investments
in pre-training compute and data collection of these models, it is imperative for model developers
to employ post-training methods that provide more accurate updates, avoiding the high-variance
adjustments caused by inferior credit assignment. Additionally, VinePPO offers a straightforward
7For example, InstructGPT used nearly 60 times more compute for pre-training (Ouyang et al., 2022).
-----
scaling axis: increasing the number of MC samples directly enhances performance with additional
compute. Unlike recent approaches that focus on increasing inference-time compute to boost performance (OpenAI, 2024; Bansal et al., 2024), VinePPO’s training compute is amortized over all
future inferences. Note that the computational workload of VinePPO is highly parallelizable with
linear scalability, making it well-suited for large-scale training.
The unique properties of the language environment are what enable VinePPO to be a viable credit assignment option; it may have limited practical use in traditional deep RL policy gradient methods.
This suggests that adapting RL techniques to LLMs requires careful consideration and perhaps a
reevaluation of underlying assumptions. Overall, our work highlights the potential of well-tuned RL
finetuning strategies with proper credit assignment, and we hope it encourages further research into
optimizing RL post-training pipelines for LLMs.
ACKNOWLEDGEMENTS
We thank Matheus Pereira for his efforts in facilitating experimentation. AC and NR are supported by a CIFAR AI Chair. SR is supported by a Facebook CIFAR AI Chair and the NSERC Discovery Grant program. We thank the Mila IDT team and the Digital Research Alliance of Canada for the compute provided for experimentation.
REFERENCES
Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin,
Ahmet Üstün, and Sara Hooker. 2024. [Back to Basics: Revisiting REINFORCE-style Optimiza-](https://doi.org/10.18653/V1/2024.ACL-LONG.662)
[tion for Learning from Human Feedback in LLMs. In Proceedings of the 62nd Annual Meeting](https://doi.org/10.18653/V1/2024.ACL-LONG.662)
_of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, pages_
12248–12267, Bangkok, Thailand. Association for Computational Linguistics.
[Thomas Anthony, Zheng Tian, and David Barber. 2017. Thinking Fast and Slow with Deep Learning](https://proceedings.neurips.cc/paper/2017/hash/d8e1344e27a5b08cdfd5d027d9b8d6de-Abstract.html)
[and Tree Search. In Advances in Neural Information Processing Systems 30: Annual Conference](https://proceedings.neurips.cc/paper/2017/hash/d8e1344e27a5b08cdfd5d027d9b8d6de-Abstract.html)
_on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA,, pages_
5360–5370, USA.
Hritik Bansal, Arian Hosseini, Rishabh Agarwal, Vinh Q. Tran, and Mehran Kazemi. 2024.
[Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-optimal Sampling. CoRR,](https://doi.org/10.48550/ARXIV.2408.16737)
abs/2408.16737.
Dan Biderman, Jose Javier Gonzalez Ortiz, Jacob Portes, Mansheej Paul, Philip Greengard, Connor
Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, and John P.
[Cunningham. 2024. LoRA Learns Less and Forgets Less. CoRR, abs/2405.09673.](https://doi.org/10.48550/ARXIV.2405.09673)
[Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. 2024. AlphaMath Almost Zero: process](https://doi.org/10.48550/ARXIV.2405.03553)
[Supervision without process. CoRR, abs/2405.03553.](https://doi.org/10.48550/ARXIV.2405.03553)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
[Schulman. 2021. Training Verifiers to Solve Math Word Problems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
[Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. 2001. Variance Reduction Techniques for](https://proceedings.neurips.cc/paper/2001/hash/584b98aac2dddf59ee2cf19ca4ccb75e-Abstract.html)
[Gradient Estimates in Reinforcement Learning. In Advances in Neural Information Processing](https://proceedings.neurips.cc/paper/2001/hash/584b98aac2dddf59ee2cf19ca4ccb75e-Abstract.html)
_Systems 14 [Neural Information Processing Systems: Natural and Synthetic, NIPS 2001, Decem-_
_ber 3-8, 2001, pages 1507–1514, Vancouver, British Columbia, Canada. MIT Press._
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu.
[2023. Reasoning with Language Model is Planning with World Model. In Proceedings of the](https://doi.org/10.18653/V1/2023.EMNLP-MAIN.507)
_2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, pages_
8154–8173, Singapore. Association for Computational Linguistics.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
[and Jacob Steinhardt. 2021. Measuring Mathematical Problem Solving With the MATH Dataset.](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks
_1, NeurIPS Datasets and Benchmarks 2021._
-----
Shengyi Huang, Michael Noukhovitch, Arian Hosseini, Kashif Rasul, Weixun Wang, and Lewis
[Tunstall. 2024. The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR](https://doi.org/10.48550/ARXIV.2403.17031)
[Summarization. CoRR, abs/2403.17031.](https://doi.org/10.48550/ARXIV.2403.17031)
[Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, and Minjoon Seo. 2024. Self-](https://doi.org/10.48550/ARXIV.2404.10346)
[explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-](https://doi.org/10.48550/ARXIV.2404.10346)
[grained Rewards. CoRR, abs/2404.10346.](https://doi.org/10.48550/ARXIV.2404.10346)
Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert,
[Noah A. Smith, Yejin Choi, and Hannaneh Hajishirzi. 2024. Unpacking DPO and PPO: Dis-](https://doi.org/10.48550/ARXIV.2406.09279)
[entangling Best Practices for Learning from Preference Feedback. CoRR, abs/2406.09279.](https://doi.org/10.48550/ARXIV.2406.09279)
Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R.
[Narasimhan. 2024. SWE-bench: Can Language Models Resolve Real-world Github Issues? In](https://openreview.net/forum?id=VTF8yNQM66)
_The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria._
OpenReview.net.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph
[Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient Memory Management for Large Language](https://doi.org/10.1145/3600006.3613165)
[Model Serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems](https://doi.org/10.1145/3600006.3613165)
_Principles, SOSP 2023, pages 611–626, Koblenz, Germany. ACM._
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay V.
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam
[Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving Quantitative Reasoning Problems](http://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html)
[with Language Models.](http://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html) In Advances in Neural Information Processing Systems 35: Annual
_Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA,_
USA.
Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan
[Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2024. Let’s Verify Step by Step. In](https://openreview.net/forum?id=v8L0pN6EOi)
_The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria._
OpenReview.net.
Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu
[Yang, Jian Jiao, Nan Duan, and Weizhu Chen. 2024. Rho-1: Not All Tokens Are What You](https://doi.org/10.48550/ARXIV.2404.07965)
[Need. CoRR, abs/2404.07965.](https://doi.org/10.48550/ARXIV.2404.07965)
Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun
Zhu, Lei Meng, Jiao Sun, and Abhinav Rastogi. 2024. [Improve Mathematical Reasoning in](https://doi.org/10.48550/ARXIV.2406.06592)
[Language Models by Automated Process Supervision. CoRR, abs/2406.06592.](https://doi.org/10.48550/ARXIV.2406.06592)
Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia Yang.
[2023. Let’s reward step by step: Step-level reward model as the Navigators for Reasoning. CoRR,](https://doi.org/10.48550/ARXIV.2310.10080)
abs/2310.10080.
[OpenAI. 2024. OpenAI o1 System Card.](https://api.semanticscholar.org/CorpusID:272684752)
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser
Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan
[Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feed-](http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html)
[back. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural](http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html)
_Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA._
Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, and Colin White.
[2024. Smaug: Fixing Failure Modes of Preference Optimisation with DPO-positive. CoRR,](https://doi.org/10.48550/ARXIV.2402.13228)
abs/2402.13228.
[Qwen. 2024. Qwen2.5-Math: The world’s leading open-sourced mathematical LLMs. https:](https://qwenlm.github.io/blog/qwen2.5-math/)
[//qwenlm.github.io/blog/qwen2.5-math/. Accessed: 2024-09-23.](https://qwenlm.github.io/blog/qwen2.5-math/)
-----
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and
[Chelsea Finn. 2023. Direct Preference Optimization: Your Language Model is Secretly a Re-](http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html)
[ward Model. In Advances in Neural Information Processing Systems 36: Annual Conference on](http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html)
_Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA._
[John Schulman. 2020. Notes on the KL-divergence Approximation. http://joschu.net/](http://joschu.net/blog/kl-approx.html)
[blog/kl-approx.html. Accessed: 2024-09-23.](http://joschu.net/blog/kl-approx.html)
[John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. 2015. Trust](http://proceedings.mlr.press/v37/schulman15.html)
[Region Policy Optimization. In Proceedings of the 32nd International Conference on Machine](http://proceedings.mlr.press/v37/schulman15.html)
_Learning, ICML 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 1889–_
1897, Lille, France. JMLR.org.
[John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, and Pieter Abbeel. 2016. High-](http://arxiv.org/abs/1506.02438)
[dimensional Continuous Control Using Generalized Advantage Estimation. In 4th International](http://arxiv.org/abs/1506.02438)
_Conference on Learning Representations, ICLR 2016, Conference Track Proceedings, San Juan, Puerto Rico._
[John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal](http://arxiv.org/abs/1707.06347)
[Policy Optimization Algorithms. CoRR, abs/1707.06347.](http://arxiv.org/abs/1707.06347)
Amrith Setlur, Saurabh Garg, Xinyang Geng, Naman Garg, Virginia Smith, and Aviral Kumar. 2024.
[RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-fold.](https://doi.org/10.48550/ARXIV.2406.14532)
_CoRR, abs/2406.14532._
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li,
[Y. Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the Limits of Mathematical Reasoning in](https://doi.org/10.48550/ARXIV.2402.03300)
[Open Language Models. CoRR, abs/2402.03300.](https://doi.org/10.48550/ARXIV.2402.03300)
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander
Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap,
[Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. Mastering the](https://doi.org/10.1038/NATURE16961)
[game of Go with deep neural networks and tree search. Nat., 529(7587):484–489.](https://doi.org/10.1038/NATURE16961)
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez,
Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, Karen
[Simonyan, and Demis Hassabis. 2017. Mastering Chess and Shogi by Self-play with a General](http://arxiv.org/abs/1712.01815)
[Reinforcement Learning Algorithm. CoRR, abs/1712.01815.](http://arxiv.org/abs/1712.01815)
Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J. Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron T. Parisi, Abhishek Kumar, Alexander A. Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Fathy Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp, Lechao
Xiao, Maxwell L. Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi
Qian, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, and Noah Fiedel.
[2024. Beyond Human Data: Scaling Self-training for Problem-solving with Language Models.](https://openreview.net/forum?id=lNAyUngGFK)
_Transactions on Machine Learning Research, 2024._
[Xianghui Sun, Yunjie Ji, Baochang Ma, and Xiangang Li. 2023. A Comparative Study between Full-](https://doi.org/10.48550/ARXIV.2304.08109)
[parameter and LoRA-based Fine-tuning on Chinese Instruction Data for Instruction Following](https://doi.org/10.48550/ARXIV.2304.08109)
[Large Language Model. CoRR, abs/2304.08109.](https://doi.org/10.48550/ARXIV.2304.08109)
Richard S. Sutton and Andrew G. Barto. 1998. [Introduction to Reinforcement Learning](https://api.semanticscholar.org/CorpusID:261579713).
[Richard S. Sutton, David A. McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy Gradient](http://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation)
[Methods for Reinforcement Learning with Function Approximation. In Advances in Neural In-](http://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation)
_formation Processing Systems 12, [NIPS Conference, pages 1057–1063, Denver, Colorado, USA._
The MIT Press.
-----
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher,
Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy
Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
[et al. 2023. Llama 2: Open Foundation and Fine-tuned Chat Models. CoRR, abs/2307.09288.](https://doi.org/10.48550/ARXIV.2307.09288)
Mark Towers, Ariel Kwiatkowski, Jordan Terry, John U Balis, Gianluca De Cola, Tristan Deleu,
Manuel Goulão, Andreas Kallinteris, Markus Krimmel, Arjun KG, et al. 2024. Gymnasium: A
standard interface for reinforcement learning environments. arXiv preprint arXiv:2407.17032.
[Luong Quoc Trung, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. 2024. ReFT:](https://doi.org/10.18653/V1/2024.ACL-LONG.410)
[Reasoning with Reinforced Fine-tuning. In Proceedings of the 62nd Annual Meeting of the As-](https://doi.org/10.18653/V1/2024.ACL-LONG.410)
_sociation for Computational Linguistics (Volume 1: Long Papers), ACL 2024, pages 7601–7614,_
Bangkok, Thailand. Association for Computational Linguistics.
Jonathan Uesato, Nate Kushman, Ramana Kumar, H. Francis Song, Noah Y. Siegel, Lisa Wang,
[Antonia Creswell, Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems with](https://doi.org/10.48550/ARXIV.2211.14275)
[process- and outcome-based feedback. CoRR, abs/2211.14275.](https://doi.org/10.48550/ARXIV.2211.14275)
Ziyu Wan, Xidong Feng, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, and
[Jun Wang. 2024. AlphaZero-like Tree-search can Guide Large Language Model Decoding and](https://openreview.net/forum?id=C4OpREezgj)
[Training. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna,](https://openreview.net/forum?id=C4OpREezgj)
Austria. OpenReview.net.
Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, and Zhifang
[Sui. 2024. Math-shepherd: Verify and reinforce llms step-by-step without human annotations.](http://arxiv.org/abs/2312.08935)
_CoRR, abs/2312.08935._
Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P. Lillicrap, Kenji Kawaguchi,
[and Michael Shieh. 2024. Monte Carlo Tree Search Boosts Reasoning via Iterative Preference](https://doi.org/10.48550/ARXIV.2405.00451)
[Learning. CoRR, abs/2405.00451.](https://doi.org/10.48550/ARXIV.2405.00451)
Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu,
[and Yi Wu. 2024. Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study. In](https://openreview.net/forum?id=6XH8R7YrSk)
_Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria. Open-_
Review.net.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. 2023.
[Scaling Relationship on Learning Mathematical Reasoning with Large Language Models. CoRR,](https://doi.org/10.48550/ARXIV.2308.01825)
abs/2308.01825.
[Dan Zhang, Sining Zhoubian, Yisong Yue, Yuxiao Dong, and Jie Tang. 2024. ReST-MCTS*: LLM](https://doi.org/10.48550/ARXIV.2406.03816)
[Self-training via Process Reward Guided Tree Search. CoRR, abs/2406.03816.](https://doi.org/10.48550/ARXIV.2406.03816)
Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao,
[Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng. 2024. Sglang:](http://arxiv.org/abs/2312.07104)
[Efficient execution of structured language model programs. CoRR, abs/2312.07104.](http://arxiv.org/abs/2312.07104)
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin,
Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu,
Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun,
[Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, and Xuanjing Huang. 2023. Secrets of RLHF in](https://doi.org/10.48550/ARXIV.2307.04964)
[Large Language Models Part I: PPO. CoRR, abs/2307.04964.](https://doi.org/10.48550/ARXIV.2307.04964)
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng,
Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, and Graham Neubig. 2024. [WebArena:](https://openreview.net/forum?id=oKn9c6ytLx)
[A Realistic Web Environment for Building Autonomous Agents. In The Twelfth International](https://openreview.net/forum?id=oKn9c6ytLx)
_Conference on Learning Representations, ICLR 2024, Vienna, Austria. OpenReview.net._
Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul F.
[Christiano, and Geoffrey Irving. 2019. Fine-tuning Language Models from Human Preferences.](http://arxiv.org/abs/1909.08593)
_CoRR, abs/1909.08593._
-----
A REVIEWING PPO
PPO, as used in RL tuning of LLMs, formulates language generation as a token-level MDP (Section 3), where each response $y$ is an episode. The state at time step $t$, $s_t \in \mathcal{S}$, is the concatenation of the prompt and the tokens generated so far: $s_t = x; y_{<t} = [x_0, \ldots, x_{M-1}, y_0, \ldots, y_{t-1}]$. The action $a_t$ corresponds to generating the next token $y_t$ from the model's vocabulary. Given a prompt $x$, an episode of this MDP starts from the initial state $s_0 = x$, and with each action taken, the environment moves to a subsequent state, $s_{t+1} = s_t; [a_t]$, by adding the action $a_t$ to the existing state $s_t$. In the language environment, because states are always formed by concatenating tokens, the environment dynamics are fully known, and the transition function is deterministic, meaning $P(s_{t+1} \mid s_t, a_t) = 1$. Throughout the generation process, the reward $r_t$ is set to zero for all intermediate actions $a_t$, with the sequence-level reward $\mathcal{R}(x; y)$ applied only at the final step when the model stops the generation. That is:
$$r_t = r(s_t, a_t) = \begin{cases} \mathcal{R}(x; y) & \text{if } t = T - 1, \text{ where } s_{t+1} = y \text{ is terminal}, \\ 0 & \text{otherwise}. \end{cases} \quad (7)$$
A trajectory $\tau = (s_0, a_0, s_1, a_1, \ldots)$ thus represents a sequence of state-action pairs that begins at the input prompt and continues until reaching the terminal state. Finally, the cumulative return of a trajectory $\tau$ is defined as $R(\tau) = \sum_{t=0}^{T-1} r_t = r_{T-1} = \mathcal{R}(x; y)$.
The goal of RL tuning is to maximize the expected return of the model's responses to prompts in the dataset, as defined by the reward function $\mathcal{R}$ (Equation 1). PPO, similar to other policy gradient methods, achieves this goal by repeatedly sampling trajectories for a batch of prompts sampled from $\mathcal{D}$ and taking multiple optimization steps in the direction of the gradient $g_{\text{ppo}}$ to update the parameters. The PPO gradient $g_{\text{ppo}}$ is defined as the gradient of the following loss:
$$\mathcal{L}_{\text{ppo}}(\theta) = \mathbb{E}_{\tau \sim \pi_{\theta_k}}\left[\sum_{t=0}^{T-1} \min\left(\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_k}(a_t \mid s_t)}\, A_t^{\theta_k},\ \mathrm{clip}(\theta)\, A_t^{\theta_k}\right) - \beta\, \mathrm{KL}[\pi_\theta \,\|\, \pi_{\text{ref}}]\right] \quad (8)$$
where $\pi_{\theta_k}$ is the policy at the previous iteration, $\epsilon$ is the clipping parameter, $\beta$ is the KL penalty coefficient, $A_t^{\theta_k} = A^{\theta_k}(s_t, a_t)$ is the advantage estimate for policy $\pi_{\theta_k}(a_t \mid s_t)$, and the $\mathrm{clip}(\theta)$ function is:
$$\mathrm{clip}(\theta) = \mathrm{clip}\left(\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_k}(a_t \mid s_t)},\ 1 - \epsilon,\ 1 + \epsilon\right). \quad (9)$$
Note that the KL penalty could also be added to the reward function $\mathcal{R}$. We follow the more recent implementations (Shao et al., 2024; Qwen, 2024), where it is added to the loss function. The KL term can be computed using the following unbiased estimator (Schulman, 2020):
$$\widehat{\mathrm{KL}}(\theta) = \frac{\pi_{\text{ref}}(a_t \mid s_t)}{\pi_\theta(a_t \mid s_t)} - \log \frac{\pi_{\text{ref}}(a_t \mid s_t)}{\pi_\theta(a_t \mid s_t)} - 1, \quad (10)$$
where $\pi_{\text{ref}}$ denotes the reference model (the initial SFT model).
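As a concrete illustration, the estimator in Equation 10 can be computed from per-token log-probabilities as in the following sketch (tensor names are illustrative):

```python
import torch

def kl_penalty(ref_logprobs: torch.Tensor, policy_logprobs: torch.Tensor) -> torch.Tensor:
    """Unbiased per-token estimate of KL[pi_theta || pi_ref] (Schulman, 2020).

    Both inputs hold log pi(a_t | s_t) for the sampled tokens, shape (seq_len,).
    """
    log_ratio = ref_logprobs - policy_logprobs      # log(pi_ref / pi_theta)
    return torch.exp(log_ratio) - log_ratio - 1.0   # ratio - log(ratio) - 1 >= 0
```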
A.1 VALUE NETWORK
In addition to the policy $\pi_\theta$, PPO also trains a separate value network $\hat{V}_\phi$ to obtain an estimate of the true values $V(s_t)$ of states $s_t$. Parameterized by $\phi$, the value network is trained alongside the policy network $\pi_\theta$ using the following loss:
$$\mathcal{L}_{\text{ValNet}}(\phi) = \frac{1}{2}\, \mathbb{E}_{\tau \sim \pi_\theta}\left[\frac{1}{T} \sum_{t=0}^{T-1} \max\left(\left(\hat{V}_\phi(s_t) - G_t\right)^2,\ \left(\mathrm{clip}(\phi) - G_t\right)^2\right)\right] \quad (11)$$
where $\hat{V}_{\phi_k}$ is the value network at the previous iteration, $G_t = \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'}$ is the empirical return from state $s_t$, $\epsilon'$ is a value clipping parameter, and $\mathrm{clip}(\phi)$ is defined as:
$$\mathrm{clip}(\phi) = \mathrm{clip}\left(\hat{V}_\phi(s_t),\ \hat{V}_{\phi_k}(s_t) - \epsilon',\ \hat{V}_{\phi_k}(s_t) + \epsilon'\right). \quad (12)$$
In RL tuning of LLMs, the value network is typically initialized from the initial policy $\pi_{\text{ref}}$ (or the reward model, if available), replacing the language modeling head with a scalar output head to predict values (Zheng et al., 2023). This approach takes advantage of the base model's prior knowledge for value estimation.
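A minimal sketch of this initialization is shown below, assuming a Hugging Face causal LM backbone; the module and argument names are illustrative rather than the exact ones used in our codebase.

```python
import torch.nn as nn
from transformers import AutoModel

class ValueNetwork(nn.Module):
    """Transformer backbone initialized from the SFT policy, with the LM head
    replaced by a scalar head predicting V(s_t) at every token position."""

    def __init__(self, model_name_or_path: str):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name_or_path)
        self.value_head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.value_head(hidden).squeeze(-1)  # shape: (batch, seq_len)
```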
-----
[Figure B.1 content: two MATH examples with step-separated solutions. Example 1 prompt: “If we express 3x² + x − 4 in the form a(x − h)² + k, then what is k?”; the solution completes the square to obtain k = −49/12. Example 2 prompt: “Suppose the roots of the polynomial x² − mx + n are positive prime integers (not necessarily distinct). Given that m < 20, how many possible values of n are there?”; the solution enumerates prime pairs (p, q) with p + q < 20 and counts 18 possible values of n. Step boundaries are marked with †.]
Figure B.1: Examples of solutions separated into their reasoning steps on the MATH dataset. Steps are highlighted using distinct colors. † denotes the reasoning step boundary.
**Advantage Estimation** Once the estimated values $\hat{V}_\phi(s_t)$ are obtained, the advantages $A(s_t, a_t)$ are computed using GAE (Schulman et al., 2016):
$$A(s_t, a_t) \approx \hat{A}^{\mathrm{GAE}}(s_t, a_t) \quad (13)$$
$$= (1 - \lambda)\left(\hat{A}^{(1)}_t + \lambda \hat{A}^{(2)}_t + \lambda^2 \hat{A}^{(3)}_t + \ldots\right) \quad (14)$$
$$= \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l} \quad (15)$$
$$= \sum_{l=0}^{\infty} (\gamma\lambda)^l \left(r_{t+l} + \gamma \hat{V}_\phi(s_{t+l+1}) - \hat{V}_\phi(s_{t+l})\right) \quad (16)$$
where $\delta_t = r_t + \gamma \hat{V}_\phi(s_{t+1}) - \hat{V}_\phi(s_t)$ is the temporal difference error, $\lambda$ is the GAE parameter, and $\gamma$ is the discount factor. Also, we have:
$$\hat{A}^{(k)}_t := \sum_{l=0}^{k-1} \gamma^l \delta_{t+l} = r_t + \gamma r_{t+1} + \cdots + \gamma^{k-1} r_{t+k-1} + \gamma^k \hat{V}_\phi(s_{t+k}) - \hat{V}_\phi(s_t). \quad (17)$$
Adjusting the GAE parameter $\lambda$ allows for a trade-off between bias and variance in the advantage estimates. However, as we discuss in Appendix C.6, we found that $\lambda = 1$ works best in our experiments (similar to the findings of Trung et al. (2024) and Ahmadian et al. (2024)). In this case, GAE simplifies to the following form (assuming $\gamma = 1$): $\hat{A}^{\mathrm{GAE}}(s_t, a_t) = \sum_{t'=t}^{T-1} r_{t'} - \hat{V}_\phi(s_t)$.
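For reference, a minimal sketch of computing GAE from per-step rewards and value estimates is given below; with λ = γ = 1 it reduces to the simplified form above.

```python
def compute_gae(rewards, values, gamma=1.0, lam=1.0):
    """Generalized Advantage Estimation (Eqs. 13-17).

    rewards: list of r_t of length T (zero everywhere except the final step).
    values:  list of V_hat(s_t) of length T + 1, with values[T] = 0 for the terminal state.
    """
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD error delta_t
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages
```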
-----
Prompt
Tim rides his bike back and forth to work for each of his 5
workdays. His work is 20 miles away. He also goes for a
weekend bike ride of 200 miles. If he can bike at 25 mph how
much time does he spend biking a week?
Response
He bikes 20 x 2 = 40 miles each day for work.[†]
So he bikes 40 x 5 = 200 miles for work[†]
That means he bikes a total of 200+200 =400 miles for work[†]
So he bikes a total of 400 / 25=16 hours
#### 16[†]
Prompt
Tobias is buying a new pair of shoes that costs $95. He
has been saving up his money each month for the past three
months. He gets a $5 allowance a month. He also mows lawns
and shovels driveways. He charges $15 to mow a lawn and $7
to shovel. After buying the shoes, he has $15 in change. If he
mows 4 lawns, how many driveways did he shovel?
Response
He saved up $110 total because 95 + 15 = 110[†]
He saved $15 from his allowance because 3 x 5 = 15[†]
He earned $60 mowing lawns because 4 x 15 = 60[†]
He earned $35 shoveling driveways because 110-60-15 = 35[†]
He shoveled 5 driveways because 35 / 7 = 5.
#### 5[†]
Figure B.2: Examples of solutions separated into their reasoning steps on the GSM8K dataset. Steps are highlighted using distinct colors. † denotes the reasoning step boundary.
B REASONING STEP SEPARATION EXAMPLES
In this section, we outline the methodology used to segment solutions into discrete reasoning steps
for the MATH and GSM8K datasets, as illustrated in Figures B.1 and B.2.
For the MATH dataset, we begin by splitting solutions based on clear natural boundaries such as
newline characters or punctuation marks (e.g., periods or commas). Care is taken to avoid splitting
within mathematical expressions, ensuring that mathematical formulas remain intact. After this
initial segmentation, if any resulting step exceeds 100 characters, we further try to divide it by
identifying logical breakpoints, such as equal signs (=) within math mode.
For the GSM8K dataset, we take a simpler approach, segmenting the reasoning steps by newlines alone, as newlines already serve as natural delimiters for this task.
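A minimal sketch of this segmentation heuristic is given below; the regular expressions are illustrative approximations of the rules described above rather than an exact reproduction of our preprocessing code.

```python
import re

def split_gsm8k_steps(solution: str):
    """GSM8K: newlines already delimit reasoning steps."""
    return [line for line in solution.split("\n") if line.strip()]

def split_math_steps(solution: str):
    """MATH: split on sentence boundaries outside math mode, keeping $...$ spans intact."""
    chunks = re.split(r"(\$[^$]*\$)", solution)  # math spans are captured separately
    steps, buffer = [], ""
    for chunk in chunks:
        if chunk.startswith("$"):                     # math span: never split inside it
            buffer += chunk
        else:                                         # plain text: split after '.' or newline
            parts = re.split(r"(?<=[.\n])\s+", chunk)
            for i, part in enumerate(parts):
                buffer += part
                if i < len(parts) - 1:                # a boundary was found here
                    steps.append(buffer)
                    buffer = ""
    if buffer:
        steps.append(buffer)
    # Steps longer than 100 characters are further divided at logical breakpoints
    # such as '=' inside math mode (omitted here for brevity).
    return [s for s in steps if s.strip()]
```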
C EXPERIMENTAL DETAILS
C.1 DATASETS
We focus on mathematical reasoning datasets that require step-by-step solutions and are widely used
to evaluate the reasoning capabilities of LLMs. Below is a brief overview of the datasets used in our
experiments:
**MATH (Hendrycks et al., 2021)** The MATH dataset contains problems from high school math
competitions, covering a wide range of topics such as algebra, geometry, and probability. For our
experiments, we use the OpenAI split provided by Lightman et al. (2024), which consists of 500
problems for testing and 12,500 problems for training. We further divide the training set into 11,500
problems for training and 500 problems for validation. Each problem includes a step-by-step solution, ending in a final answer marked by \boxed{} in the solution (e.g., “...so the smallest possible value of c is $\boxed{\pi}$”). This marking allows for verification of the correctness of model-generated
responses by comparing the final answer to the ground truth. We use the scripts provided by
Lewkowycz et al. (2022), Lightman et al. (2024), and Shao et al. (2024) to extract and compare
the final answers to the ground truth.
**GSM8K (Cobbe et al., 2021)** The GSM8K dataset comprises high-quality grade-school math
problems, requiring basic arithmetic or elementary algebra to solve. Although simpler than the
MATH dataset, GSM8K is still widely used to assess the reasoning capabilities of LLMs. It contains
1,319 problems for testing and 7,473 for training. To create a validation set, we further split the
training set into 7,100 problems for training and 373 for validation. Verifying the correctness of
-----
Table 1: Summary of PPO hyperparameters used in the experiments.

| Parameter | Value |
|---|---|
| **Training** | |
| Optimizer | AdamW |
| Adam Parameters (β1, β2) | (0.9, 0.999) |
| Learning Rate | 1 × 10⁻⁶ |
| Weight Decay | 0.0 |
| Max Global Gradient Norm for Clipping | 1.0 |
| Learning Rate Scheduler | Polynomial |
| Warm Up | 3% of training steps |
| # Train Steps for MATH dataset | 1000 steps (around 8 dataset epochs) |
| # Train Steps for GSM8K dataset | 650 steps (around 8 dataset epochs) |
| **General** | |
| Maximum Response Length | 1024 tokens |
| Maximum Sequence Length for RhoMath 1.1B | 2048 tokens |
| Maximum Sequence Length for DeepSeekMath 7B | 2500 tokens |
| **PPO** | |
| # Responses per Prompt | 8 (Search Space: {8, 16, 32}) |
| # Episodes per PPO Step | 512 (Search Space: {256, 512}) |
| # Prompts per PPO Step | 512/8 = 64 |
| Mini-batch Size | 64 |
| # Inner Epochs per PPO Step | 2 (Search Space: {1, 2}) |
| Sampling Temperature | 0.6 (Search Space: {0.6, 0.8, 1.0}) |
| Discount Factor γ | 1.0 |
| GAE Parameter λ | 1.0 (Search Space: [0.95, 1.0]) |
| KL Penalty Coefficient β | 1e-4 (Search Space: {1e-1, 1e-2, 3e-3, 1e-4}) |
| Policy Clipping Parameter ϵ | 0.2 |
| Value Clipping Parameter ϵ′ | 0.2 |
Table 2: Summary of RestEM hyperparameters used in the experiments.

| Parameter | Value |
|---|---|
| **Training** | |
| Optimizer | AdamW |
| Adam Parameters (β1, β2) | (0.9, 0.999) |
| Learning Rate | 1 × 10⁻⁶ |
| Weight Decay | 0.0 |
| Max Global Gradient Norm for Clipping | 1.0 |
| Learning Rate Scheduler | Polynomial |
| Warm Up | 3% of training steps |
| **RestEM** | |
| # Iterations | 10 |
| # Sampled Responses per Prompt | 8 (Search Space: {8, 32}) |
| Sampling Temperature | 0.6 (Search Space: {0.6, 0.8, 1.0}) |
| Checkpoints every # iteration | 500 steps |
| Checkpoint Selection | until validation improves (Search Space: {until validation improves, best validation}) |
model responses is straightforward, as the final answer is typically an integer, marked by #### in
the solution.
C.2 PPO IMPLEMENTATION
To ensure our PPO implementation is robust, and our evaluation reflects its full potential, we have
applied a set of well-established techniques and best practices from the literature (Huang et al., 2024;
-----
Table 3: Summary of DPO-Positive hyperparameters used in the experiments.

| Parameter | Value |
|---|---|
| **Training** | |
| Optimizer | AdamW |
| Adam Parameters (β1, β2) | (0.9, 0.999) |
| Learning Rate | 1 × 10⁻⁶ |
| Weight Decay | 0.0 |
| Max Global Gradient Norm for Clipping | 1.0 |
| Learning Rate Scheduler | Polynomial |
| Warm Up | 3% of training steps |
| **DPO-Positive** | |
| DPO β | 0.1 for MATH, 0.3 for GSM8K |
| DPO-Positive λ | 50 |
| # Epochs | 3 (Search Space: {3, 8}) |
| # Sampled Responses per Prompt | 64 (Search Space: {8, 64}) |
| # Pairs per Prompt | 64 (Search Space: {8, 64}) |
| Sampling Temperature | 0.6 |
Ivison et al., 2024; Zheng et al., 2023). Below, we outline the key implementation details that were
most effective in our experiments:
- Advantage Normalization: After calculating the advantages, we normalize them to have zero mean and unit variance, not only across the batch but also across data-parallel ranks (see the sketch after this list). This normalization step is applied consistently in both our PPO and VinePPO implementations.
- Reward Normalization: We follow Ivison et al. (2024) and do not normalize the rewards,
as the reward structure in our task is already well-defined within the range of [0, 1]. Specifically, correct responses are assigned a reward of 1, while incorrect responses receive 0.
- End-of-Sequence (EOS) Trick: As detailed in Appendix A, rewards are only applied at
the final token of a response, which corresponds to the EOS token when the response is
complete. For responses that exceed the maximum length, we truncate the response to the
maximum length and apply the reward to the last token of the truncated sequence. We also
experimented with penalizing truncated responses by assigning a negative reward (-1), but
this did not lead to performance improvements.
- Dropout Disabling: During the RL tuning phase, we disable dropout across all models.
This ensures that the log probabilities remain consistent between different forward passes,
thereby avoiding stochastic effects that could hurt training stability.
- Fixed KL Coefficient: We use a constant coefficient for the KL penalty. Although the original PPO implementation for finetuning language models (Ziegler et al., 2019) utilized an adaptive KL controller, more recent implementations typically do not use this approach (Ouyang et al., 2022; Touvron et al., 2023; Xu et al., 2024).
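As referenced in the Advantage Normalization item above, the following is a minimal sketch of whitening advantages across data-parallel ranks with `torch.distributed`; it assumes a process group has already been initialized and is illustrative rather than a verbatim excerpt of our implementation.

```python
import torch
import torch.distributed as dist

def normalize_advantages(advantages: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Whiten advantages to zero mean and unit variance across all data-parallel ranks."""
    # Local sufficient statistics: sum, sum of squares, and element count.
    local_sum = advantages.sum()
    local_sq_sum = (advantages ** 2).sum()
    local_count = torch.tensor(float(advantages.numel()), device=advantages.device)
    stats = torch.stack([local_sum, local_sq_sum, local_count])
    # Aggregate the statistics over every rank before computing mean and variance.
    dist.all_reduce(stats, op=dist.ReduceOp.SUM)
    global_sum, global_sq_sum, global_count = stats.unbind()
    mean = global_sum / global_count
    var = global_sq_sum / global_count - mean ** 2
    return (advantages - mean) / torch.sqrt(var + eps)
```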
C.3 SFT MODELS
To ensure a systematic and reproducible evaluation, we create our SFT models πref by finetuning the
_base pretrained LLMs (as opposed to their “Instruct” version) on the training splits of the respective_
datasets. Specifically, we produce four distinct SFT models: two base LLM (DeepSeekMath 7B and
RhoMath 1.1B ) across two datasets (MATH and GSM8K). The base models are finetuned using
the Adam optimizer without weight decay. We employ a learning rate warm-up over 6% of the total
training steps. Each model is trained for three epochs with a batch size of 64, and the best checkpoint
is selected based on validation accuracy. For each SFT model, we conduct a hyperparameter sweep
over learning rates in the range {1 × 10[−][7], 3 × 10[−][7], 1 × 10[−][6], 3 × 10[−][6], 1 × 10[−][5], 3 × 10[−][5], 8 ×
10[−][5], 1 × 10[−][4]} to ensure optimal performance. We then use these SFT models as the initial
checkpoint for training the methods mentioned in our paper.
-----
C.4 EVALUATION
We evaluate each method’s performance on the test sets of each dataset. For example, when we
report that PPO achieves 42.8% accuracy on the MATH dataset for the DeepSeekMath 7B model,
this means the PPO training was initialized with the SFT model specific to DeepSeekMath 7B on the
MATH dataset, and accuracy was measured on the MATH test set. Our primary evaluation metric is
accuracy, specifically Pass@1, which reflects the percentage of correctly answered problems on the
first attempt. This metric is crucial because it represents a realistic user interaction, where the model
is expected to deliver a correct answer without the need for multiple tries. For each evaluation, we
sample a response from the model for a given prompt, using a maximum token length of 1024 and
a temperature of 0.35. A response is considered correct if its final answer matches the ground truth
final answer, as detailed in Appendix C.1. Furthermore, each accuracy score is averaged over 16 evaluation rounds, each conducted with different random seeds. This ensures a robust, low-variance assessment of model performance.
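A minimal sketch of this evaluation loop is shown below; `sample_response` and `is_correct` are hypothetical stand-ins for the generation and answer-checking utilities described above, and seeding is shown only schematically.

```python
import statistics

def evaluate_pass_at_1(model, test_set, num_rounds=16, temperature=0.35):
    """Average Pass@1 accuracy over several evaluation rounds with different random seeds."""
    round_accuracies = []
    for seed in range(num_rounds):
        correct = 0
        for problem in test_set:
            response = sample_response(model, problem["question"], seed=seed,
                                       temperature=temperature, max_tokens=1024)
            correct += int(is_correct(response, problem["answer"]))
        round_accuracies.append(correct / len(test_set))
    return statistics.mean(round_accuracies), statistics.stdev(round_accuracies)
```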
C.5 BASELINES
**DPO[+]** **(DPO-Positive) (Pal et al., 2024) The original DPO method has a failure mode when the edit**
distance between positive (correct) and negative (incorrect) responses is small. In these cases, the
probability of both responses tends to decrease. This issue is especially common in reasoning and
mathematical tasks, where multiple solution paths may involve similar equations or steps. Although
DPO achieves its goal by reducing the probability of the incorrect response more than the correct
one, it ultimately still lowers the likelihood of generating the correct response. This undermines model performance, making it a failure mode despite partially fulfilling the DPO objective (Pal et al., 2024; Hwang et al., 2024). While previous methods mitigated this issue by maintaining a high
edit distance between positive and negative response pairs, DPO-Positive (Pal et al., 2024) addresses
it more effectively. It introduces an additional term to the DPO objective, penalizing any reduction in
the probability of the correct response below its probability under the reference model. This ensures
that the correct response is not overly suppressed, even when the edit distance is small. The final objective of DPO-Positive is:
$$\mathcal{L}_{\text{DPO-Positive}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\Bigg[\log \sigma\Bigg(\beta \underbrace{\left(\log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)}_{\text{Original DPO term}} - \lambda \cdot \underbrace{\max\left(0,\ \log \frac{\pi_{\text{ref}}(y_w \mid x)}{\pi_\theta(y_w \mid x)}\right)}_{\text{DPO-Positive additional term}}\Bigg)\Bigg] \quad (18)$$
where $\lambda$ is a hyperparameter controlling the weight of the additional term, keeping the probabilities
of correct responses high. We chose DPO-Positive as a baseline due to its strong performance in Setlur et al. (2024).
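To make Equation 18 concrete, a per-example sketch of the DPO-Positive loss is given below, assuming sequence-level log-probabilities have already been computed; this is an illustrative reformulation rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def dpo_positive_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l,
                      beta=0.1, lam=50.0):
    """DPO-Positive loss (Eq. 18) from summed log-probs of chosen (w) and rejected (l) responses."""
    dpo_term = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    # Penalize any drop of the chosen response below its reference-model probability.
    penalty = torch.clamp(ref_logp_w - policy_logp_w, min=0.0)
    return -F.logsigmoid(beta * dpo_term - lam * penalty)
```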
**RestEM (Singh et al., 2024) RestEM is an iterative method where, in each iteration, the base model**
is trained on correct, self-generated responses from the chosen checkpoint of the previous iteration.
RestEM takes gradient steps to maximize this objective until the fine-tuned model’s accuracy drops
on a validation split. The objective of the fine-tuning process is to maximize the log-likelihood of
correct responses. Training the model with a maximum likelihood objective on correct responses is
mathematically equivalent to training the model with REINFORCE (Sutton et al., 1999), without a
baseline, where the entire response is treated as a single action. The reward is 1 when the response
is correct, and 0 otherwise. Specifically, we have:
$$\underbrace{\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot \mid x),\; \mathcal{R}(x; y) = 1}\left[\nabla_\theta \log P_\theta(y \mid x)\right]}_{\text{max. log-likelihood on correct responses}} = \underbrace{\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot \mid x)}\left[\nabla_\theta \log P_\theta(y \mid x)\, \mathcal{R}(x; y)\right]}_{\text{REINFORCE}} \quad (19)$$
Therefore, maximizing log-likelihood on correct responses is equivalent to training with a policy gradient without precise credit assignment, i.e., without advantages for specific actions. In our experiments, we observe the impact of this limitation in both Figure C.3 and Figure C.4, where RestEM overfits on the training data.
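For reference, the overall RestEM procedure can be summarized by the following sketch; `sample_responses`, `is_correct`, and `finetune` are hypothetical placeholders for the corresponding components of our training stack.

```python
def restem(base_model, prompts, num_iterations=10, samples_per_prompt=8):
    """Iterated rejection-sampling finetuning: each iteration trains the *base* model
    on correct responses generated by the previous iteration's model."""
    current_model = base_model
    for _ in range(num_iterations):
        # Collect correct self-generated responses with the current model.
        dataset = []
        for prompt in prompts:
            for response in sample_responses(current_model, prompt, n=samples_per_prompt):
                if is_correct(prompt, response):
                    dataset.append((prompt, response))
        # Restart from the base model, then finetune with maximum likelihood
        # until accuracy on a validation split stops improving.
        current_model = finetune(base_model, dataset, stop_on_validation_drop=True)
    return current_model
```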
-----
[Figure C.3 panels: train and test accuracy vs. global step for RestEM on (a) RhoMath 1.1B on GSM8K, (b) RhoMath 1.1B on MATH, (c) DeepSeekMath 7B on GSM8K, and (d) DeepSeekMath 7B on MATH.]
Figure C.3: Performance comparisons across different models and datasets: (a) RhoMath 1.1B on GSM8K, (b) RhoMath 1.1B on MATH, (c) DeepSeekMath 7B on GSM8K, and (d) DeepSeekMath 7B on MATH. The yellow points are checkpoints chosen based on the RestEM rule. Within each iteration, we train on the generated data of the chosen checkpoint for eight epochs and then choose the first checkpoint at which performance on a validation split drops, following Singh et al. (2024).
C.6 HYPERPARAMETERS
In this section, we present a comprehensive overview of the hyperparameters used in our experiments. It’s important to note that the number of training samples was carefully selected to ensure
that the amount of training data remained consistent across all methods.
**PPO** Finetuning LLMs using PPO is known to be sensitive to hyperparameter selection, and finding the optimal settings is critical for achieving strong performance. To ensure the robustness of our
study, we explored hyperparameter values reported in recent studies (Shao et al., 2024; Zheng et al.,
2023; Ivison et al., 2024; Huang et al., 2024) and conducted various sweeps across a wide range of
values to identify the best configuration for our tasks and models. The full set of hyperparameters,
along with their respective search spaces, is detailed in Table 1.
**VinePPO** We utilized the same hyperparameter setup as in the PPO implementation (Table 1) for
VinePPO. As outlined in Section 5, the number of MC samples, K, was set to 9 for all experiments.
-----
[Figure C.4 panels: test accuracy vs. train accuracy for RhoMath 1.1B and DeepSeekMath 7B on GSM8K and MATH, comparing VinePPO, PPO, DPO+, and RestEM.]
Figure C.4: A scatter plot showing the relationship between achieved training accuracy and test accuracy at various checkpoints throughout training. This plot highlights the dynamics of overfitting
and generalization across different methods. As we progress from no credit assignment to accurate
credit assignment—from RestEM to DPO[+], PPO, and finally VinePPO—generalization improves
and overfitting decreases. In other words, by treating the training dataset as a resource, VinePPO
achieves higher test accuracy per unit of training data consumed. Note that all these are fully trained.
Note that the training accuracy does not reach 100 percent due to several factors, including mechanisms like the KL penalty in DPO[+], PPO, and VinePPO, the reset to the base model in RestEM, or
the absence of any correct self-generated responses for certain questions.
**RestEM** To ensure a fair comparison, we equalize the number of sampled responses for training between our RestEM run and our PPO runs. Therefore, in each RestEM iteration we sample 8 responses per prompt and train for 8 epochs on the correct responses. To enhance RestEM's performance, we also conducted a sweep of other reasonable parameters (Table 2). This included increasing the number of samples to expand the training data and reducing the number of correct responses per question to minimize overfitting. However, we observed no significant improvement.
**DPO[+]** **(DPO-Positive)** We adopted the same hyperparameters as those used by Setlur et al.
(2024). In addition, we conducted a search for the optimal value of β to see if using the same
_β as in our PPO experiments would yield better performance than the values they recommended._
To maintain a fair comparison, we ensured that the number of training samples in our DPO[+] runs
matched those in our PPO run where we trained for eight epochs, with each epoch consisting of
training on eight responses per question. To match this, we generated 64 pairs of positive and negative responses given 64 self-generated responses from the base model (Table 3).
C.7 TRAIN VS. TEST DURING TRAINING
When training on reasoning datasets, the training data can be viewed as a finite resource of learning signals. Algorithms that exhaust this resource through memorization tend to generalize less
effectively on the test set. As we move from RL-free methods or less accurate credit assignment
towards more accurate credit assignment, or full reinforcement learning—from RestEM to DPO,
PPO, and finally VinePPO—the model demonstrates higher test accuracy gains per unit of training
data consumed. This trend is illustrated in Figure C.4.
-----
Table 4: Average time spent per training step for different methods and models, measured on the MATH dataset.

| Method | Model | Hardware | Average Training Step Time (s) |
|---|---|---|---|
| PPO | RhoMath 1.1B | 4 × Nvidia A100 80GB | 80 |
| VinePPO | RhoMath 1.1B | 4 × Nvidia A100 80GB | 380 |
| PPO | DeepSeekMath 7B | 8 × Nvidia H100 80GB | 312 |
| VinePPO | DeepSeekMath 7B | 8 × Nvidia H100 80GB | 583 |
C.8 COMPUTE
All experiments were conducted using multi-GPU training to efficiently handle the computational
demands of large-scale models. For the RhoMath 1.1B model, we utilized a node with 4 × Nvidia
A100 80GB GPUs to train both PPO and VinePPO. For the larger DeepSeekMath 7B model, we
employed a more powerful setup, using a node with 8 × Nvidia H100 80GB GPUs. Additionally, for training DeepSeekMath 7B models with the RestEM approach, we used a node with 4 ×
Nvidia A100 80GB GPUs. The average training step time for each method on the MATH dataset is
presented in Table 4.
C.9 SOFTWARE STACK
Both PPO and VinePPO require a robust and efficient implementation. For model implementation,
we utilize the Huggingface library. Training is carried out using the DeepSpeed distributed training
library, which offers efficient multi-GPU support. Specifically, we employ DeepSpeed ZeRO stage 0
(vanilla data parallelism) for RhoMath 1.1B and ZeRO stage 2 (shared optimizer states and gradients
across GPUs) for DeepSeekMath 7B . For trajectory sampling during RL training, we rely on the
vLLM library (Kwon et al., 2023), which provides optimized inference for LLMs. Additionally,
VinePPO leverages vLLM to generate Monte Carlo samples for value estimation. This software
stack ensures that our experiments are both efficient and reproducible. For instance, during VinePPO
training, we achieve an inference speed of up to 30K tokens per second using 8 × Nvidia H100 GPUs
with the DeepSeekMath 7B model.
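For illustration, generating the Monte Carlo rollouts with vLLM can be sketched as follows; the model path, sampling settings, and `is_correct` helper are placeholders rather than our exact configuration.

```python
from vllm import LLM, SamplingParams

# Load the current policy checkpoint into vLLM (path is a placeholder).
llm = LLM(model="path/to/current-policy-checkpoint")
sampling_params = SamplingParams(n=9, temperature=0.6, max_tokens=1024)

# Each prompt is an intermediate state: the problem plus the solution prefix so far.
intermediate_states = ["<problem text> <solution prefix>"]
outputs = llm.generate(intermediate_states, sampling_params)

# Average the binary rewards of the completions to estimate V(s_t).
for request_output in outputs:
    rewards = [1.0 if is_correct(c.text) else 0.0 for c in request_output.outputs]
    print(sum(rewards) / len(rewards))
```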
C.10 REPRODUCIBILITY
In this study, all experiments were conducted using open-source libraries, publicly available datasets,
and open-weight LLMs. To ensure full reproducibility, we will release both Singularity and Docker
containers, equipped with all dependencies and libraries, enabling our experiments to be run on
any machine equipped with NVIDIA GPUs, now or in the future. Additionally, we will make our
[codebase publicly available on GitHub at https://github.com/McGill-NLP/VinePPO.](https://github.com/McGill-NLP/VinePPO)
D FULL RESULTS
D.1 TRAINING PLOTS
In this section, we present additional training plots for both PPO and VinePPO on the GSM8K
dataset, as shown in Figure D.5. Figure D.6 further illustrates the trade-off between accuracy and
KL divergence, while Figure D.7 highlights the computational efficiency of the models[8].
We observe consistent patterns with the results reported in Section 6. Although the performance
gap for the DeepSeekMath 7B model is narrower on GSM8K, VinePPO still achieves higher accuracy with
significantly lower KL divergence and faster per-iteration time (this happens because responses to
GSM8K problems are typically shorter, making MC estimation quite fast).
8For GSM8K runs of RhoMath 1.1B, different hardware was used, making direct comparison of wall-clock
time not feasible.
-----
[Figure D.5 panels: accuracy (%) vs. training step on GSM8K for RhoMath 1.1B and DeepSeekMath 7B, comparing VinePPO and PPO.]
Figure D.5: Comparison of the training behavior between VinePPO and PPO. VinePPO demonstrates consistently higher accuracy throughout training on the GSM8K dataset. Refer to Figure 5 for the MATH dataset.
[Figure D.6 panels: accuracy (%) vs. KL[πθ ∥ πref] on GSM8K for RhoMath 1.1B and DeepSeekMath 7B, comparing VinePPO and PPO.]
Figure D.6: Task accuracy as a function of KL divergence during training on the GSM8K dataset. VinePPO achieves significantly higher accuracy per unit of KL divergence. Refer to Figure 6 for the MATH dataset.
D.2 VALUE PREDICTION ANALYSIS
In this section, we provide additional plots for value analysis. Specifically, Figures D.8 to D.11 show these plots for the MATH dataset, and Figures D.12 to D.15 for the GSM8K dataset.
Furthermore, we present the prediction error per step in Figures D.16 to D.19.
-----
[Figure D.7 panel: accuracy (%) vs. wall-clock time (hours) for DeepSeekMath 7B on GSM8K, comparing VinePPO and PPO; VinePPO reaches PPO's peak performance roughly 2.2× faster.]
Figure D.7: Accuracy vs. Wall Clock Time for both methods measured on the same hardware
throughout the entire training. Since the responses to GSM8K problems are short, VinePPO is even
faster per-iteration in our setup and it reaches PPO’s peak performance in fewer iterations and less
overall time.
Figure D.8: Distribution of predicted values for each state vs. ground truth (computed using 256
MC samples) during training. MAE denotes the Mean Absolute Error (MAE).
-----
Figure D.9: Distribution of predicted values for each state vs. ground truth (computed using 256
MC samples) during training. MAE denotes the Mean Absolute Error (MAE).
Figure D.10: Distribution of predicted values for each state vs. ground truth (computed using 256
MC samples) during training. MAE denotes the Mean Absolute Error (MAE).
-----
Figure D.11: Distribution of predicted values for each state vs. ground truth (computed using 256
MC samples) during training. MAE denotes the Mean Absolute Error (MAE).
Figure D.12: Distribution of predicted values for each state vs. ground truth (computed using 256
MC samples) during training. MAE denotes the Mean Absolute Error (MAE).
-----
Figure D.13: Distribution of predicted values for each state vs. ground truth (computed using 256
MC samples) during training. MAE denotes the Mean Absolute Error (MAE).
Figure D.14: Distribution of predicted values for each state vs. ground truth (computed using 256
MC samples) during training. MAE denotes the Mean Absolute Error (MAE).
-----
Figure D.15: Distribution of predicted values for each state vs. ground truth (computed using 256
MC samples) during training. MAE denotes the Mean Absolute Error (MAE).
[Figure D.16 panels: value-prediction MAE vs. reasoning progress for PPO (top row) and VinePPO (bottom row) at six checkpoints during training.]
Figure D.16: Visualizing the Mean Absolute Error (MAE) of the value predictions at different points of the reasoning chain, plotted for DeepSeekMath 7B on the MATH dataset.
[Figure D.17 panels: value-prediction MAE vs. reasoning progress for PPO (top row) and VinePPO (bottom row) at six checkpoints during training.]
Figure D.17: Visualizing the Mean Absolute Error (MAE) of the value predictions at different points of the reasoning chain, plotted for DeepSeekMath 7B on the GSM8K dataset.
-----
[Figure D.18 panels: value-prediction MAE vs. reasoning progress for PPO (top row) and VinePPO (bottom row) at six checkpoints during training.]
Figure D.18: Visualizing the Mean Absolute Error (MAE) of the value predictions at different points of the reasoning chain, plotted for RhoMath 1.1B on the MATH dataset.
[Figure D.19 panels: value-prediction MAE vs. reasoning progress for PPO (top row) and VinePPO (bottom row) at six checkpoints during training.]
Figure D.19: Visualizing the Mean Absolute Error (MAE) of the value predictions at different points of the reasoning chain, plotted for RhoMath 1.1B on the GSM8K dataset.
-----
VisAidMath: Benchmarking Visual-Aided Mathematical Reasoning | Although previous research on large language models (LLMs) and large multi-modal models (LMMs) has systematically explored mathematical problem-solving (MPS) within visual contexts, the analysis of how these models process visual information during problem-solving remains insufficient. To address this gap, we present VisAidMath, a benchmark for evaluating the MPS process related to visual information. We follow a rigorous data curation pipeline involving both automated processes and manual annotations to ensure data quality and reliability. Consequently, this benchmark includes 1,200 challenging problems from various mathematical branches, vision-aid formulations, and difficulty levels, collected from diverse sources such as textbooks, examination papers, and Olympiad problems. Based on the proposed benchmark, we conduct comprehensive evaluations on ten mainstream LLMs and LMMs, highlighting deficiencies in the visual-aided reasoning process. For example, GPT-4V only achieves 45.33% accuracy in the visual-aided reasoning task, even with a drop of 2 points when provided with golden visual aids. In-depth analysis reveals that the main cause of deficiencies lies in hallucination regarding the implicit visual reasoning process, shedding light on future research directions in the visual-aided MPS process. | In-depth analysis reveals that the main cause of deficiencies lies in hallucination regarding the implicit visual reasoning process, shedding light on future research directions in the visual-aided MPS process. | ## VisAidMath: Benchmarking Visual-Aided Mathematical Reasoning
**Jingkun Ma** **[1]** **Runzhe Zhan** **[1]** **Derek F. Wong[1][˚]**
**Yang Li[1]** **Di Sun[2]** **Hou Pong Chan** **[3]** **Lidia S. Chao** **[1]**
1NLP2CT Lab, Department of Computer and Information Science, University of Macau
2 Department of Mathematics, University of Macau
3 DAMO Academy, Alibaba Group
```
nlp2ct.{jingkun, runzhe}@gmail.com, [email protected]
[email protected], [email protected]
[email protected], [email protected]
```
#### Abstract
Although previous research on large language models (LLMs) and large multimodal models (LMMs) has systematically explored mathematical problem-solving
(MPS) within visual contexts, the analysis of how these models process visual
information during problem-solving remains insufficient. To address this gap,
we present VisAidMath, a benchmark for evaluating the MPS process related to
visual information. We follow a rigorous data curation pipeline involving both
automated processes and manual annotations to ensure data quality and reliability.
Consequently, this benchmark includes 1,200 challenging problems from various
mathematical branches, vision-aid formulations, and difficulty levels, collected
from diverse sources such as textbooks, examination papers, and Olympiad problems. Based on the proposed benchmark, we conduct comprehensive evaluations
on ten mainstream LLMs and LMMs, highlighting deficiencies in the visual-aided
reasoning process. For example, GPT-4V only achieves 45.33% accuracy in the
visual-aided reasoning task, even with a drop of 2 points when provided with
golden visual aids. In-depth analysis reveals that the main cause of deficiencies
lies in hallucination regarding the implicit visual reasoning process, shedding light
on future research directions in the visual-aided MPS process.
#### 1 Introduction
The formulation of mathematical problems contains numerous elements hidden within the textual dimension. For instance, the transformation between the visual and text modalities effectively enlarges the decision space of large language models (LLMs), thereby increasing the complexity of mathematical problem-solving (MPS).
Although general-purpose mathematical reasoning
remains a weakness of LLMs or large multi-modal
models (LMMs) [1, 2, 3, 4], multi-modal MPS is
˚ Corresponding author
Figure 1: Comparison between VisAidMath and other benchmarks. Our work particularly focuses on the utilization of explicit and implicit visual context during the reasoning process.
-----
(a) Mathematical Branch (b) Visual Aid
Figure 2: Accuracies of all LMMs on the visual-aided mathematical reasoning task across four mathematical branches and six visual aids. The best-performing model, GPT-4V, does not exceed a 60% accuracy score.
a more challenging scenario [5, 6]. Some studies [6, 7] have investigated multi-modal MPS by introducing visual context, but these benchmarks are still limited to the traditional vision-language task paradigm; specifically, they mainly focus on evaluating text-only reasoning steps to solve the problems. The multi-modal capabilities required for MPS extend beyond comprehending input modalities: they encompass the utilization and inference of information from other modalities during the reasoning process, including spatial imagination and spatial reasoning. However, these cross-modality evaluation aspects are rarely taken into account in the evaluation or reference answers of existing benchmarks. Thus, there remains a significant gap between text-only reasoning and cross-modality reasoning, setting up a research space that this work aims to explore, as shown in Figure 1.
To bridge this gap, we focus on visual-aided mathematical reasoning process and construct the
**VisAidMath benchmark. This benchmark consists of 1200 formatted mathematical questions**
incorporating both explicit and implicit visual contexts. The answers to these questions not only
demonstrate the utilization of visual information but also reflect the process of deduction through
visual aids. For instance, the addition of auxiliary lines as visual aids within the input image is
crucial for solving solid geometry questions. To construct this benchmark, we selected 12 data
sources from various educational contexts, including middle school, high school, university, and
mathematical competitions across multiple countries. These sources were categorized into four
mathematical branches: “plane geometry”, “solid geometry”, “analytic geometry”, and “calculus and
functions” based on educational taxonomy. In cases where the questions were in languages other than
English, we utilized machine translation systems to translate them into English text. Subsequently,
professionals were engaged to annotate the questions and answers in a standardized LaTeX format.
To ensure the quality of the benchmark, we implemented an additional quality control procedure to
remove any low-quality samples, resulting in a final set of 1,200 question-answer pairs. To the best of our knowledge, we are the first to introduce and benchmark problem-solving steps that involve the utilization of input visual contexts.
To demonstrate the challenges of the VisAidMath benchmark, we designed two types of reasoning
tasks that evaluate various aspects of capabilities, including: 1) direct visual-aided reasoning, and
2) indirect reasoning with provided visual aids. Each type contains various sub-tasks based on
the input modality. We conducted experiments using ten mainstream LLMs and LMMs, and our
results indicate that all of the aforementioned tasks pose challenges for most models. Even the
state-of-the-art (SOTA) models, such as GPT-4 and GPT-4V, only achieved an average accuracy of
approximately 50% across all tasks, as shown in Figure 2. While GPT-4 and GPT-4V outperformed
other models on each task, the visual aids they generated exhibited only around 5% n-gram similarity
-----
compared to the golden visual aids provided by human professionals. To uncover the underlying
reasons behind the failure of SOTA models in visual-aided reasoning tasks, we conducted a further
analysis of their reasoning steps and annotated specific error sources. We observed a distribution
of different visual-aided reasoning errors and their impact on correctness. Notably, we found that
hallucinations had a critical impact on the final answer. These findings provide valuable insights for
future development of mathematical LLMs and LMMs.
#### 2 VisAidMath
**2.1** **Data Creation**
**Principles** A typical problem within our VisAidMath benchmark comprises four parts: Visual
**Context (C), Question (Q), Visual Aids (V), and Answer (A). The main task involves prompting**
the model to generate visual aids that assist in mathematical reasoning. While the visual context may
be optional, spatial descriptions are included as essential data elements within the question. Given
that many text-based LLMs lack image understanding or generation capabilities, we have additionally
annotated precise captions for both the visual context and the visual aids. This
allows us to extend the evaluation scenarios to models that are constructed with limited modality.
By providing these detailed captions, we are able to accommodate models with varying modality
capabilities.
**Data Sources** In accordance with the aforementioned principles, the VisAidMath benchmark has
been manually collected and annotated using a diverse and balanced set of data sources. Through an
extensive search and careful examination on a case-by-case basis, we discovered that the Chinese
community offers a larger pool of mathematical problems with visual aids across various complexity
levels and mathematical branches compared to other communities. As a result, we primarily collected
data from Chinese sources and subsequently performed machine translation. To ensure a range of
difficulty levels, we categorized the data samples based on their sources into the following categories:
1) Easy: e.g., High School Entrance Examination, 2) Medium: e.g., College Entrance Examination,
and 3) High: e.g., Mathematical Olympiad. Additionally, metadata has been included for further
in-depth analysis, which will be discussed in Appendix B.1.
**Mathematical Categories** To ensure diversity and balance, we manually collected and annotated
a range of categories within the benchmark. The questions primarily belong to four mathematical
branches: Plane Geometry, Solid Geometry, Analytical Geometry, and Calculus and Function.
For visual aids, we introduce the following solution elements: Auxiliary Line, Plane Geometry
_Graph, Solid Geometry Graph, Function Graph, Plane Coordinate System, and Three-Dimensional_
_Coordinate System. Examples from different categorizations can be found in Appendix B.3._
**Construction Pipeline** As depicted in Figure 8, we propose a construction pipeline for the VisAidMath dataset, which incorporates multi-round verification and dynamic quality control based on
feedback. The dataset creation pipeline involves four key roles:
- Administrator: This role assigns daily collection tasks based on the progress and previous
annotation feedback.
- Collector: The collector searches for data that satisfies the assigned collection tasks. The
collected data should be in PDF format and later transformed into LaTeX files using OCR.
- Annotator: The annotator first validates and refines the LaTeX files by comparing the original
PDF files provided by the collector with the transformed LaTeX files. Then, the annotator
performs interactive labeling using our designed tool. To ensure a balanced distribution
across different categories, the annotator regularly analyzes the data distribution and provides
feedback on the current progress and any annotation issues to the collector and administrator.
- Verifier: The verifier is responsible for validating the categorization and data quality. If
labels are not appropriate, they adjust the annotated captions of the context and visual aids.
-----
To prepare the benchmark for wide English research community, we perform machine translation,
data processing, and release preparation. The detailed processes can be found in Appendix D.1, D.2,
and D.3.
**Annotation Details** The quality of annotation plays a crucial role in ensuring the reliability of the
benchmark, and the annotation of metadata significantly impacts the depth and breadth of analysis.
In the annotation process, we instructed the annotators to label various metadata, including the math
_category, source, visual aids’ type, question form, choices, answer form, answer, and language._
Given that the formalization of mathematical problems can vary significantly across different data
sources, we specifically asked the annotators to manually extract the question and answer text from
the LaTeX file to ensure completeness and accuracy. Following the approach used in MathVista
[7], we transformed free-form questions without a purely numerical answer into multiple-choice
or True/False questions. This deterministic transformation allows for a more robust evaluation. To
create precise and comprehensive descriptions of the visual context and visual aids, we provided the
annotators with caption writing templates that were designed to simplify the complexity of caption
writing. For more details, please refer to Appendix C.1. Four Chinese undergraduate students who
majored in STEM during their high school period were carefully selected to form our annotation
team. The collectors and annotators were compensated at a rate aligned with the average price in the local data crowdsourcing market for each data sample they collected or annotated. Furthermore, a graduate
student specializing in mathematics was appointed as the verifier, offering professional annotation
guidance, validation, and refinement throughout the process.
**2.2** **Benchmark Attributes**
The distribution of data sources is presented in Figure 3,
providing a comprehensive overview of the dataset’s origins. Additionally, the mathematical branches within the
dataset exhibit a well-balanced distribution, as depicted
in Appendix B.3.1. This balance enables a broader exploration of diverse mathematical knowledge. It is worth
noting that the distribution of visual aids labels is slightly
imbalanced. This occurs because mathematics problems
often involve overlapping types, and basic elements such
as auxiliary lines are frequently incorporated as necessary
components within more complex visual aids. Further
details on the distribution of visual aid types can be found
in Appendix B.3.2. In terms of question types, VisAidMath encompasses three main formats: Multiple Choice,
True/False, and Free-form. The answer formats encompass integers, decimals, fractions, and choices, ensuring
unambiguous evaluation criteria.
**2.3** **Task Definition**
Figure 3: Distribution of data sources
and difficulty levels.
VisAidMath introduces the concept of visual aids as a
crucial component in bridging the gap between text-centric
reasoning and visual-aided reasoning. As a result, a series of novel tasks are proposed that involve
generating or leveraging visual aids alongside mathematical reasoning to achieve the correct answers.
These tasks are mainly categorized into two formulations:
**Task 1: Vanilla Visual-Aided Reasoning** In this task, the model is required to understand the
visual context (C) and the question (Q). It then generates visual aids (V) and deduces the answer (A)
with the assistance of the generated visual aids. We denote this reasoning process as CQ2VA (Visual
Context + Question → Visual Aids + Answer).
-----
Figure 4: Comparison of different tasks: a) General Reasoning: provide MPS reasoning steps directly. b) Visual-Aided Reasoning: create visual aids that disclose the implicit visual context within the problem, combined with textual reasoning to solve the mathematical problem. c) Reasoning with Provided Visual Aids: solve the mathematical problem based on the given visual aids. Visual-Aided Reasoning requires the model to perform visual reasoning to generate the visual aids.
**Task 2: Reasoning with Provided Visual Aids** In this task, the model is provided with visual
aids (V) along with the visual context (C) and the question (Q). The model uses this information to
deduce the final answer (A). We denote this reasoning process as CQV2A (Visual Context + Question
+ Visual Aids Ñ Answer).
We illustrate the distinctions between these tasks and general visual reasoning in Figure 4. To
accommodate language models that only accept textual inputs, we transcribe the visual input into
textual descriptions as a substitute for visual information. For all transcribed or provided information,
we prefix the corresponding notation with “p”. For example, “pV” denotes the visual aids provided to
the model, and “pC” represents the description of the input image.
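To make these input conventions concrete, the sketch below shows one way such prompts could be assembled from a benchmark sample. It is a minimal illustration, not the released evaluation code: the dictionary keys are assumptions loosely following the metadata in Table 2, and the exact instructions used in the experiments are listed in Appendix F.2.

```python
# Illustrative only: assembling a prompt for one of the task settings defined above.
def build_prompt(sample: dict, task: str) -> str:
    """task is one of 'CQ2VA', 'pCQ2VA', 'CQV2A', or 'pCQpV2A'."""
    parts = []
    if task.startswith("p"):
        # Text-only setting: the visual context image is replaced by its caption (pC).
        parts.append(f"Visual context (description): {sample['visual_context_caption']}")
    else:
        parts.append("Visual context: <image attached>")
    parts.append(f"Question: {sample['question']}")
    if "pV" in task or task.endswith("V2A"):
        # Provided-visual-aids setting: the golden visual-aid caption is given to the model.
        parts.append(f"Visual aids (provided): {sample['visual_aid_caption']}")
        parts.append("Use the provided visual aids to deduce the final answer.")
    else:
        # Vanilla visual-aided reasoning: the model must first generate the visual aids itself.
        parts.append("First describe the visual aids you would construct, then solve the problem.")
    return "\n".join(parts)
```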
#### 3 Experiments
**3.1** **Models**
We conducted experiments on two types of tasks mentioned above, considering various input modalities. We explored the visual-aided reasoning capabilities of both open-source and closed-source
LLMs and LMMs. The models evaluated on the VisAidMath benchmark include: 1) Open-source
LLMs: Llama-2-7B [8], Mistral-7B-Instruct-v0.2 [9]; 2) Closed-source LLMs: GPT-3.5-turbo [10],
GPT-4-turbo [11]; 3) Open-source LMMs: LLaVA-Next-Mistral-7B [12], InternLM-XComposer2VL [13]; 4) Closed-source LMMs: Qwen-VL-Plus [14], Gemini-Pro-Vision [15], Claude-3-Sonnet
[16], GPT-4-Vision [17]. For all open-source models, a temperature of 1.0 was set for decoding.
Because the Gemini-Pro-Vision model requires both a text query and an image input, we utilized
the Gemini-Pro model to handle reasoning in scenarios without visual context.
**3.2** **Evaluation**
**Problem-Solving Correctness** Given that VisAidMath comprises mathematics problems with
deterministic answers, we evaluate the correctness of problem-solving using the Accuracy score
as a fundamental metric. To accurately extract the final answer from model outputs, we follow
the approach of Lu et al. (2023) [7] and employ SOTA LLMs as answer extractors. Specifically,
we selected GPT-4 as the model for answer extraction, as it demonstrated a high success rate in
-----
Table 1: Accuracy scores on the (p)CQ2VA task of VisAidMath. Text-only input corresponds to the pCQ2VA task; multi-modal input corresponds to the CQ2VA task. Abbreviations: ALL → overall accuracy, PLG → plane geometry, SDG → solid geometry, AYG → analytic geometry, CAL → calculus and functions. Visual aid types: AXL → auxiliary line, RTC → rectangular coordinate, THC → rectangular three-dimensional coordinate, PLG → plane geometry graph, SDG → solid geometry graph, FUG → function graph. The highest scores within each chunk and overall are emphasized to facilitate comparison.
| Model | ALL | PLG | SDG | AYG | CAL | AXL | RTC | THC | PLG | SDG | FUG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| *Heuristics Baselines* | | | | | | | | | | | |
| Random Answer | 24.42 | 21.54 | 34.31 | 21.45 | 20.07 | 24.44 | 20.87 | 35.16 | 10.53 | 32.89 | 21.50 |
| Frequent Answer | 40.83 | 28.92 | 50.65 | 40.36 | 44.22 | 32.79 | 47.25 | 74.73 | 20.00 | 47.73 | 44.53 |
| *Large Language Models (LLMs): Text-Only Input* | | | | | | | | | | | |
| Llama2-7B | 26.83 | 21.85 | 34.64 | 30.55 | 20.75 | 26.68 | 25.23 | 39.56 | 11.58 | 30.26 | 26.49 |
| Mistral-7b-Instruct-v0.2 | 27.42 | 27.38 | 30.72 | 27.64 | 23.81 | 27.57 | 28.21 | 28.57 | 11.58 | 27.63 | 26.87 |
| GPT3.5 | 37.58 | 32.31 | 42.16 | 37.45 | 38.78 | 37.56 | 38.30 | 40.66 | 13.68 | 42.11 | 38.20 |
| GPT4 | 51.92 | 41.54 | 52.29 | 50.91 | 63.95 | 45.75 | 54.59 | 60.44 | 23.16 | 53.29 | 61.23 |
| *Large Multimodal Models (LMMs): Text-Only Input* | | | | | | | | | | | |
| LLaVA-Next-Mistral-7B | 23.08 | 21.23 | 22.55 | 25.45 | 23.47 | 22.21 | 23.62 | 25.27 | 8.42 | 26.32 | 25.34 |
| InternLM-XComposer2-VL | 33.17 | 24.62 | 44.12 | 32.36 | 31.97 | 30.40 | 33.03 | 46.15 | 10.53 | 41.45 | 34.17 |
| Qwen-VL-Plus | 34.75 | 30.15 | 43.46 | 33.82 | 31.63 | 34.43 | 34.63 | 48.35 | 21.05 | 44.74 | 32.63 |
| Gemini-Pro-Vision | 38.42 | 31.08 | 48.37 | 31.27 | 42.86 | 34.72 | 37.84 | 49.45 | 18.95 | 51.97 | 39.54 |
| Claude-3-Sonnet | 38.58 | 31.38 | 43.46 | 39.27 | 40.82 | 36.66 | 40.14 | 46.15 | 14.74 | 43.42 | 42.23 |
| GPT4V | 47.00 | 35.08 | 47.06 | 50.55 | 56.80 | 41.43 | 50.69 | 48.35 | 15.79 | 47.37 | 55.66 |
| *Large Multimodal Models (LMMs): Multimodal Input* | | | | | | | | | | | |
| LLaVA-Next-Mistral-7B | 24.58 | 22.77 | 24.18 | 27.64 | 24.15 | 23.55 | 24.54 | 29.67 | 9.47 | 25.00 | 25.91 |
| InternLM-XComposer2-VL | 29.00 | 21.54 | 32.68 | 31.64 | 30.95 | 26.97 | 30.73 | 37.36 | 10.53 | 35.53 | 32.05 |
| Qwen-VL-Plus | 32.00 | 28.62 | 35.95 | 33.45 | 30.27 | 32.34 | 33.49 | 32.97 | 21.05 | 42.11 | 32.05 |
| Gemini-Pro-Vision | 38.33 | 28.92 | 48.69 | 32.73 | 43.20 | 33.68 | 38.07 | 50.55 | 14.74 | 53.95 | 39.73 |
| Claude-3-Sonnet | 37.08 | 27.69 | 41.50 | 39.27 | 40.82 | 33.38 | 40.60 | 46.15 | 14.74 | 41.45 | 42.42 |
| GPT4V | 45.33 | 34.46 | 42.16 | 49.45 | 56.80 | 39.64 | 50.00 | 41.76 | 13.68 | 46.71 | 55.28 |
preliminary experiments with 200 examples. The prompts used to construct the answer extractor are
described in detail in Appendix F.5.
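Once final answers are extracted, the accuracy computation reduces to matching each extracted answer against the gold answer. The sketch below is illustrative rather than the exact evaluation code; the normalization rules (upper-casing choice labels, comparing decimals at three decimal places) are assumptions based on the preprocessing described in Appendix D.2.

```python
# Illustrative accuracy computation over LLM-extracted answers.
def accuracy(extracted_answers, gold_answers):
    def norm(a: str) -> str:
        a = a.strip().upper()
        try:
            return f"{float(a):.3f}"  # numeric answers compared at three decimal places
        except ValueError:
            return a  # choice labels ("A", "TRUE") and fractions pass through unchanged
    correct = sum(norm(p) == norm(g) for p, g in zip(extracted_answers, gold_answers))
    return correct / len(gold_answers)
```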
**Visual-Aided Reasoning Quality** We also evaluate the quality of visual aids and textual reasoning
steps generated by different models. To measure the similarity between the model output and the
reference, we calculate the N-gram overlap. In our experiments, we consider two kinds of similarity:
1) Similarity of visual aids; 2) Similarity of reasoning results (i.e., model answers). For both cases, we
assess the 4-gram overlap to determine the similarity between the model hypothesis and the reference.
Formulas of each similarity score can be found in Appendix G.3.
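As an illustration, a minimal sketch of a 4-gram overlap computation is given below. It is not the exact implementation used in our evaluation, whose precise formulas appear in Appendix G.3; tokenization here is assumed to be simple whitespace splitting.

```python
# Illustrative 4-gram overlap between a model hypothesis and a reference.
from collections import Counter

def ngrams(tokens, n=4):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_overlap(hypothesis: str, reference: str, n: int = 4) -> float:
    hyp, ref = ngrams(hypothesis.split(), n), ngrams(reference.split(), n)
    if not hyp:
        return 0.0
    matched = sum(min(count, ref[gram]) for gram, count in hyp.items())
    return matched / sum(hyp.values())  # fraction of hypothesis 4-grams found in the reference
```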
**3.3** **Main Results**
**Answer Correctness** The accuracy results for Task 1: Visual-Aided Reasoning are presented in
Table 1. While the GPT-4 series outperforms most models across all three modality settings in this
task, some open-source LLMs and LMMs perform below the random choice baseline on certain
tasks (24.42% accuracy). Specifically, Llama-2-7B achieves only 23.25% and 24.08% accuracy on
the pCQ2A and pCQpV2A tasks, respectively, and LLaVA-Next-Mistral-7B attains only 23.08%
accuracy on the pCQ2VA task. Furthermore, except for GPT-4 and GPT-4V, all other models achieve
lower performance than the most frequent choice baseline, indicating the significant challenge
presented by the collected math problems. Notably, only Llama-2-7B and GPT-3.5 perform better
on the (p)CQ2VA task compared to the other tasks, highlighting the deficiency of most models in
visual-aided reasoning.
-----
**Reasoning Quality** Figure 5a illustrates the low similarity between general reasoning and visual-aided reasoning answers. This suggests that the task we designed differs significantly from general
reasoning tasks, and the visual-aided reasoning capability remains a crucial bottleneck for some
models. GPT-4V not only achieves high accuracy on the CQ2VA task but also demonstrates distinct
reasoning steps compared to the general mathematical problem-solving process. This indicates its
excellent comprehension of the visual-aided reasoning task. Figure 5b shows how well each model
deduces the visual aids. We observe that while GPT-4V achieves high accuracy, its reasoning steps for
visual aids deviate from the references. On the other hand, models like Gemini-Pro-Vision, InternLM,
and Claude-3-Sonnet are more likely to produce visual-aided reasoning steps similar to the references,
but their final answers are incorrect. These findings prompt us to analyze the specific errors in the
reasoning steps and how they lead to incorrect answers.
(a) N-gram similarity of Answer between general reasoning (CQ2A) and visual-aided reasoning
(CQ2VA).
(b) N-gram similarity of Visual Aids between model
hypothesis and reference (CQ2VA).
Figure 5: Quality of reasoning steps among models with different visual-aided reasoning capabilities.
#### 4 Analysis
**4.1** **Quantitative Analysis**
**Reasoning Patterns** To uncover the extent to which current models overlook visual aids in the
reasoning process, we perform an analysis using the vanilla visual-aided reasoning setting (CQ2VA).
We randomly sample 200 model outputs with correct answers and analyze the causes that led to the
final resolution. We categorize and annotate the reasoning causes as follows:
- General: Correct reasoning without relying on visual aids.
- Arithmetic: Correct reasoning using pure arithmetic methods.
- Visual-Aided: Correct reasoning incorporating the use of visual aids.
- Backward: Correct reasoning derived from provided choices or the final conclusion.
- Hallucination: Correct answer reached through false inference involving hallucination.
As shown in Figure 6a, only 3.0% of the resolved questions benefit from generating visual aids,
indicating a negligible inclination towards visual-aided reasoning. Additionally, 19.3% of the cases
directly perform general reasoning steps without utilizing any visual aids. The majority of cases tend
to resolve problems using arithmetic methods (41.1%) or through false inference with hallucination
(33.2%). This distribution highlights the significant tendency of models to proceed with reasoning
along a text-only trajectory, disregarding the potential benefits of visual aids.
**Failure Analysis of Visual Aids Generation** Previous experimental results have shown low quality
in terms of generating visual aids. Therefore, we conducted a fine-grained analysis to determine the
causes of poor visual-aided reasoning steps in the CQ2VA task. We randomly sampled 200 outputs
-----
(a) GPT-4V reasoning patterns in visual-aided
reasoning with visual context (CQ2VA).
(b) GPT-4V error distributions during visualaided reasoning process (CQ2VA).
Figure 6: In-depth analysis on reasoning capability upon the utilization of visual information.
from GPT-4V and annotated the errors as follows: “Task Misunderstand”, “Reason Discrepancy”,
“Input-conflicting Hallucination”, “Fact-conflicting Hallucination”, “Context-conflicting Hallucination” [18], and “Correct” (representing generated visual aids that were reasonable). The visualization
of the error causes is shown in Figure 6b, where only 35.0% of the cases generated visual aids
correctly, and 22.0% produced other visual aids that served for different reasoning paths. Notably,
38.5% of the cases consisted of hallucinations in various forms, highlighting hallucination as a critical factor in reasoning failure.
**Correlation between Visual Aids and Hallucination** Visual aids serve as intermediate reasoning
steps within the visual dimension, revealing hidden
properties that facilitate specific reasoning paths and
reduce the overall difficulty of reasoning. To analyze
the effectiveness of visual aids in reducing hallucination, we asked annotators to evaluate 200 samples
from the previous analysis batch (see Section 4.1)
and estimate the severity of Visual-Aid Error and
_Hallucination. The correlation between visual aids_
and reasoning hallucination is presented in Figure
7a. Our findings indicate that correct visual aids can
effectively alleviate hallucinations during reasoning
and significantly increase the success rate of the reasoning process.
**Failure Analysis of Visual-Aided Reasoning** Finally, to investigate the underlying interconnection
between visual aids and final answers, we annotated
the Visual-Aid Error Types and Answer Correctness separately. It is observed that visual aids with
significant disparities are more likely to result in reasoning collapse, as shown in Figure 7b. Notably,
when generated visual aids exhibit low error severity,
the quantity of correct answers does not significantly
exceed that of incorrect answers. Comparing Figure
7a with Figure 7b, we observe a strong relationship
between incorrect answers and hallucinations in both
reasoning and visual-aid generation.
(a) Correlation between visual aid and reasoning
hallucination.
(b) Correlation between errors of visual aid and
answer correctness.
(c) Correlation between error reasons of visual aid
and answer correctness.
Figure 7: Error analysis of visual-aided reasoning task (CQ2VA task, GPT-4V).
-----
**4.2** **Other Analysis**
To provide a comprehensive and fine-grained analysis of the model capability in different aspects, we
present experimental results within various categories. We conduct an analysis of result distribution
for the math category (see Appendix G.2.1), complexity (see Appendix G.2.2), and visual-aid type
(see Appendix G.2.3). Each category includes comprehensive visualizations for different tasks. In all
tasks, GPT-4V outperforms other models in each aspect, demonstrating a significant advantage in
visual-aided reasoning.
#### 5 Related Work
**Benchmark** Numerous benchmarks have been developed to evaluate mathematical reasoning abilities in both textual and multi-modal tasks. These benchmarks primarily rely on textual inference as
the reasoning tool. Regarding the text-only task, arithmetic problems with pure numerical expressions
[19] and MPS (Mathematical Problem Solving) [20] have been extensively explored. On the multimodal side, benchmarks focus on mathematical problems within the geometry category to foster
research on spatial understanding and properties deduction [21, 22]. Other multi-modal benchmarks
concentrate on general visual contexts, such as bar charts [6]. More recently, [7] established a
comprehensive benchmark that incorporates different visual contexts. However, these benchmarks
primarily rely on textual reasoning to solve mathematical problems, limiting the comprehensive
mathematical decision space to a singular text dimension. In contrast, humans tend to combine visual
and textual reasoning to exploit latent properties and ease the complexity of reasoning. Therefore, we
propose the VisAidMath benchmark, which incorporates visual aids in the reasoning process.
**LLMs for Mathematical Reasoning** LLMs have not achieved satisfactory performance on mathematical reasoning benchmarks under zero or few-shot settings [23]. To further enhance these models,
chain-of-thought reasoning is introduced for step-wise reasoning, and various prompt engineering
methods are provided to improve generation control [24]. In the multi-modal setting, LLMs can
leverage rich information from visual contexts for subsequent mathematical reasoning. Recent studies
[12, 17] explore reasoning over diverse figures that contain abundant numerical and spatial information. Interaction with external tools [25] and downstream instruction tuning [26] are also widely
employed to improve overall reasoning quality. Another relevant idea proposed by [13] explores
inter-connected text-vision reasoning by providing text content with contextually relevant images
sourced from a pre-defined dataset. However, in contrast to our core idea, these output images
are deliberately generated to enhance the readability of the output content, rather than improving
reasoning capabilities.
#### 6 Conclusion
In this paper, we lay the groundwork for mathematical problem solving using multi-modal reasoning
steps. We introduce VisAidMath, a benchmark designed to investigate the visual-aided reasoning
capabilities of both large language models and large multi-modal models. Experiments on mainstream
models demonstrate significant deficiencies in deducing visual aids and the corresponding textual
reasoning steps. The best-performing LMM, GPT-4V, achieves only 45.33% accuracy on the visual-aided reasoning task, indicating insufficient progress in this research direction. Most models exhibit even lower performance when tasked with visual aids. Furthermore, GPT-4V lacks adequate visual-aid generation capability, correctly inferring valid visual aids for only 35% of samples. We conduct
fine-grained quantitative and qualitative analyses to reveal disparities in visual-aid inference and
text-vision interconnected reasoning. These findings highlight a substantial research opportunity in
improving foundational models, data, and evaluation methodologies.
-----
#### References
[1] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.
_arXiv preprint arXiv:2103.03874, 2021._
[2] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving
quantitative reasoning problems with language models. _Advances in Neural Information_
_Processing Systems, 35:3843–3857, 2022._
[3] Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuan-Jing Huang. A knowledge-aware sequence-to-tree
network for math word problem solving. In Proceedings of the 2020 conference on empirical
_methods in natural language processing (EMNLP), pages 7137–7146, 2020._
[4] Yuhuai Wu, Markus N Rabe, Wenda Li, Jimmy Ba, Roger B Grosse, and Christian Szegedy.
Lime: Learning inductive bias for primitives of mathematical reasoning. In International
_Conference on Machine Learning, pages 11251–11262. PMLR, 2021._
[5] Adam Dahlgren Lindström and Savitha Sam Abraham. Clevr-math: A dataset for compositional
language, visual and mathematical reasoning. arXiv preprint arXiv:2208.05358, 2022.
[6] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A
benchmark for question answering about charts with visual and logical reasoning. arXiv preprint
_arXiv:2203.10244, 2022._
[7] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao
Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical
reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023.
[8] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open
foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[9] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.
[10] OpenAI. GPT-3.5 Turbo. https://platform.openai.com/docs/models/gpt-3-5-turbo, 2023.
[11] OpenAI. GPT-4 Turbo. https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4, 2023.
[12] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee.
Llava-next: Improved reasoning, ocr, and world knowledge, 2024.
[13] Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke Ouyang, Xilin Wei,
Songyang Zhang, Haodong Duan, Maosong Cao, et al. Internlm-xcomposer2: Mastering
free-form text-image composition and comprehension in vision-language large model. arXiv
_preprint arXiv:2401.16420, 2024._
[14] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang
Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding,
localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 1(2):3, 2023.
[15] Google. Gemini Pro Vision. https://labelbox.com/product/model/foundry-models/google-gemini-pro-vision/, 2023.
[16] AI Anthropic. The claude 3 model family: Opus, sonnet, haiku. Claude-3 Model Card, 2024.
[17] OpenAI. GPT-4V(ision) system card. https://openai.com/index/gpt-4v-system-card/, 2023.
[18] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo
Zhao, Yu Zhang, Yulong Chen, et al. Siren’s song in the ai ocean: a survey on hallucination in
large language models. arXiv preprint arXiv:2309.01219, 2023.
-----
[19] Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. How well do large
language models perform in arithmetic tasks? arXiv preprint arXiv:2304.02015, 2023.
[20] Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu
Chen. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv
_preprint arXiv:2309.05653, 2023._
[21] Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin, Chongyu Chen, and Xiaodan Liang.
Unigeo: Unifying geometry logical reasoning via reformulating mathematical expression. arXiv
_preprint arXiv:2212.02746, 2022._
[22] Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun
Zhu. Inter-gps: Interpretable geometry problem solving with formal language and symbolic
reasoning. arXiv preprint arXiv:2105.04165, 2021.
[23] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song,
John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language
models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446,
2021.
[24] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. Advances in neural information processing systems,
35:22199–22213, 2022.
[25] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan,
and Graham Neubig. Pal: Program-aided language models. In International Conference on
_Machine Learning, pages 10764–10799. PMLR, 2023._
[26] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances
_in neural information processing systems, 36, 2024._
[27] larryflynt. image-concat. https://github.com/larryflynt/image-concat. GitHub, 2023.
[28] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony
Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer,
Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain
Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. Transformers: State-of-the-art
natural language processing. In Qun Liu and David Schlangen, editors, Proceedings of the 2020
_Conference on Empirical Methods in Natural Language Processing: System Demonstrations,_
pages 38–45, Online, October 2020. Association for Computational Linguistics.
#### Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper’s
contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] See Appendix E.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See
Appendix E.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to
them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments (e.g. for benchmarks)...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] Code will be
open sourced after paper acceptance. We will create a leaderboard for other researchers
to prevent data leakage.
-----
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they
were chosen)? [Yes] See Section F.1 for experiment settings.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We randomly choose the random seed and perform data sampling for analysis only. This does not compromise the impartiality of the experimental results.
(d) Did you include the total amount of compute and the type of resources used (e.g., type
of GPUs, internal cluster, or cloud provider)? [Yes] We conduct experiments on open-source models using NVIDIA A100 40GB GPUs on a DGX server.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 2.1 for
inference framework and Section D.2 for image concatenation tool.
(b) Did you mention the license of the assets? [Yes]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you’re
using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable
information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if
applicable? [Yes] See Section 2
(b) Did you describe any potential participant risks, with links to Institutional Review
Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount
spent on participant compensation? [Yes] See Section 2.1
-----
#### Contents
**A Future Direction**

**B Dataset Analysis**
B.1 Metadata
B.2 Data Source
B.3 Examples for Different Categorizations
B.3.1 Math Branch
B.3.2 Visual Aid Type
B.3.3 Complexity

**C Dataset Collection Detail**
C.1 Caption Writing Templates
C.2 Dataset Creation Pipeline
C.3 Human Annotation Interface

**D Dataset Preparation**
D.1 Machine Translation
D.2 Data Process
D.3 Formalization

**E Limitations and Social Impact**

**F Detail Experiment Settings**
F.1 Hyperparameters
F.2 Reasoning Prompter
F.3 Instruction across Answer Types
F.4 Visual Aid Extraction
F.5 Answer Extraction Prompter

**G More Experimental Results**
G.1 Results of other tasks
G.2 Quantitative Analysis
G.2.1 Performance across Math Branches
G.2.2 Performance across Complexities
G.2.3 Performance across Visual Aids
G.3 N-gram Similarities

**H In-depth Model Analysis**
H.1 General Reasoning Tendency
H.2 Visual Aid Inference Capability
-----
#### A Future Direction
**Spatial Capability** Despite the predominant emphasis on the construction and fitting of extensive datasets, mainstream works are confined to inference tasks within the textual dimension. LLMs exhibit exceedingly poor performance in providing visual reasoning steps, revealing deficiencies in spatial understanding, imagination, and other aspects. To address mathematical reasoning with visual-aid inference, future investigations could be directed toward specifically enhancing LLMs' adaptation to visual reasoning tasks, devising new methods for better integration of textual and visual reasoning, or designing specific architectures for inferring multimodal reasoning steps.
**Mathematical Image Generation** Preliminary experiments show that mainstream models exhibit poor mathematical image generation performance; we therefore caption each mathematical image to explore visual reasoning step inference. The primary model deficiencies lie in comprehension of mathematical image captions, apprehension of spatial relationships, lack of numerical precision, significant stylization discrepancies in the generated images, and more. Generating image-drawing code can increase overall drawing precision, but it suffers from frequent code errors. A long research road remains in mathematical image generation before textual-visual interconnected inference can be fully explored.
**Evaluation Metrics** The non-uniqueness of reasoning increases the complexity of evaluating visual aid generation. Different viewing angles, relative element sizes, and styles can alter perceptual features while the semantics remain unchanged, and a visual aid can be captioned by multiple correct expressions with stable semantics. Therefore, future research on evaluation metrics for visual aids should be directed toward semantics-based methods.
-----
#### B Dataset Analysis
**B.1** **Metadata**
We list the manually annotated metadata for each sample in Table 2. "visual context" is optional, depending on whether an image is provided along with the problem. "choices" is non-empty when the question form is multiple choice or true/false. "language" records the original language of each problem. We also record the Chinese text before machine translation with a "zh_" prefix at the bottom of each data sample.
| Category | Detail |
|---|---|
| question | Text of the mathematical problem |
| visual context: image path | Relative path of the visual context image |
| visual context: caption | Caption of the visual context image |
| visual aid: image path | Relative path of the visual aids image |
| visual aid: caption | Caption of the visual aids image |
| choices | Options for multiple-choice problems |
| question form | Question form: multiple choice, true/false, or free form |
| answer form | Integer, decimal, fraction, or choice (for multiple choice and true/false problems) |
| answer | Answer of this mathematical problem |
| metadata: language | Original language of this mathematical problem |
| metadata: source | Data source |
| metadata: math branch | Mathematical branch |
| metadata: drawing type | Type of visual aids |

Table 2: Detailed metadata for VisAidMath
**B.2** **Data Source**
We analyze the problem complexity of the 16 data sources according to three difficulty levels: 1) Easy: Chinese High School Entrance Examination; 2) Medium: Chinese College Entrance Examination; 3) High: Chinese Mathematical Olympiad. The complete complexity categorization of each source is listed in Table 3. In particular, since "AP Calculus" consists of both easy- and medium-level mathematical problems, we consider questions satisfying one of the following conditions as medium level: 1) they involve rotation of coordinate axes, or 2) they cannot be resolved in one step using the Newton-Leibniz formula.
-----
| Data Source | Detail |
|---|---|
| High Textbook | Chinese high school textbook |
| Middle Practice | Chinese high school practice sheet |
| AP Easy | AP calculus (categorized into Easy category) |
| Middle Simulate | Chinese middle school simulated examination |
| AP Middle | AP calculus (categorized into Medium category) |
| High Practice | Chinese high school practice sheet |
| DSE Final | HKDSE final examination |
| High Final | Chinese high school final examination |
| High Simulate | Chinese high school simulated examination |
| Math Analysis Demidovich Textbook | Demidovich Problems in Mathematical Analysis |
| Analytic Geometry Lv Textbook | Analytic geometry textbook written by Lingen Lv |
| CMO Final | Chinese Mathematical Olympiad |
| CMO Practice | Chinese Mathematical Olympiad practice sheet |
| AIME Final | American Invitational Mathematics Examination (AIME) |
| AMC 8 Practice | American Mathematics Competition 8 (AMC 8) |
| AMC 10 Final | American Mathematics Competition 10 (AMC 10) |

Table 3: Details of the data sources
-----
**B.3** **Examples for Different Categorizations**
**B.3.1** **Math Branch**
**ID: 1114**
**Question:** Determine that the sum of the squares of the lengths of the three medians of a triangle is equal to 3/4 of the sum of the squares of the lengths of the three sides.
**Visual Aids:**
**Caption of Visual Aids:** Let the three vertices of the triangle be A, B, C; let F be the midpoint of AB, E the midpoint of AC, and D the midpoint of BC; connect AD, BE, CF.
Table 4: Example of mathematical problems of plane geometry.
**ID: 1006**
**Visual Context:**
**Caption of Visual Context:** Two congruent triangular pyramids are glued together to obtain a hexahedron with all the dihedral angles equal, and the length of the shortest edge of the hexahedron is 2.
**Question:** Given that two congruent triangular pyramids are glued together to obtain a hexahedron with all the dihedral angles equal, and that the shortest edge of the hexahedron is 2, the distance between the two farthest vertices is
**Visual Aids:**
**Caption of Visual Aids:** Construct a plane BDF ⊥ edge AC, intersecting AC at F
Table 5: Example of mathematical problems of solid geometry.
-----
**ID: 619**
**Visual Context:**
**Caption of Visual Context:** In the triangular pyramid A−BCD, AB ⊥ AD, BC ⊥ BD, plane ABD ⊥ plane BCD; points E, F (E does not coincide with A, D) lie on edges AD, BD respectively, EF ⊥ AD, and EF ∥ plane ABC.
**Question:** In the triangular pyramid A−BCD, AB ⊥ AD, BC ⊥ BD, plane ABD ⊥ plane BCD, points E, F (E does not coincide with A, D) lie on edges AD, BD respectively, EF ⊥ AD, and EF ∥ plane ABC. Determine whether AD ⊥ AC.
**Visual Aids:**
**Caption of Visual Aids:** Take a point G on the line segment CD, and connect FG, EG so that FG ∥ BC
Table 6: Example of mathematical problems of analytic geometry.
**ID: 939**
**Question:** Can the surface of a container formed by rotating the curve y = Cx⁴ about the vertical axis Oy cause the liquid surface to fall uniformly as it flows out from the bottom of the container?
**Visual Aids:**
**Caption of Visual Aids:** In a geometric body obtained by rotating an upward parabola along the x-axis, dx is a line segment parallel to the x-axis, and the volume enclosed by it is drawn
Table 7: Example of mathematical problems of calculus and functions.
-----
**B.3.2** **Visual Aid Type**
**ID: 10**
**Visual Context:**
**Caption of Visual Context:** In Rt△ABC, ∠BAC = 90°, point D is the midpoint of BC, points E, F are points on AB, AC respectively, and ED ⊥ FD.
**Question:** As shown in the figure, in Rt△ABC, ∠BAC = 90°, point D is the midpoint of BC, points E, F are points on AB, AC, and ED ⊥ FD. Can a triangle be formed with the segments BE, EF, FC as its sides? If so, determine the shape of the triangle.
**Visual Aids:**
**Caption of Visual Aids:** Make BG ∥ FC, intersecting the extension line of FD at G, and connect EG
Table 8: Example of mathematical problem with auxiliary line as visual aid.
**ID: 719**
**Question:** Given that the vectors $\vec{OA}, \vec{OB}$ satisfy $|\vec{OA}| = |\vec{OB}| = 1$, $\vec{OA} \perp \vec{OB}$, $\vec{OC} = \lambda\vec{OA} + \mu\vec{OB}$ ($\lambda, \mu \in \mathbb{R}$), if M is the midpoint of AB and $|\vec{MC}| = 1$, then the maximum value of $\lambda + \mu$ is ( )
**Choices:** A. $1 - \sqrt{3}$  B. $1 + \sqrt{2}$  C. $\sqrt{5}$  D. $1 + \sqrt{3}$
**Visual Aids:**
**Caption of Visual Aids:** Establish a Cartesian coordinate system and draw the vectors $\vec{OA}, \vec{OB}$; M is the midpoint of AB
Table 9: Example of mathematical problem with rectangular coordinate system as visual aid.
-----
**ID: 1099**
**Visual Context:**
**Caption of Visual Context:** The cube ABCD−A1B1C1D1 has edge length 1, ⊙O1 is the incircle of square ABCD, ⊙O2 is the circumcircle of square ADD1A1, and P, Q are points on ⊙O1, ⊙O2 respectively.
**Question:** As shown in the figure, the edges of the cube ABCD−A1B1C1D1 have length 1, ⊙O1 is the incircle of square ABCD, ⊙O2 is the circumcircle of square ADD1A1, and P, Q are points on ⊙O1, ⊙O2. Find the range of the length of PQ.
**Visual Aids:**
**Caption of Visual Aids:** Let the origin O coincide with point D; establish the z-axis along the DD1 direction, the y-axis along the DA direction, and the x-axis along the DC direction
Table 10: Example of mathematical problem with rectangular three-dimensional coordinate system
as visual aid.
**ID: 723**
**Question:** Given that the plane vectors $\vec{a}, \vec{b}, \vec{c}$ satisfy $|\vec{a}| = 1$, $|\vec{b}| = \sqrt{3}$, $\vec{a} \cdot \vec{b} = 0$, and the angle between $\vec{c} - \vec{a}$ and $\vec{c} - \vec{b}$ is $\frac{\pi}{6}$, then the maximum value of $\vec{c} \cdot (\vec{b} - \vec{a})$ is
**Visual Aids:**
**Caption of Visual Aids: Make the tangent between circle o and circle c I1, I2, I3**
Table 11: Example of mathematical problem with geometry graph as visual aid.
-----
**ID: 220**
**Visual Context:**
**Caption of Visual Context: In the square SG1G2G3, E, F are the midpoints of G1G2, G2G3 respectively,**
and D is the midpoint of EF, connecting EF, SE, SF, SG2.
**Question: As shown in the figure, in the square SG1G2G3, E, F is the midpoint of G1G2, G2G3**
and D is the midpoint of EF respectively. If this square is folded into a tetrahedron along SE, SF and EF
so that the three points of G1, G2, G3 coincide, and the point of coincidence is labeled G, then which of the
edges of the tetrahedron SEFG are perpendicular to the faces of the tetrahedron?
**Visual Aids:**
**Caption of Visual Aids:** Construct the tetrahedron S-EFG and unfold it along SG, GE, GF to form the original square
_SG1G2G3_
Table 12: Example of mathematical problem with solid geometry as visual aid.
**Question:** A line with slope $\sqrt{2}$ through the focus F of the parabola $C: y^2 = 2px$ ($p > 0$) intersects the parabola at two points A, B. If $\vec{AF} = \lambda\vec{FB}$, then $\lambda =$ ( )
**Caption of Visual Aids:** Draw the parabola $C: y^2 = 2px$ ($p > 0$) with focus F; the line and the parabola C intersect at two points A, B
Table 13: Example of mathematical problem with function graph as visual aid.
-----
**B.3.3** **Complexity**
**ID: 30**
**Data Source: Chinese Middle School Practice**
**Visual Context:**
**Caption of Visual Context:** Figure: In △ABC, BD = DC = AC, and AE is the median of △ADC.
**Question:** It is known, as shown in the figure, that in △ABC, BD = DC = AC, and AE is the median of △ADC. Determine whether AB = 3AE.
**Visual Aids:**
**Caption of Visual Aids:** Extend AE to F such that EF = AE, and connect DF
Table 14: Example of mathematical problem classified into “Easy” category
**ID: 650**
**Data Source: HKDSE Examination**
**Visual Context:**
**Caption of Visual Context: AB is a vertical pole standing on the horizontal ground BCD, where**
∠CBD = 90°.
**Question:** In the figure, AB is a vertical pole standing on the horizontal ground BCD, where ∠CBD = 90°. If the angle between the plane ACD and the horizontal ground is θ, then tan θ =
**Visual Aids:**
**Caption of Visual Aids:** From point B, draw a perpendicular to CD intersecting CD at point E, and connect AE, BE
Table 15: Example of mathematical problem classified into “Medium” category
-----
**ID: 1024**
**Data Source: Chinese Mathematical Olympiad (CMO)**
**Visual Context:**
**Caption of Visual Context:** In the right triangular prism ABC−A1B1C1, AB = AC = 5, D, E are the midpoints of BC, BB1 respectively, and the quadrilateral B1BCC1 is a square with side length 6.
**Question:** As shown in the figure, in the right triangular prism ABC−A1B1C1, AB = AC = 5, D, E are the midpoints of BC, BB1, and the quadrilateral B1BCC1 is a square with side length 6. Determine that A1B ∥ plane AC1D.
**Visual Aids:**
**Caption of Visual Aids:** Connect A1C, intersecting AC1 at point O, and connect OD
Table 16: Example of mathematical problem classified into “Hard” category
-----
#### C Dataset Collection Detail
**C.1** **Caption Writing Templates**
Mathematical graphs consist of shapes and elements bound by specific relations or theorems. To reduce manual annotation work and enhance caption consistency, we standardize the caption writing for visual context and visual aids by defining templates for certain visual elements. The annotators should caption images with reference to these templates, as listed in Table 17.
| Category | Detail |
|---|---|
| Auxiliary Line | 1.1 Connect the points a, b to make the line segment. 1.2 Connect points a, b and extend them to intersect the CD line at point E. 1.3 Make a vertical line AB through point a. |
| Rectangular Three-Dimensional Coordinate System | 2.1 Establish a three-dimensional rectangular coordinate system with point o as the origin, oa as the x-axis positive direction, ob as the y-axis, and oc as the z-axis positive direction. |
| Rectangular Coordinate System | 3.1 Take point o as the origin and oa as the x-axis positive direction, ob as the y-axis; establish a right-angle coordinate system. 3.2 With point o as the origin and oa as the x-axis positive direction, the y-axis positive direction is the x-axis rotated 90 degrees counterclockwise/clockwise. |
| Function Graph | 4.1 Draw the y² = ax image in the real coordinate system with O as the origin. 4.2 Draw the y² = ax image in the real coordinate system with O as the origin, with the y = ax image intersecting it at point K in the first quadrant and at point D in the second quadrant. |
| Riemann Integral Problem | 5.1 Draw the graph of [function] in interval [interval] and draw the Riemann integral graph in units of [unit] with values on the right/left/middle side. |
Table 17: Caption writing templates of various visual aid types for annotators’ reference.
-----
**C.2** **Dataset Creation Pipeline**
Figure 8: Pipeline involving data collection, annotation, and verification.
**C.3** **Human Annotation Interface**
Figure 9: Annotation tool for interactive labeling
#### D Dataset Preparation
**D.1** **Machine Translation**
Since most data are collected from Chinese sources and the captions are annotated in Chinese, we uniformly translate all Chinese text into English. Open-source machine translation (MT) models often fail at semantic understanding and numerical conversion in the mathematical domain. Therefore, we leverage closed-source machine translation services (Baidu Translation[2] and DeepL[3]), distributing different parts of each sample to the two platforms to alleviate data leakage.
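As an illustration of this routing strategy, the following sketch (not the authors' code) alternates the fields of one sample between the two platforms; `translate_baidu` and `translate_deepl` are hypothetical wrappers around the commercial APIs, and the field names are assumptions.

```python
# Illustrative sketch: route different fields of a sample to two translation
# services so that neither service sees the complete problem.
def translate_sample(sample, translate_baidu, translate_deepl):
    """Translate the Chinese fields of one sample, splitting them across services."""
    fields = ["question", "visual_context_caption", "visual_aids_caption", "answer"]
    translated = dict(sample)
    for i, field in enumerate(fields):
        text = sample.get(field, "")
        if not text:
            continue
        # Alternate services per field to avoid exposing the full sample to either one.
        translated[field] = translate_baidu(text) if i % 2 == 0 else translate_deepl(text)
    return translated
```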
**D.2** **Data Process**
We further process the annotated data to adapt it to model input interfaces and to simplify evaluation. Since many models accept only one image per generation round, we leverage the tool in [27] to merge the visual-context images and the visual-aid images, respectively, whenever a sample contains multiple images. We round decimal answers to three decimal places and transform fractional answers into the form with '/' as the separator, where both the numerator and the denominator are integers. Mathematical problems with free-form answers are reshaped into multiple-choice or true/false problems, with a particular choice as the answer. We also conduct a final validation and modification round focused on the captions of the visual context and visual aids, checking description completeness.
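A minimal sketch of the answer-format normalization described above (the answer-type labels are assumptions; the exact implementation may differ):

```python
from fractions import Fraction

# Round decimals to three places and render fractions as "numerator/denominator";
# integers and choice answers are left untouched.
def normalize_answer(raw_answer: str, answer_type: str) -> str:
    if answer_type == "decimal":
        return f"{float(raw_answer):.3f}"                 # e.g. "24.142857" -> "24.143"
    if answer_type == "fraction":
        frac = Fraction(raw_answer).limit_denominator()
        return f"{frac.numerator}/{frac.denominator}"     # e.g. "0.5" -> "1/2"
    return raw_answer.strip()                             # integer / choice answers
```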
**D.3** **Formalization**
Each sample is finalized into its own directory with one folder for visual-context images, one folder for visual-aid images, and one "data.json" file. Each image is named according to the order in which it appears in the problem or reasoning (e.g., 1.png, 2.png), and the merged image is named "concatenate.png". The "data.json" file stores all text and metadata, including any untranslated text for specific samples. The dataset is released under the CC-BY-SA-4.0 license.
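For illustration, a sample stored in this layout could be read as follows; this is a sketch only, and the folder names `visual_context` and `visual_aid` are assumptions that may differ from the released naming.

```python
import json
from pathlib import Path

# Read one formalized sample: its metadata/text plus the two image folders.
def load_sample(sample_dir: str):
    root = Path(sample_dir)
    data = json.loads((root / "data.json").read_text(encoding="utf-8"))
    context_images = sorted((root / "visual_context").glob("*.png"))  # 1.png, 2.png, ...
    aid_images = sorted((root / "visual_aid").glob("*.png"))
    return data, context_images, aid_images
```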
#### E Limitations and Social Impact
The limitations of VisAidMath are three-fold. First, the dataset is restricted to 1,200 samples, since the collection, annotation, and verification of mathematical problems all require heavy manual work to satisfy the dataset principles; problems with visual aids demand additional human effort to understand each segment before judgment. Second, the deficiency of mainstream machine translation systems in the mathematical domain can introduce translation errors, which increases the complexity of problem solving and subsequent evaluation. Third, we cannot conduct a comprehensive analysis of visual-aided reasoning with image generation, since current LMMs still exhibit significant deficiencies in mathematical image generation. We do not expect negative social impact from our math-centric work; we hope only to enhance further understanding of LLM reasoning.
#### F Detail Experiment Settings
**F.1** **Hyperparameters**
We utilize the default inference settings for each LLM and LMM in our experiments. Only the specific hyperparameters that need clarification are listed in Tables 18 and 19. We conduct open-source model inference based on [28].
**F.2** **Reasoning Prompter**
We list the ICL prompts used to instruct different models to perform reasoning under six task settings in the mathematical domain: 1) CQ2A, 2) CQ2VA, 3) CQpV2A, 4) pCQ2A, 5) pCQ2VA, and 6) pCQpV2A. The instruction for each task is listed in Table 20. ICL examples can be found in our code, which will be open-sourced after acceptance.
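A hedged sketch of how such a prompt could be assembled from the task instruction (Table 20), the answer-type instruction (Table 21), and the question; the dictionary contents are abbreviated and the exact concatenation order is an assumption.

```python
# Abbreviated instruction templates mirroring Tables 20 and 21 (full texts omitted).
TASK_INSTRUCTIONS = {
    "CQ2A": "Please read the following math problem and the related image. After that,",
    "pCQ2A": "Please read the following math problem and captions of related visual context. After that,",
    # ... CQ2VA, CQpV2A, pCQ2VA, pCQpV2A as in Table 20
}

ANSWER_TYPE_INSTRUCTIONS = {
    "integer": "answer the question requiring an integer answer and provide the final value at the end.",
    "decimal": "answer the question requiring a decimal answer with three decimal places.",
    # ... fraction / choice as in Table 21
}

def build_prompt(task: str, answer_type: str, question: str, icl_examples: str = "") -> str:
    """Concatenate ICL examples, the task instruction, the answer-type instruction, and the question."""
    return (
        f"{icl_examples}\n"
        f"{TASK_INSTRUCTIONS[task]} {ANSWER_TYPE_INSTRUCTIONS[answer_type]}\n"
        f"Question: {question}"
    )
```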
2https://fanyi.baidu.com/
3https://www.deepl.com/translator
-----
| Model | Hyperparameters |
|---|---|
| GPT3.5 | model = gpt-3.5-turbo, temperature = 0, max_tokens = 4000 |
| GPT-4 | model = gpt-4-turbo-2024-04-09, temperature = 0, max_tokens = 4000 |
| Qwen-VL-Plus | model = qwen-vl-plus, temperature = 0, max_tokens = 4000 |
| Gemini-Pro-Vision | model = gemini-pro-vision, temperature = 0, max_tokens = 4000 |
| Claude-3-Sonnet | model = claude-3-sonnet, temperature = 0, max_tokens = 4000 |
| GPT4V | model = gpt-4-vision-preview, temperature = 0, max_tokens = 4000 |
Table 18: Hyperparameters for closed-source LLMs and LMMs in experiments.
| Model | Hyperparameters |
|---|---|
| Llama2-7B | model = Llama-2-7b-chat-hf, precision = bfloat16, temperature = 0, max_tokens = 4000 |
| Mistral-7b-Instruct-v0.2 | model = mistral-7b-instruct-v0.2, precision = bfloat16, temperature = 0, max_tokens = 4000 |
| LLaVA-Next-Mistral-7B | model = llava-v1.6-mistral-7b-hf, precision = float16, temperature = 0, max_tokens = 4000 |
| InternLM-XComposer2-VL | model = internlm-xcomposer2-vl-7b, precision = float16, temperature = 0, max_tokens = 4000 |
Table 19: Hyperparameters for open-source LLMs and LMMs in experiments.
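For reference, a minimal sketch of applying these decoding settings to one of the closed-source models through the OpenAI client; this is not the authors' evaluation harness, and image inputs as well as the open-source inference framework [28] are omitted.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def query_gpt4(prompt: str) -> str:
    """Query GPT-4 with the decoding settings listed in Table 18."""
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",  # as listed in Table 18
        temperature=0,
        max_tokens=4000,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```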
**F.3** **Instruction across Answer Types**
To facilitate accuracy evaluation, each sample is bound to a non-ambiguous result whose type is integer, fraction, decimal, or choice. We define choice as the answer type for multiple-choice and true/false problems. The specific task instruction for each output answer type is shown in Table 21.
**F.4** **Visual Aid Extraction**
The reasoning prompter instructs models to provide visual aids, reasoning, and an answer in a prescribed format. We can directly extract visual aids and follow-up reasoning from the output of LLMs and LMMs with sufficient zero-shot instruction-following capability (e.g., GPT-4, GPT-4V). However, a few weaker models (e.g., Llama2, LLaVA-Next-Mistral-7B) fail to provide standardized output. In such cases, we treat the entire output as both the visual aid and the follow-up reasoning for later answer extraction and similarity evaluation, respectively.
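A hypothetical sketch of this extraction step, assuming the prompted output uses explicit "Visual Aids:" and "Reasoning:" markers (the exact markers are an assumption, not confirmed by the paper):

```python
import re

# Split a model response into the visual-aid description and the follow-up
# reasoning; non-standardized output falls back to using the whole text for both.
def split_output(text: str):
    match = re.search(r"Visual Aids?:(.*?)Reasoning:(.*)", text, re.S | re.I)
    if match:
        visual_aid, reasoning = match.group(1).strip(), match.group(2).strip()
    else:
        visual_aid = reasoning = text.strip()  # weaker models ignoring the format
    return visual_aid, reasoning
```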
**F.5** **Answer Extraction Prompter**
Answer extraction is conducted on the reasoning text extracted from model output. We provide five ICL examples instructing GPT-4 to extract the answer from varied model outputs. Examples are shown in Table 22.
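A sketch of how the extraction prompt could be assembled from the ICL examples in Table 22; the field names and layout are assumptions, and the assembled prompt would then be sent to GPT-4 with the settings of Table 18.

```python
# Build a few-shot answer-extraction prompt: ICL examples first, then the new case.
def build_extraction_prompt(icl_examples: list, instruction: str,
                            question: str, model_response: str) -> str:
    parts = []
    for ex in icl_examples:
        parts.append(
            f"Instruction: {ex['instruction']}\nQuestion: {ex['question']}\n"
            f"Model response: {ex['response']}\nExtracted answer: {ex['answer']}\n"
        )
    parts.append(
        f"Instruction: {instruction}\nQuestion: {question}\n"
        f"Model response: {model_response}\nExtracted answer:"
    )
    return "\n".join(parts)
```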
-----
**Task** **Instruction**
CQ2A Please read the following math problem and the related image. After that,
CQ2VA Please read the following math problem and the related image, then conceive
the additional mathematical diagram or visual aid upon provided image to help
illustrate the problem, clarify the problem or assist in finding solution. The
drawing shape includes auxiliary line, plane geometry graph, solid geometry graph, function graph, plane coordinate and three-dimensional coordinate.
These additional drawings can enhance understanding of the problem and
potentially find new insights or approaches to solving it. Write drawing description of these additional diagram in text, and express math formula with
TeX grammar. After that, reason based on the drawing description,
CQpV2A Please read the following math problem and the related image. Read the
provided drawing description of additional mathematical diagram or visual
aid upon provided image in latex format, which helps illustrate the problem,
clarify the problem or assist in finding solution. The drawing shape includes
auxiliary line, plane geometry graph, solid geometry graph, function graph,
plane coordinate and three-dimensional coordinate. These additional drawings
can enhance understanding of the problem and potentially find new insights or
approaches to solving it. After that, based on the drawing description,
pCQ2A Please read the following math problem and captions of related visual context.
After that,
pCQ2VA Please read the following math problem and captions of related visual context,
then conceive the additional mathematical diagram or visual aid upon provided
image to help illustrate the problem, clarify the problem or assist in finding
solution. The drawing shape includes auxiliary line, plane geometry graph,
solid geometry graph, function graph, plane coordinate and three-dimensional
coordinate. These additional drawings can enhance understanding of the
problem and potentially find new insights or approaches to solving it. Write
drawing description of these additional diagram in text, and express math
formula with TeX grammar. After that, reason based on the drawing description,
pCQpV2A Please read the following math problem and captions of related visual context.
Read the provided drawing description of additional mathematical diagram
or visual aid upon provided image in latex format, which helps illustrate
the problem, clarify the problem or assist in finding solution. The drawing
shape includes auxiliary line, plane geometry graph, solid geometry graph,
function graph, plane coordinate and three-dimensional coordinate. These
additional drawings can enhance understanding of the problem and potentially
find new insights or approaches to solving it. After that, based on the drawing
description,
Table 20: Instructions for different mathematical problem solving tasks with visual context inside
input.
-----
**Answer Type** **Task Instruction**
Integer Answer the question requiring an integer answer in latex format and provide the final value at the end (e.g., -1, 0, 1, 2, 3).
Decimal Answer the question requiring a decimal answer in latex format and provide the final value at the end. Make sure the decimal answer is a floating-point number with three decimal places (e.g., 1.234, 2.345, 3.456).
Fraction Answer the question requiring a fraction answer in latex format and provide the final value at the end. Make sure the fraction answer uses '/' as the fraction bar and both numerator and denominator are integers (e.g., 1/2, 2/3, 3/4).
Choice (Multiple-Choice) Answer the question in latex format and provide the correct
option at the end. Make sure the option answer can only be
single capital letter (e.g., A, B, C, D).
Choice (True/False) Answer the question in latex format and provide the correct
option at the end. Make sure the option answer can only be
word ‘True’ or ‘False’.
Table 21: Instructions for mathematical problem solving with different answer types.
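A minimal sketch of how predictions could be scored against gold answers under these answer-type conventions; the tolerance and normalization choices here are assumptions, not the authors' exact evaluation code.

```python
# Compare an extracted answer with the gold answer according to its answer type,
# mirroring the formats required by Table 21 (three-decimal floats, "a/b"
# fractions, single-letter or True/False choices).
def is_correct(pred: str, gold: str, answer_type: str) -> bool:
    pred, gold = pred.strip(), gold.strip()
    if answer_type == "integer":
        try:
            return int(float(pred)) == int(gold)
        except ValueError:
            return False
    if answer_type == "decimal":
        try:
            return abs(float(pred) - float(gold)) < 1e-3
        except ValueError:
            return False
    if answer_type == "fraction":
        return pred.replace(" ", "") == gold.replace(" ", "")
    return pred.upper() == gold.upper()  # multiple-choice letter or True/False
```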
-----
**Element** **Prompt**
**Example 1**
**Instruction: Answer the question requiring a integer answer and provide the final value at the end (e.g., -1, 0, 1, 2, 3).**
**Question: In acute triangle △ABC, BE ⊥ AC at E and CD ⊥ AB at D, with BC = 25, CE = 7, BD = 15. BE and CD intersect at H. Connect DE; the circle with DE as diameter intersects AC at another point F. Then the length of AF is**
**Model response: Connect DF; then ∠DFE = ∠BEC = 90°, so DF ∥ BE. In Rt△BEC, BC = 25 and CE = 7, so BE = 24, cos∠ACB = 7/25 and sin∠ACB = 24/25. In Rt△BCD, BC = 25 and BD = 15, so cos∠ABC = 3/5 and sin∠ABC = 4/5. So sin A = sin(∠ABC + ∠ACB) = sin∠ABC cos∠ACB + cos∠ABC sin∠ACB = 4/5. In Rt△ABE, sin A = BE/AB = 24/AB = 4/5, so AB = 30; since BD = 15, point D is the midpoint of AB, and DF is the midline of △ABE, so AF = (1/2)AE = (1/2)√(30² − 24²) = 9. So the final answer is 9**
**Extracted answer: 9**
**Example 2**
**Instruction: Answer the question requiring a fraction answer and provide the final value at the end. Make sure the fraction answer uses '/' as the fraction bar and both numerator and denominator are integers (e.g., 1/2, 2/3, 3/4).**
**Question: The line x + y = √3·a and the circle x² + y² = a² + (a − 1)² intersect at points A, B; O is the origin of coordinates. If △AOB is an equilateral triangle, find the real value of a.**
**Model response: From the question, the radius of the circle is r = √(a² + (a − 1)²); the distance from the centre of the circle to the line x + y − √3·a = 0 is d = |√3·a|/√2. Because △AOB is an equilateral triangle, d = (√3/2)·r, i.e. |√3·a|/√2 = (√3/2)·√(a² + (a − 1)²), which solves to a = 1/2**
**Extracted answer: 1/2**
**Example 3**
**Instruction: Answer the question requiring a decimal answer and provide the final value at the end. Make sure the decimal answer is a floating-point number with three decimal places (e.g., 1.234, 2.345, 3.456)**
**Question: In right triangle △ABC, ∠DAC = 45°, AD = 13, AB = 12. Calculate the length of CD.**
**Model response: Let DE = x; then CE = AE = 13 + x, 12/(13 + x) = 5/x, so x = 65/7, DE = 65/7, CE = 156/7, and CD = √(DE² + CE²) = 169/7 ≈ 24.1**
**Extracted answer: 24.143**
Table 22: Three of the ICL examples used to prompt GPT-4 for answer extraction. The full five examples will be listed in our code, open-sourced after acceptance.
-----
Figure 10
#### G More Experimental Results
**G.1** **Results of other tasks**
-----
Model ALL PLG SDG AYG CAL AXL RTC THC PLG SDG FUG
_Heuristics Baselines_
Random Answer 24.42 21.54 34.31 21.45 20.07 24.44 20.87 35.16 10.53 32.89 21.50
Frequent Answer 40.83 28.92 50.65 40.36 44.22 32.79 47.25 74.73 20.00 47.73 44.53
_Large Language Models (LLMs): Text-Only Input_
Llama2-7B 23.25 22.77 29.74 17.82 22.11 22.80 19.72 28.57 8.42 28.29 21.11
Mistral-7b-Instruct-v0.2 25.58 24.31 29.41 25.09 23.47 24.59 25.46 25.27 6.32 26.32 25.91
GPT3.5 37.75 32.62 44.44 33.82 40.14 37.85 38.30 40.66 17.89 43.42 38.20
GPT4 51.17 41.54 47.39 50.91 65.99 45.45 55.73 59.34 22.11 49.34 61.80
_Large Multimodal Models (LMMs): Text-Only Input_
LLaVA-Next-Mistral-7B 28.83 26.15 35.29 24.36 29.25 27.72 28.67 30.77 10.53 35.53 28.79
InternLM-XComposer2-VL 34.33 28.00 45.75 28.36 35.03 32.64 33.49 53.85 13.68 36.18 33.40
Qwen-VL-Plus 33.00 34.15 39.54 29.09 28.57 34.87 30.05 34.07 13.68 43.42 30.52
Gemini-Pro-Vision 40.33 31.38 50.98 35.27 43.88 36.66 41.28 53.85 17.89 49.34 41.84
Claude-3-Sonnet 38.83 27.08 47.06 38.18 43.88 33.38 39.68 57.14 16.84 48.03 42.80
GPT4V 49.00 42.46 46.08 48.73 59.52 43.82 52.75 52.75 22.11 50.00 57.58
_Large Multimodal Models (LMMs): Multimodal Input_
LLaVA-Next-Mistral-7B 30.08 27.08 37.91 25.82 29.25 28.32 29.13 41.76 9.47 34.87 28.98
InternLM-XComposer2-VL 33.17 26.77 39.87 32.00 34.35 30.85 32.80 43.96 11.58 36.84 34.93
Qwen-VL-Plus 30.58 29.23 35.62 28.73 28.57 31.15 29.82 27.47 13.68 41.45 30.52
Gemini-Pro-Vision 39.00 27.38 49.02 36.36 43.88 35.32 40.37 52.75 14.74 48.68 42.03
Claude-3-Sonnet 39.33 30.15 46.41 37.45 43.88 34.72 38.99 56.04 16.84 47.37 42.42
GPT4V 49.08 41.54 47.39 48.73 59.52 43.82 53.21 51.65 24.21 51.97 57.97
Table 23: Accuracy scores on the (p)CQ2A task on VisAidMath. Text-only input represents handling the pCQ2A task; multi-modal input represents handling the CQ2A task. Meanings of all abbreviations are listed below. ALL → overall accuracy. Mathematical branches: PLG → plane geometry, SDG → solid geometry, AYG → analytic geometry, CAL → calculus and functions. Visual aid types: AXL → auxiliary line, RTC → rectangular coordinate, THC → rectangular three-dimensional coordinate, PLG → plane geometry graph, SDG → solid geometry graph, FUG → function graph. The highest scores have been bolded to facilitate comparison.
-----
Model ALL PLG SDG AYG CAL AXL RTC THC PLG SDG FUG
_Heuristics Baselines_
Random Answer 24.42 21.54 34.31 21.45 20.07 24.44 20.87 35.16 10.53 32.89 21.50
Frequent Answer 40.83 28.92 50.65 40.36 44.22 32.79 47.25 74.73 20.00 47.73 44.53
_Large Language Models (LLMs): Text-Only Input_
Llama2-7B 24.08 21.23 31.05 25.82 18.37 25.04 22.71 31.87 7.37 30.26 22.46
Mistral-7b-Instruct-v0.2 28.33 27.69 33.33 24.73 27.21 27.72 27.29 34.07 14.74 32.89 27.26
GPT3.5 36.33 31.08 39.22 34.91 40.48 33.08 37.84 50.55 14.74 39.47 39.73
GPT4 52.17 42.77 49.02 53.09 64.97 46.94 57.11 54.95 20.00 52.63 62.76
_Large Multimodal Models (LMMs): Text-Only Input_
LLaVA-Next-Mistral-7B 27.67 27.38 33.99 24.36 24.49 27.42 25.00 29.67 11.58 33.55 25.91
InternLM-XComposer2-VL 33.50 28.31 43.46 32.36 29.93 33.68 32.80 49.45 13.68 41.45 31.86
Qwen-VL-Plus 35.42 31.69 40.85 38.18 31.29 36.51 39.22 40.66 15.79 39.47 34.93
Gemini-Pro-Vision 42.92 32.31 51.96 40.73 47.28 39.79 43.35 57.14 17.89 47.37 45.87
Claude-3-Sonnet 39.00 31.38 42.16 41.45 41.84 35.92 40.14 46.15 17.89 42.11 43.19
GPT4V 47.58 40.31 47.71 42.55 60.20 42.32 47.94 50.55 21.05 55.26 53.93
_Large Multimodal Models (LMMs): Multimodal Input_
LLaVA-Next-Mistral-7B 27.08 27.69 32.03 23.64 24.49 27.42 24.31 26.37 11.58 32.89 25.72
InternLM-XComposer2-VL 30.42 20.00 39.54 33.09 29.93 26.97 31.88 40.66 10.53 34.87 32.25
Qwen-VL-Plus 32.58 31.69 30.39 37.45 31.29 33.23 38.99 25.27 16.84 37.50 34.55
Gemini-Pro-Vision 41.42 29.54 48.69 41.09 47.28 37.85 43.81 45.05 14.74 48.03 46.07
Claude-3-Sonnet 36.67 24.92 39.22 42.18 41.84 32.04 40.37 41.76 14.74 43.42 43.76
GPT4V 44.17 37.54 37.25 42.91 59.86 38.60 47.25 36.26 17.89 48.03 53.74
Table 24: Accuracy scores on the (p)CQpV2A task on VisAidMath. Text-only input represents handling the pCQpV2A task; multi-modal input represents handling the CQpV2A task. Meanings of all abbreviations are listed below. ALL → overall accuracy. Mathematical branches: PLG → plane geometry, SDG → solid geometry, AYG → analytic geometry, CAL → calculus and functions. Visual aid types: AXL → auxiliary line, RTC → rectangular coordinate, THC → rectangular three-dimensional coordinate, PLG → plane geometry graph, SDG → solid geometry graph, FUG → function graph. The highest scores have been bolded to facilitate comparison.
-----
**G.2** **Quantitative Analysis**
**G.2.1** **Performance across Math Branches**
Figures 11, 12, and 13 illustrate the accuracy scores of LMMs on the three tasks with image input across four math branches. GPT4V outperforms other models on problems in the "plane geometry", "analytic geometry", and "calculus and functions" branches. Gemini-Pro-Vision achieves the highest score on solid geometry problems. Notably, Claude-3-Sonnet and InternLM-XComposer2-VL both achieve results comparable to GPT4V in the "solid geometry" branch when reasoning with provided visual aids, exhibiting robustness and enhanced capabilities in spatial understanding and visual reasoning. GPT4V underperforms in visual-aided reasoning, exhibiting a significant deficiency in processing implicit visual information. Gemini-Pro-Vision performs better on "analytic geometry" and "calculus and functions" with provided visual aids, demonstrating better understanding of visual context within these mathematical branches.
Figure 11: Accuracies of LLMs and LMMs upon CQ2A (General Text-Only Reasoning) task across
math branches.
The accuracy scores of both LLMs and LMMs on the three tasks with image captions for visual context across math branches are shown in Figures 14, 15, and 16. In text-only reasoning, GPT4 outperforms other models across most branches and tasks. GPT4V and Gemini-Pro-Vision achieve the highest scores on "plane geometry" and "solid geometry" respectively in the pCQ2A task, with Gemini-Pro-Vision manifesting robust visual context utilization within the "solid geometry" branch. GPT4V and GPT4 share similar performance on "plane geometry" problems; however, the performance of these two models diverges when reasoning with provided visual aids. GPT-4V achieves significantly lower accuracy on the "analytic geometry" branch, demonstrating its difficulty in handling additional visual information in that branch. Gemini-Pro-Vision outperforms other models after visual aids are introduced into reasoning in the "solid geometry" category, indicating better understanding and processing of three-dimensional spatial information. Overall, more models reason better with visual aids on "solid geometry" and "analytic geometry" problems, possibly because the visual aids in these problems are bound more tightly to the reasoning path. In contrast, visual aids in "plane geometry" problems often leave a broad
-----
Figure 12: Accuracies of LLMs and LMMs upon CQ2VA (Visual-Aided Reasoning) task across
math branches.
Figure 13: Accuracies of LLMs and LMMs upon CQpV2A (Reasoning with Provided Visual Aids)
task across math branches.
-----
decision space, making it harder to utilize the additional visual context. Since many "calculus and functions" problems can be solved by generic methods, most models attain their highest accuracy in this branch.
Figure 14: Accuracies of LLMs and LMMs upon pCQ2A (General Text-Only Reasoning) task across
math branches.
-----
Figure 15: Accuracies of LLMs and LMMs upon pCQ2VA (Visual-Aided Reasoning) task across
math branches.
Figure 16: Accuracies of LLMs and LMMs upon pCQpV2A (Reasoning with Provided Visual Aids)
task across math branches.
-----
**G.2.2** **Performance across Complexities**
Figure 17: Accuracies of LLMs and LMMs upon CQ2A (General Text-Only Reasoning) task across
complexity levels.
-----
Figure 18: Accuracies of LLMs and LMMs upon CQ2VA (Visual-Aided Reasoning) task across
complexity levels.
Figure 19: Accuracies of LLMs and LMMs upon CQpV2A (Reasoning with Provided Visual Aids)
task across complexity levels.
-----
Figure 20: Accuracies of LLMs and LMMs upon pCQ2A (General Text-Only Reasoning) task across
complexity levels.
Figure 21: Accuracies of LLMs and LMMs upon pCQ2VA (Visual-Aided Reasoning) task across
complexity levels.
-----
Figure 22: Accuracies of LLMs and LMMs upon pCQpV2A (Reasoning with Provided Visual Aids)
task across complexity levels.
-----
**G.2.3** **Performance across Visual Aids**
With images as input, the accuracy scores of mainstream LMMs on the three tasks, broken down by the type of visual aid required, are listed in Figures 23, 24, and 25. GPT-4V outperforms other models on problems whose visual aids are "auxiliary line" and "function graph" across all three tasks. In the text-only reasoning task, GPT-4V achieves leading performance except for problems with a rectangular three-dimensional coordinate system as the visual aid. After visual aids are introduced, Gemini-Pro-Vision significantly outperforms other models on problems with a solid geometry graph or a three-dimensional rectangular coordinate system as the visual aid, and it also achieves results comparable to GPT-4 when provided with auxiliary lines. Comparing generated and provided visual aids, the overall average accuracy is enhanced most noticeably on "auxiliary line", "plane coordinate system", and "function graph", exhibiting higher sensitivity of reasoning to these visual aids.
Figure 23: Accuracies of LLMs and LMMs upon CQ2A (General Text-Only Reasoning) task across
visual aids.
Figures 26, 27, and 28 show the accuracy scores of LLMs and LMMs on the three tasks with image captions as input, broken down by visual aid type. GPT-4 achieves outstanding scores compared to other models. With golden visual aids provided, GPT-4V attains higher accuracy on problems with a "solid geometry graph" as the visual aid, and Gemini-Pro-Vision performs well when aided by a "rectangular three-dimensional coordinate system". The overall scores on "plane geometry graph" indicate substantial difficulty in employing the implicit or explicit information within plane geometry graphs.
-----
Figure 24: Accuracies of LLMs and LMMs upon CQ2VA (Visual-Aided Reasoning) task across
visual aids.
Figure 25: Accuracies of LLMs and LMMs upon CQpV2A (Reasoning with Provided Visual Aids)
task across visual aids.
-----
Figure 26: Accuracies of LLMs and LMMs upon pCQ2A (General Text-Only Reasoning) task across
visual aids.
Figure 27: Accuracies of LLMs and LMMs upon pCQ2VA (Visual-Aided Reasoning) task across
visual aids.
-----
Figure 28: Accuracies of LLMs and LMMs upon pCQpV2A (Reasoning with Provided Visual Aids)
task across visual aids.
-----
**G.3** **N-gram Similarities**
We report three n-gram similarities in the experiments to reveal model capability at a fine-grained level. For clarity, we define hAid as the visual aids generated in the visual-aided reasoning task, tAid as the golden visual aids, and dReas as the direct reasoning output of the general reasoning task. The three similarity scores are 1) S_{hAid-dReas}: the overall n-gram similarity between hAid and dReas; 2) S_{hAid-tAid}: the target-only n-gram similarity between hAid and tAid; 3) S_{dReas-tAid}: the target-only n-gram similarity between dReas and tAid. To obtain these similarity scores, we first count each n-gram in a given text:
NG = {Count(ng_1), Count(ng_2), ..., Count(ng_m)}    (1)
NG_clip is the intersection of NG_i and NG_j, capturing explicit n-gram matches. NG_base contains the essential n-grams to be matched: target-only similarity selects NG_i as NG_base to emphasize matching the target n-grams, while overall similarity uses the union of NG_i and NG_j as the denominator of the similarity calculation.
NG_clip = NG_i ∩ NG_j    (2)

NG_base = NG_i if target-only similarity, or NG_i ∪ NG_j if overall similarity    (3)

Then,

S = |NG_clip| / |NG_base|    (4)
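A direct implementation sketch of Eqs. (1)-(4), assuming |·| counts n-gram occurrences and that `tokens_i` plays the role of the base/target side; tokenization details are omitted.

```python
from collections import Counter

# Eq. (1): collect n-gram counts for a token sequence.
def ngram_counts(tokens, n=2):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def similarity(tokens_i, tokens_j, n=2, target_only=True):
    ng_i, ng_j = ngram_counts(tokens_i, n), ngram_counts(tokens_j, n)
    ng_clip = ng_i & ng_j                            # Eq. (2): element-wise minimum
    ng_base = ng_i if target_only else ng_i | ng_j   # Eq. (3)
    total_base = sum(ng_base.values())
    return sum(ng_clip.values()) / total_base if total_base else 0.0  # Eq. (4)
```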
The qualitative results in Section 4.1 show that the best-performing GPT4 seldom generates visual aids to assist problem solving, which is also confirmed by the quantitative results in Table 25. Since models achieve low S_{dReas-tAid} and tend to solve mathematical problems without visual aids, it is necessary in the visual-aided reasoning task to generate visual aids that differ from dReas. A model with a higher S_{hAid-dReas} can therefore easily fail at visual aid inference.
-----
| Model | S_{hAid-dReas} | S_{hAid-tAid} | S_{dReas-tAid} |
|---|---|---|---|
| _Large Language Models (LLMs): Text-Only Input_ | | | |
| Llama2-7B | 14.73 | 5.26 | 3.04 |
| Mistral-7b-Instruct-v0.2 | 57.21 | 5.84 | 4.88 |
| GPT3.5 | 10.13 | 4.36 | 4.29 |
| GPT4 | 2.37 | 4.21 | 3.47 |
| _Large Multimodal Models (LMMs): Text-Only Input_ | | | |
| LLaVA-Next-Mistral-7B | 29.59 | 1.96 | 4.09 |
| InternLM-XComposer2-VL | 76.02 | 4.88 | 4.84 |
| Qwen-VL-Plus | 11.03 | 1.89 | 0.85 |
| Gemini-Pro-Vision | 7.35 | 5.37 | 3.11 |
| Claude-3-Sonnet | 2.37 | 4.66 | 2.93 |
| GPT4V | 1.52 | 4.03 | 3.00 |
| _Large Multimodal Models (LMMs): Multimodal Input_ | | | |
| LLaVA-Next-Mistral-7B | 37.43 | 1.99 | 4.04 |
| InternLM-XComposer2-VL | 61.43 | 4.82 | 4.73 |
| Qwen-VL-Plus | 13.71 | 1.97 | 0.95 |
| Gemini-Pro-Vision | 6.93 | 5.48 | 3.08 |
| Claude-3-Sonnet | 2.26 | 4.61 | 2.95 |
| GPT4V | 1.91 | 3.98 | 3.03 |
Table 25: Three types of similarity scores reported across different models and modalities.
-----
#### H In-depth Model Analysis
**H.1** **General Reasoning Tendency**
**ID: 625**
**Question: As shown in the figure, the lateral edges of the regular triangular pyramid S-ABC are equal in length to the sides of the base. If E, F are the midpoints of SC, AB respectively, then the angle between EF and SA is equal to ( ).**
**Choices: A. 90°** B. 60° C. 45° D. 30°
Table 26: Example of GPT4V solving a mathematical problem by creating visual aids to reveal implicit visual information without being instructed to.
**ID: 466**
**Question: Given that the plane vectors ⃗a, ⃗b, ⃗c satisfy |⃗a| = √2, |⃗b| = 1, ⃗a·⃗b = −1, and the angle between ⃗a − ⃗c and ⃗b − ⃗c is π/4, then the maximum value of |⃗c| is**
**Choices: A. √10  B. √6  C. √5  D. √2**
**Reasoning: To solve the problem, we need to analyze the conditions given and deduce the maximum value of |⃗c|. **Given Conditions:** 1. |⃗a| = √2 2. |⃗b| = 1 3. ⃗a·⃗b = −1 4. The angle between ⃗a − ⃗c and ⃗b − ⃗c is π/4. **Reasoning:** - From ⃗a·⃗b = −1, we use the formula for the dot product ⃗a·⃗b = |⃗a||⃗b|cos θ, which leads to √2·1·cos θ = −1. Hence, cos θ = −1/√2 and θ = 135°. - For the vectors ⃗u = ⃗a − ⃗c and ⃗v = ⃗b − ⃗c, since the angle between them is π/4, we can write cos(π/4) = ((⃗a − ⃗c)·(⃗b − ⃗c))/(|⃗a − ⃗c||⃗b − ⃗c|), leading to 1/√2 = (⃗a·⃗b − ⃗a·⃗c − ⃗b·⃗c + ⃗c·⃗c)/(|⃗a − ⃗c||⃗b − ⃗c|). Substituting ⃗a·⃗b = −1 yields 1/√2 = (−1 − ⃗a·⃗c − ⃗b·⃗c + |⃗c|²)/(|⃗a − ⃗c||⃗b − ⃗c|). - We seek the maximum |⃗c|. From the properties of vector norms and dot products, |⃗c| can be expressed in terms of ⃗a·⃗c and ⃗b·⃗c, leading to a quadratic relation in terms of |⃗c| when resolved. After careful calculation and simplifying the relations to find the values that allow |⃗c| to be maximized, considering the constraints, we seek the largest possible value from the choices given. **Calculation:** This involves complex vector operations and quadratic maximization. Substituting for ⃗a·⃗c and ⃗b·⃗c, you would typically reach a quadratic in |⃗c|². After performing the necessary algebra (which might require numerical computation or more detailed algebra than provided here), you would conclude the following: **Answer:** The maximum value of |⃗c| that satisfies all given conditions, including the vector lengths and dot products, is best aligned with √10 given the potential values calculated. Hence, the correct choice is: **A. √10**
Table 27: Example of GPT4V solving a mathematical problem by performing text-only reasoning to reach the correct result.
-----
**ID: 275**
**Question: As shown in the figure, in the rectangular solid ABCD−A1B1C1D1, the points E, F, G are on the edges A1A, A1B1, A1D1 with A1E = A1F = A1G = 1; the points P, Q, R are on the edges CC1, CD, CB with CP = CQ = CR = 1. Judge whether plane EFG ∥ plane PQR.**
**Reasoning: - Instruction: Please read the following math problem and the related image. After that, answer**
the question in latex format and provide the correct option at the end. Make sure the option answer can only
be word 'True' or 'False'. - Question: As shown in the figure, in the rectangular solid ABCD−A1B1C1D1, the points E, F, G are on the edges A1A, A1B1, A1D1 with A1E = A1F = A1G = 1; the points P, Q, R are on the edges CC1, CD, CB with CP = CQ = CR = 1. Judge whether plane EFG ∥ plane PQR. - Visual Context Caption: 1) In the rectangular solid ABCD−A1B1C1D1, the points E, F, G are on the edges A1A, A1B1, A1D1, A1E = A1F = A1G = 1, and the points P, Q, R are on the edges CC1, CD, CB, CP = CQ = CR = 1, connecting GEF, PQR. - Reasoning: The planes EFG and PQR would be parallel if the lines EF, FG, GE are parallel to the corresponding lines PQ, QR, RP respectively. Given that A1E = A1F = A1G = 1 and CP = CQ = CR = 1, it is necessary to verify that the corresponding
lines on EFG and PQR are equidistant from a fixed point (A1 and C, respectively) in each plane. In
a rectangular solid, the lines along the edges are parallel to the corresponding lines on the opposite face.
Since A1A ∥ _CP_, A1B1 ∥ _CQ, and A1D1 ∥_ _CR, by transitivity, if A1E, A1F, A1G are parallel to_
_AE, B1F, D1G respectively, and the same applies to CP, CQ, CR, then plane EFG should be parallel to_
plane PQR. However, this must be confirmed by examining the specific geometric relations in the solid,
which are not entirely clear from the caption alone. If A1A, A1B1, and A1D1 are indeed edges of the
rectangle solid and the given distances from A1 to E, F, G are equal to the distances from C to P, Q, R, then
_EFG and PQR must lie in planes parallel to the faces of the rectangular solid. Hence, they would be parallel_
to each other. - Answer: True
Table 28: Example of GPT4V solving a mathematical problem by reasoning backward from potential solutions.
-----
Table 29: Example of GPT4V solving a mathematical problem by conducting arithmetic calculation.
-----
Table 30: Example of GPT4V providing a correct answer due to hallucination.
-----
**H.2** **Visual Aid Inference Capability**
**ID: 115**
**Question: Which of the following is equal to the area of the region inside the polar curve r = 2 cos θ and outside the polar curve r = cos θ?**
**Choices: A. 3∫₀^(π/2) cos²θ dθ  B. 3∫₀^π cos²θ dθ  C. (3/2)∫₀^(π/2) cos²θ dθ  D. 3∫₀^(π/2) cos θ dθ  E. 3∫₀^π cos θ dθ**
**Ground Truth Visual Aids: Draw images of the polar curves r = 2 cos θ and r = cos θ**
**Output Visual Aids: Create a polar coordinate system with the origin at the center. Plot (r = 2 cos θ) to visualize the outer boundary of the region. This curve creates a circle with a radius of 2 and centered at (1, 0). Next, plot the polar curve (r = cos θ), which creates a smaller circle with a radius of 1, also centered at (0.5, 0). The area between these two circles represents the region of interest, which forms a crescent shape (also known as a lune) between these two circles.**
**Reasoning: To find the area inside the outer curve and outside the inner curve, consider the area of each individual circle and then compute the area difference. The area of the larger circle is given by: (1/2)∫₀^(2π) (2 cos θ)² dθ = (1/2)∫₀^(2π) 4 cos²θ dθ = 2∫₀^(2π) cos²θ dθ. Similarly, the area of the smaller circle is given by: (1/2)∫₀^(2π) cos²θ dθ. Therefore, the area of the region inside the larger circle and outside the smaller circle can be calculated as: 2∫₀^(2π) cos²θ dθ − (1/2)∫₀^(2π) cos²θ dθ = (3/2)∫₀^(2π) cos²θ dθ. Since the question specifies the limits between 0 and π, the correct answer should be derived from this integral over half the circle (from 0 to π). By comparing the given choices, the correct answer must be equivalent to: (3/2)∫₀^π cos²θ dθ. Therefore, the correct option is **B**.**
Table 31: Example of GPT4V generating visual aids that are semantically correct.
**ID: 14**
**Question: As shown in the figure, AB is the diameter of ⊙O, P is any point on the extension line of AB, C is the midpoint of the semicircle ACB, PD cuts ⊙O at point D, and CD is connected, intersecting AB at point E. Given PD = PE, judge whether PE² = 2PA · PB**
**Ground Truth Visual Aids: Connect AD, BD; connect OC, OD**
**Output Visual Aids: Draw the line PA extended from the diameter AB of ⊙O, and the segment PC passing through the midpoint C of the semicircle ACB. Mark the point D where PD intersects ⊙O, and draw the segment CD intersecting AB at point E. Mark the point P on the extended line AB beyond A, where PD = PE.**
**Reasoning: Given that PD = PE and C is the midpoint of AB, by the Power of a Point Theorem, PE² = PA · PB. Since C is the midpoint of the semicircle, AC = CB = r (radius of ⊙O). Thus, ACB forms a right angle at C, and AB is the hypotenuse with length 2r. Also, PD is a chord of ⊙O passing through D, making CD a perpendicular bisector of PD; therefore, DE = (1/2)PD. Since PD = PE, we have DE = (1/2)PE. By applying the Pythagorean Theorem in △PDE, we have PE² = PD² + DE², which after substituting DE with (1/2)PE gives PE² = PD² + ((1/2)PE)². After simplifying, we get PE² = 2PD², or PE² = 2PA · PB. Thus, option 'True' is correct.**
Table 32: Example of GPT4V failing to generate visual aids due to task misunderstanding: the model merely replicates the given visual context instead of constructing new auxiliary elements.
-----
Table 33: Example of GPT4V failing to correctly generate visual aids due to input-conflicting hallucination.
-----
Table 34: Example of GPT4V failing to correctly generate visual aids due to fact-conflicting hallucination.
-----
Table 35: Example of GPT4V failing to correctly generate visual aids due to context-conflicting hallucination.
-----
**ID: 1023**
**Question: As shown in the figure, in the quadrangular pyramid S−ABCD, SD ⊥ base ABCD, AB ∥ DC, AD ⊥ DC, AB = AD = 1, DC = SD = 2, and E is a point on the edge SB such that plane EDC ⊥ plane SBC. Determine that SE = 3EB**
**Ground Truth Visual Aids:** Connect BD, take the midpoint G of DC, connect BG, and draw BK ⊥ EC with K the foot of the perpendicular, because plane EDC ⊥ plane SBC
Table 36: Example of GPT4V generating different visual aids for an alternative valid reasoning path and providing the correct final result.
-----
**ID: 1060**
**Question: Knowing that there is a point P outside the plane where the convex quadrilateral ABCD is located,**
and knowing that E, F, G, H, M, N are the midpoints of AB, PC, AD, BC, EF, and GH respectively, then
**Choices:** A. The four points P, D, M, N are coplanar, and PD = 4MN  B. The four points P, D, M, N are not coplanar, and PD = 4MN  C. The four points P, D, M, N are coplanar, and PD ≠ 4MN  D. The four points P, D, M, N are not coplanar, and PD ≠ 4MN
**Ground Truth Visual Aids:** Take the midpoint Q of CD; connecting the midpoints of the sides of quadrilateral ABCD in order yields a parallelogram GEHQ. Connect QE. Since the diagonals of a parallelogram bisect each other, QE passes through the midpoint N of GH; with N as the midpoint, connect FQ.
**Output Visual Aids: ** Create a three-dimensional sketch that places point P in space relative to the**
convex quadrilateral ABCD. Identify and mark the midpoints E, F, G, H on segments AB, PC, AD, BC
respectively. Then draw segments EF and GH and mark their midpoints M and N . Draw a line connecting
_P to D. Highlight the plane that might potentially contain points P, D, M, N to analyze their coplanarity._
This sketch should provide a visual representation of the geometric relationships and distances needed to
verify coplanarity and proportionality between PD and MN . **
**Reasoning: ** The midpoints E, F, G, H divide each segment into two equal parts.** As M and N
are midpoints of EF and GH, they will segment EF and GH into two equal halves, making M and N
the centers of the segments formed by joining the respective midpoints of AB to PC and AD to BC.
Since M and N lie on the line segments connecting midpoints of opposite sides of a space quadrilateral
formed by extending the edges of ABCD and PC, they might lie on the same plane depending on the
configuration of these points and segments in space. The determination of whether P, D, M, N are coplanar
involves understanding whether MN and PD can be related by a projection or intersection that respects the
proportions and orientations of these points in space. Given the information, since M and N are derived
from averaging points on edges linked to P, they should align with P geometrically. The specific condition
about the proportion of PD and MN (e.g., PD “ 4MN ) suggests a scaling between the lengths that
typically would not hold in this configuration unless specifically constructed that way. Given that MN
is essentially the midpoint line segment connecting the midpoints of line segments across the plane, and
without additional information to contradict it, one would normally conclude that the relative distances and
alignments make it unlikely for PD to be exactly four times MN under normal geometric conditions unless
additional relationships or alignments are specified. **Answer Choice:** The analysis leads us to option
**D**: P, D, M, N The four points are not coplanar, and PD ‰ 4MN . This is because, without additional
geometric constraints or alignments explicitly stated that could enforce a direct proportional relationship
or coplanar alignment, the general geometric arrangement of such points does not typically satisfy both
coplanarity and the specified proportionality between PD and MN .
Table 37: Example of GPT4V generating different visual aids for an alternative reasoning path and providing a wrong final result.
-----
## VisScience: An Extensive Benchmark for Evaluating K12 Educational Multi-modal Scientific Reasoning
**Zhihuan Jiang[13][†∗], Zhen Yang[13][†∗], Jinhao Chen[23][†], Zhengxiao Du[13], Weihan Wang[13],**
**Bin Xu[1], Yuxiao Dong[1], Jie Tang[1]**
1Tsinghua University 2Beihang University 3Zhipu.AI
**Abstract**
Multi-modal large language models (MLLMs) have demonstrated promising capabilities across various tasks by integrating textual and visual information to achieve
visual understanding in complex scenarios. Despite the availability of several benchmarks aimed at evaluating MLLMs in tasks ranging from visual question answering to complex problem-solving, most focus predominantly on mathematics or general
visual understanding tasks. This reveals a critical gap in current benchmarks, which
often overlook the inclusion of other key scientific disciplines such as physics and
chemistry. To address this gap, we meticulously construct a comprehensive benchmark, named VisScience, which is utilized to assess the multi-modal scientific
reasoning across the three disciplines of mathematics, physics, and chemistry. This
benchmark comprises 3,000 questions drawn from K12 education — spanning
elementary school through high school — equally distributed across three disciplines, with 1,000 questions per discipline. The questions within VisScience span
21 distinct subjects and are categorized into five difficulty levels, offering a broad
spectrum of topics within each discipline. With VisScience, we present a detailed
evaluation of the performance of 25 representative MLLMs in scientific reasoning.
Experimental results demonstrate that closed-source MLLMs generally outperform
open-source models. The best performances observed include a 53.4% accuracy
in mathematics by Claude3.5-Sonnet, 38.2% in physics by GPT-4o, and 47.0% in
chemistry by Gemini-1.5-Pro. These results underscore the strengths and limitations of MLLMs, suggesting areas for future improvement and highlighting the
importance of developing models that can effectively handle the diverse demands
of multi-modal scientific reasoning.
**1** **Introduction**
Recently, large language models (LLMs) [38, 1, 25, 43, 44, 7, 9, 18, 3] have demonstrated remarkable
capabilities across a wide range of tasks, including natural language understanding, text generation,
and complex problem solving. The success of LLMs facilitates the development of multi-modal
large language models (MLLMs) [39, 42, 6, 32, 31, 48, 49], which extends these capabilities by
integrating the ability to process and analyze both textual and visual information. Evaluation is
a significant component in assessing the ability of these MLLMs across various tasks, which has
attracted widespread attention and developed rapidly in recent years. For instance, several benchmark
datasets are proposed to evaluate the ability of MLLMs in general visual understanding, including
MME [22], MMMU [50], MMBench [33], MMStar [14], and SEED-Bench [28].
*ZHJ and ZY contributed equally. Emails: {jiang-zh21, yangz21}@mails.tsinghua.edu.cn
†Work done while Zhihuan Jiang, Zhen Yang and Jinhao Chen interned at Zhipu AI.
Preprint. Under review.
-----
Figure 1: The accuracies of representative MLLMs on VisScience across different subjects and difficulty levels. (Left) The accuracies on different subjects. (Right) The accuracies on various difficulty levels. Panels: (a) Mathematics, (b) Physics, (c) Chemistry.
-----
As a primary evaluation domain, mathematical reasoning presents specific challenges, requiring
models to handle complex mathematical problems accompanied by visual information. Previous
works [11, 12, 10] focus on geometric problems, resulting in the emergence of various evaluation
datasets such as GeoQA [11], Geometry3K [35], and UniGeo [12]. Subsequently, several benchmark
datasets [37, 53, 45] extend the scope of mathematical reasoning beyond geometry to encompass
various branches such as arithmetic, algebraic, statistics, logic, and functions. Notably, MathVista
also contains a portion of scientific datasets such as TQA [27], SciBench [47], and ScienceQA [36].
However, despite these advancements, several issues remain:
- Existing benchmarks often focus narrowly on mathematics, neglecting other crucial scientific disciplines like physics and chemistry.
- Existing benchmarks are often collected from limited sources, resulting in a lack of natural difficulty levels and leading to an incomplete evaluation of models' capabilities.
- Current benchmarks are predominantly available in a single language, limiting the evaluation of MLLMs' multilingual capabilities.
To address the limitations of existing benchmarks, we introduce a more expansive evaluation benchmark, named VisScience, which integrates both textual and visual information. This benchmark is designed to assess the performance of
MLLMs in multi-modal scientific reasoning tasks across disciplines like physics and chemistry
alongside mathematics. To construct this benchmark, we gather a total of 450,000 questions from
K12 education and meticulously select 3,000 questions as the final dataset, with each discipline
containing 1,000 questions. This benchmark spans a comprehensive range of knowledge points across
different chapters, with difficulty levels ranging from 1 to 5, ensuring that models are assessed on
both basic and challenging problems.
In order to better understand MLLMs’ performance on more detailed subjects within three disciplines,
we categorize VisScience into several subjects across each discipline. Specifically, we divide the
mathematical part of VisScience into six subjects such as plane geometry, solid geometry, functions
_and equations, algebraic operations, probability and statistics, and combinatorial mathematics. For_
physics, the dataset is categorized into eight subjects, including mechanics, thermodynamics, comprehensive experiments and methods, mechanical motion, vibration and waves, optics, electromagnetism,
_and modern physics. The chemistry section includes seven topics such as chemical experiments,_
_organic chemistry, material composition, electrochemistry, chemical reactions, inorganic chemistry,_
_and chemical equilibrium. In summary, VisScience contains 21 subjects across the three disciplines_
of mathematics, physics, and chemistry.
We conduct extensive experiments on VisScience to evaluate the scientific reasoning abilities of 25
representative MLLMs. These models include closed-source LLMs as well as closed-source and open-source MLLMs, offering a comprehensive analysis of their performance across various disciplines. As illustrated in Figure 1, the best performance is observed in closed-source MLLMs, with distinct models excelling in different disciplines. Specifically, Claude3.5-Sonnet achieves an accuracy of 53.4% in mathematics, GPT-4o reaches a 38.2% accuracy in physics, and Gemini-1.5-Pro records an accuracy of 47.0% in chemistry. Among open-source models, InternVL-1.2-Plus performs best with accuracies of 30.1% in mathematics, 24.8% in physics, and 31.2% in chemistry. Lastly, we systematically analyze the errors made by advanced models like GPT-4o on VisScience, providing valuable insights into the specific domains where these models excel and where they struggle.
**2** **VisScience Dataset**
In this section, we first illustrate the overview of our specially curated VisScience benchmark,
designed to assess the capabilities of MLLMs in multi-modal scientific reasoning. Next, we introduce
data generation process, which encompasses three core scientific disciplines: mathematics, physics,
and chemistry. Lastly, we discuss the difference between our VisScience benchmark and existing
benchmarks.
-----
**2.1** **Overview**
We introduce the VisScience benchmark, a meticulously curated collection aimed at evaluating the
capabilities of multi-modal large language models (MLLMs) in multi-modal scientific reasoning, with
a particular focus on bilingual tasks involving both English and Chinese. This dataset incorporates
textual and visual contexts as inputs and spans three scientific disciplines, including mathematics,
physics, and chemistry. Each discipline comprises 1,000 questions, meticulously gathered from
different chapters to ensure comprehensive coverage of topics and concepts. The core statistics of the
VisScience benchmark are presented in Table 1. The distributions of question length in VisScience
are provided in Appendix A.1.
In mathematics, the dataset can be divided into six key areas: plane geometry, solid geometry,
functions and equations, algebraic operations, probability and statistics, and combinatorial mathematics. The physical component of the VisScience benchmark encompasses eight subjects, including
mechanics, thermodynamics, comprehensive experiments and methods, mechanical motion, vibration
and waves, optics, electromagnetism, and modern physics. The chemistry section of the VisScience
benchmark includes seven topics such as chemical experiments, organic chemistry, substance composition, electrochemistry, chemical reactions, inorganic chemistry, and chemical equilibrium. A
detailed introduction of each subject within the three disciplines is available in Appendix A.2.
The primary objective of the VisScience benchmark is to provide a rigorous and diverse benchmark for assessing the multi-modal scientific reasoning capabilities of MLLMs. It supplements existing benchmarks that predominantly focus on mathematical reasoning by broadening the scope to expansive domains such as mathematics, physics, and chemistry. Through this enhancement, VisScience seeks to provide a more holistic measure of MLLMs' abilities across a wider spectrum of scientific disciplines.
| Statistic | Number |
|---|---|
| Total questions | 3000 |
| - Multiple-choice questions | 2,053 (68.4%) |
| - Free-form questions | 947 (31.6%) |
| Number of categories of math questions | 6 |
| Number of categories of physics questions | 8 |
| Number of categories of chemistry questions | 7 |
| Number of difficulty levels | 5 |
| Unique number of images | 3,000 |
| Unique number of questions | 3,000 |
| Unique number of answers | 1,427 |
| _Statistics with Chinese Language_ | |
| Maximum question length | 1297 |
| Maximum answer length | 112 |
| Maximum choice number | 5 |
| Average question length | 162.85 |
| Average answer length | 20.93 |
| _Statistics with English Language_ | |
| Maximum question length | 418 |
| Maximum answer length | 92 |
| Maximum choice number | 5 |
| Average question length | 80.93 |
| Average answer length | 12.30 |
Table 1: Key statistics of VISSCIENCE.
-----
Figure 2: Examples of the VisScience benchmark comprising three disciplines: mathematics, physics, and chemistry. The panels show sample questions from mathematics (plane geometry; functions and equations), physics (comprehensive experiments and methods; mechanics), and chemistry (organic chemistry; inorganic chemistry).
**2.2** **Data Generation**
The goal of the VisScience benchmark is to establish a comprehensive, bilingual (Chinese and
English) benchmark for evaluating the capabilities of MLLMs in processing and understanding
complex, scientifically-oriented tasks across various disciplines. In order to achieve this goal, we
present a two-stage data generation pipeline to meticulously construct a benchmark dataset comprising
3,000 questions, evenly distributed with 1,000 questions each in the fields of mathematics, physics,
and chemistry. Figure 2 shows some examples sampled from the VisScience benchmark across
three disciplines: mathematics, physics, and chemistry. More cases in VisScience are provided in
Appendix B.
**Data Collection.** We gather a total of 450,000 questions from the disciplines of mathematics,
physics, and chemistry, each enriched with visual information sourced from K12 education. This
collection spans a comprehensive range of knowledge points across different chapters, with the
difficulty levels scaled according to education grade. We then cluster the 150,000 questions within each discipline and carefully select 1,000 representative questions. These questions exemplify a range of
difficulty levels and a variety of subjects, guided by the following principles:
- Guaranteeing that every knowledge point is covered in the VisScience benchmark.
- Prioritizing the selection of questions from high-frequency knowledge points.
- Ensuring a mixture of questions across various difficulty levels.
In the end, the VisScience benchmark is constructed with 3,000 questions, with each of the three
disciplines – mathematics, physics, and chemistry – contributing 1,000 questions. This approach
ensures that the benchmark comprehensively covers a wide array of topics within each discipline,
reflecting the breadth and depth required for a thorough assessment of MLLMs’ capabilities.
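The selection step is described only at a high level above; for concreteness, a minimal sketch under stated assumptions is given below. It assumes a hypothetical `embed_fn` sentence-embedding helper and uses plain k-means as the clustering step, with the knowledge-point coverage principle checked afterwards; the actual pipeline may differ.

```python
import numpy as np
from sklearn.cluster import KMeans


def select_representatives(questions, embed_fn, n_select=1000, seed=0):
    """Cluster one discipline's question pool and keep one question per cluster.

    `questions` is a list of dicts with "text" and "knowledge_point" fields,
    and `embed_fn` maps a list of strings to an (N, d) embedding matrix;
    both are placeholders for whatever schema and embedding model the real
    pipeline uses.
    """
    vectors = np.asarray(embed_fn([q["text"] for q in questions]))
    km = KMeans(n_clusters=n_select, random_state=seed, n_init=10)
    labels = km.fit_predict(vectors)

    # Keep the question closest to each centroid as the cluster representative.
    selected = []
    for c in range(n_select):
        members = np.where(labels == c)[0]
        if members.size == 0:
            continue
        dists = np.linalg.norm(vectors[members] - km.cluster_centers_[c], axis=1)
        selected.append(questions[members[np.argmin(dists)]])

    # Report knowledge points that would be lost, so they can be added back
    # manually (the coverage principle above).
    missing = {q["knowledge_point"] for q in questions} - {
        q["knowledge_point"] for q in selected
    }
    return selected, missing
```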
**Data Annotation. To improve the quality of the VisScience benchmark, we conduct multiple checks**
using both manual reviews and LLM assessments to confirm the completeness of each question. For
textual content, we check for accuracy, coherence and relevance, ensuring that each question aligns
with the corresponding scientific discipline and is free of ambiguities. For associated visual content,
we rigorously screen out images that are incorrect, unclear, or lacking in detail, retaining only those
that are clear and richly informative. To maintain the volume of the VisScience benchmark, we
compensate for questions removed due to incomplete information by selecting new questions on
identical topics from the original dataset. This approach ensures that the overall number of questions
and the breadth of content coverage are consistently maintained. This verification process ensures that both the textual and visual components of the benchmark are reliable, making VisScience an effective tool for evaluating the capabilities of MLLMs in scientific reasoning.
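A compact sketch of this check-and-replace loop is shown below; the `ask_llm` call, the field names, and the KEEP/DROP protocol are illustrative assumptions, and the image-quality screening and manual review described above are not reproduced here.

```python
import random


def verify_and_replace(selected, pool_by_topic, ask_llm, seed=0):
    """LLM-assisted completeness check with same-topic replacement.

    `ask_llm(prompt)` is a placeholder for the reviewing model; questions it
    flags would additionally go through manual review. A rejected question is
    swapped for another question on the same topic so that the total count
    and coverage of the benchmark stay fixed.
    """
    rng = random.Random(seed)
    verified = []
    for q in selected:
        prompt = (
            "Check this question for completeness: is the text unambiguous, "
            "is the figure reference clear, and does the answer match the "
            "question? Reply with KEEP or DROP only.\n\n"
            f"Question: {q['text']}\nAnswer: {q['answer']}"
        )
        if "KEEP" in ask_llm(prompt).upper():
            verified.append(q)
        else:
            # Draw a replacement on the same topic from the original pool.
            candidates = [c for c in pool_by_topic[q["topic"]] if c is not q]
            if candidates:
                verified.append(rng.choice(candidates))
    return verified
```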
**2.3** **Data Analysis**
We utilize statistical analysis to assess subject distributions and difficulty levels within the VisScience
benchmark. Figure 3 presents a visual representation of the categorization of questions within the VisScience benchmark. This illustration shows the distribution of questions dedicated to each subject
area – mathematics, physics, and chemistry – and details the distribution across various difficulty
levels, ranging from 1 to 5.
Figure 3: The distribution of detailed subjects and difficulty levels in each discipline within the VisScience benchmark: (a) mathematics, (b) physics, (c) chemistry. (Left) The distributions of various subjects. (Right) The distributions of difficulty levels.
**Subject Distributions.** To categorize each discipline into more detailed subjects, we first utilize an LLM to segment the overall discipline into specific topics based on the knowledge points and terminology presented in the questions. We then conduct a manual review of these categories to confirm their rationality and appropriateness, ensuring that each question is accurately categorized. As shown in Figure 3, the mathematical part of the VisScience benchmark is divided into six subjects: plane geometry (43.6%), algebraic operations (15.4%), functions and equations (12.0%), probability and statistics (11.8%), solid geometry (10.9%), and combinatorial mathematics (6.3%). The distributions for the physics and chemistry disciplines are also presented in the figure, providing a comprehensive overview of the scope of the VisScience benchmark within these scientific fields.
**Difficulty Levels.** To classify the questions into distinct difficulty levels, we first utilize an LLM for the initial sorting and then conduct a manual verification. The questions within each discipline are stratified into five difficulty levels ranging from 1 to 5, defined as Basic, Easy, Intermediate, Advanced, and Expert. Figure 3 shows the distribution of questions across these difficulty levels. Each discipline demonstrates a unique profile of topic distribution across the difficulty levels. For instance, in mathematics, plane geometry is concentrated at the intermediate level, algebraic operations sit at the basic level, and functions and equations appear at the highest difficulty level, reflecting their respective placement within educational curricula. In physics, mechanics, a fundamental concept in physics education, dominates the introductory level, whereas electromagnetism is positioned at the highest difficulty level, demanding the application of various advanced knowledge points. In chemistry, organic chemistry and chemical equilibrium represent the pinnacle of K12 chemical education, requiring deep conceptual understanding and the ability to apply knowledge to complex scenarios.
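As a rough illustration of the LLM pass (the exact prompts and the manual review loop are not reproduced here), a sketch might look like the following; `ask_llm`, the label strings, and the two-line answer format are assumptions.

```python
LEVELS = ["Basic", "Easy", "Intermediate", "Advanced", "Expert"]

MATH_SUBJECTS = [
    "plane geometry", "solid geometry", "functions and equations",
    "algebraic operations", "probability and statistics",
    "combinatorial mathematics",
]


def tag_question(question_text, subjects, ask_llm):
    """First-pass LLM labeling of subject and difficulty (1-5).

    `ask_llm(prompt)` is a placeholder; its two-line answer is parsed into a
    subject and a numeric level, and both labels are assumed to be checked by
    a human reviewer afterwards.
    """
    prompt = (
        "Classify the question below. Answer with exactly two lines:\n"
        f"Line 1: the subject, one of {', '.join(subjects)}\n"
        f"Line 2: the difficulty, one of {', '.join(LEVELS)}\n\n"
        f"Question: {question_text}"
    )
    lines = ask_llm(prompt).strip().splitlines()
    subject = lines[0].strip().lower() if lines else None
    level_name = lines[1].strip().title() if len(lines) > 1 else None
    level = LEVELS.index(level_name) + 1 if level_name in LEVELS else None
    return subject, level
```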
**2.4** **Comparison with Other Benchmarks**
We compare the VisScience benchmark with 5 existing benchmarks, including MathVista [37],
Math-Vision [45], CMMMU [52], ScienceQA [36], and SciBench [47].
**VisScience vs MathVista. MathVista is a comprehensive multi-modal benchmark for mathematical**
reasoning, comprising data from 28 existing datasets and 3 newly collected datasets. In MathVista,
the majority of questions are annotated after collecting images, which results in a certain homogeneity
within the data. In contrast, VisScience directly collects its questions from K12 education, featuring
an average question length of 80.93 words. Such questions provide more contextual information,
which facilitates a more thorough evaluation of the models’ reasoning capabilities. Unlike MathVista
that encompasses only seven subjects within mathematics, VisScience offers a far broader scope,
including 22 distinct subjects across mathematics, physics, and chemistry. Furthermore, VisScience
distinguishes itself by being a bilingual benchmark, including both Chinese and English versions of
questions. This feature is particularly advantageous as it assesses MLLMs’ capabilities in scientific
reasoning across different languages.
**VisScience vs Math-Vision. Math-Vision is a mathematics benchmark derived from 19 competitions,**
covering 16 topics across 5 levels of difficulty. Unlike Math-Vision, which is sourced from competitions, VisScience spans a broader educational spectrum, incorporating a natural gradient
of difficulty from elementary school to high school. Furthermore, VisScience extends beyond
mathematics to include questions from physics and chemistry, significantly broadening its scope
and applicability. While Math-Vision primarily focuses on the unique challenges of competitive
environments, VisScience is grounded in real-world educational settings.
**VisScience vs CMMMU. CMMMU comprises 12,000 manually collected multi-modal questions**
from university exams, quizzes, and textbooks, covering 6 core subjects and 30 specific fields.
Similar to VisScience, CMMMU is a bilingual benchmark, offering questions in both Chinese and
English. Within this dataset, only 1,601 questions are dedicated to the disciplines of mathematics,
physics, and chemistry, accounting for only 13.34% of the total dataset. VisScience features a total
of 3,000 questions, significantly outnumbering those in CMMMU dedicated to the same subjects.
The questions in CMMMU are set at the university level, characterized by high difficulty, demanding
that the model possesses substantial professional domain knowledge and expert-level reasoning
abilities. In contrast, VisScience is drawn from K12 education and spans a broader range of difficulty. This
range allows VisScience to more comprehensively evaluate MLLMs’ capabilities across different
educational stages.
**VisScience vs ScienceQA. ScienceQA is a newly developed benchmark featuring approximately**
21,000 multimodal multiple-choice questions across a variety of science topics. In the ScienceQA
dataset, 30.8% of questions incorporate both image and text contexts, providing a multimodal
benchmark to test MLLMs in scientific reasoning. The questions in ScienceQA have an average
length of only 12.11 words. In contrast, VisScience also serves as a benchmark for evaluating
the scientific reasoning abilities of MLLMs, but it typically features longer and more textually
detailed questions. Specifically, the Chinese version of VisScience has an average question length
of 162.85 words, providing a more comprehensive and intricate testing ground for evaluating the
depth of detailed reasoning in MLLMs. Additionally, VisScience contains mathematical problems,
further enriching the benchmark’s scope by testing MLLMs on their mathematical problem solving
capabilities alongside their scientific reasoning.
**VisScience vs SciBench. SciBench is a benchmark developed to evaluate the reasoning capabilities**
of LLMs in solving collegiate-level scientific problems within the domains of mathematics, chemistry,
and physics. The majority of the data in SciBench focuses on assessing the scientific reasoning of LLMs; it includes only 177 problems that incorporate visual elements for evaluating the performance of MLLMs. In contrast, VisScience is primarily focused on multimodal scientific reasoning, covering
similar subjects such as mathematics, chemistry, and physics. VisScience differentiates itself by
offering a more comprehensive range of difficulty levels and subjects, making it a broader benchmark
for assessing the capabilities of MLLMs in scientific reasoning.
**3** **Experiments**
In this section, we conduct experiments to evaluate a variety of MLLMs using the VisScience
benchmark. The evaluation encompasses both closed-source and open-source models, enabling a comprehensive analysis of their effectiveness in scientific reasoning. In addition, we provide a detailed error analysis of the advanced model GPT-4o.
**3.1** **Experimental Setup**
**Models.** We conduct our evaluation across a diverse array of models, including closed-source text-only large language models (LLMs), closed-source multi-modal large language models (MLLMs), and open-source MLLMs. This comprehensive assessment covers more than 20 models, which are listed below. The sources of the models are reported in Appendix C.1.
- Closed-source text-only LLMs: ChatGPT [38], GPT-4 [1], Claude2 [4].
- Closed-source MLLMs: Gemini-1.0-Pro [42], Gemini-1.5-Pro [42], GPT-4o [40], Qwen-VL-Max [8], Qwen-VL-Plus [8], Claude3.5-Sonnet [5], Claude3-Opus [6], GLM-4V [2], and Step-1V [41].
- Open-source MLLMs: mPLUG-Owl [49], LLaMA-Adapter-V2 [23], MiniCPM-Llama3-V2.5 [26], LLaVA-1.5 [30], DeepSeek-VL [34], ShareGPT4V [13], SPHINX-Plus [24], InternLM-XC2 [20], InternVL-1.2-Plus [15], InternVL-Chat-V1.5 [17], CogVLM [46], CogVLM2 [46], and GLM-4V-9B [25].
**Evaluation Details.** The evaluation process is conducted through two steps: generation and
judgment. During the generation phase, the models are tasked with producing responses based on a
set of questions. In the zero-shot setting, we directly prompt the models with these questions without any examples. In the 2-shot Chain of Thought (CoT) setting, we provide the models with two relevant
examples before they are prompted with the questions. For MLLMs, we supply the models with the
textual questions and the corresponding image to obtain their responses. During the judgment phase,
we utilize GPT-4o to evaluate the models’ responses by comparing them with the standard answers to
assess consistency. This phase involves calculating the accuracy across different subjects and levels.
The prompts used in the two phases are provided in Appendix C.2.
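A minimal sketch of this two-phase protocol is given below; the `query_mllm` and `query_gpt4o` wrappers, the field names, and the exact judging instruction are assumptions, with the real prompts deferred to Appendix C.2.

```python
from collections import defaultdict


def evaluate(benchmark, query_mllm, query_gpt4o):
    """Two-phase evaluation: response generation, then GPT-4o judgment.

    `benchmark` is an iterable of dicts with "question", "image", "answer",
    "subject", and "level" fields; `query_mllm` and `query_gpt4o` are
    placeholder API wrappers.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for item in benchmark:
        # Phase 1: generation (zero-shot here; the 2-shot CoT setting would
        # prepend two worked examples to the prompt).
        response = query_mllm(question=item["question"], image=item["image"])

        # Phase 2: judgment, comparing the response with the reference answer.
        verdict = query_gpt4o(
            "Judge whether the model response is consistent with the reference "
            "answer. Reply with exactly 'correct' or 'incorrect'.\n"
            f"Reference answer: {item['answer']}\nModel response: {response}"
        )
        is_correct = verdict.strip().lower().startswith("correct")

        # Accumulate accuracy by subject and by difficulty level.
        for key in (("subject", item["subject"]), ("level", item["level"])):
            total[key] += 1
            correct[key] += int(is_correct)

    return {key: correct[key] / total[key] for key in total}
```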
**3.2** **Experimental Results**
**Overall Results.** Table 2 demonstrates the performance of several models on the Chinese-language version of VisScience. Experimental results show that the closed-source models achieve the best performance on VisScience. Specifically, Claude3.5-Sonnet achieves an accuracy of 53.4% in mathematics, GPT-4o attains a 38.2% accuracy in physics, and Gemini-1.5-Pro accomplishes an accuracy of 47.0% in chemistry. Among open-source models, InternVL-1.2-Plus stands out, demonstrating robust capabilities across the scientific disciplines with accuracies of 30.1% in mathematics, 24.8% in physics, and 31.2% in chemistry. Nevertheless, there is a notable disparity in performance between closed-source and open-source models, with closed-source models generally exhibiting superior performance. InternVL-1.2-Plus, although trailing behind advanced closed-source models such as GPT-4o, Claude3.5-Sonnet, and Gemini-1.5-Pro, shows significant potential for improvement. Notably, the performance in physics underscores unique
challenges that necessitate targeted improvements in model training. This discipline often involves
the interpretation of conceptual and numerical data, challenging the reasoning and computational
abilities of MLLMs. As evidenced in Table 2, even advanced models like GPT-4o achieve relatively
lower accuracies in physics compared to other disciplines. Results on VisScience with the version of
the English language are provided in Appendix D.1.
**Results on Mathematics Across Different Subjects.** The mathematical part of VisScience
encompasses a wide range of subjects, including plane geometry, solid geometry, functions and
equations, algebraic operations, probability and statistics, and combinatorial mathematics. Table 3
reports the comprehensive results across different mathematical subjects. It is evident that closed-source MLLMs such as Claude3.5-Sonnet and GPT-4o excel across multiple subjects, particularly in _functions and equations_, _probability and statistics_, and _algebraic operations_. Conversely, open-source models show a more varied performance, with notable strengths in certain areas but generally lower scores compared to closed-source models. For instance, InternVL-1.2-Plus and InternVL-Chat-V1.5 perform relatively well in plane geometry and functions and equations. These detailed results on different subjects provide valuable insights into the specific strengths and weaknesses of various
MLLMs. Additionally, results on physics and chemistry across different subjects are presented in
Appendix D.2 and Appendix D.3, respectively. Case studies illustrating correct responses by MLLMs
can be found in Appendix E.
**3.3** **Error Analysis**
To analyze the causes of errors in model responses, we meticulously review incorrect answers to
identify common patterns. We specifically focus on the advanced MLLM, GPT-4o, to illustrate
specific instances of errors and their distributions across the disciplines of mathematics, physics, and
chemistry. Figure 4 demonstrates the distributions of these errors, categorizing them into several
types such as reasoning error, knowledge error, calculation error, vision recognition error, and
question misunderstood error. Notably, across all disciplines, reasoning errors are the most prevalent,
indicating a challenge in the model’s ability to solve scientific problems that involve visual information. Specifically, reasoning errors account for 56.5% of the total errors in mathematics, 50.1% in physics, and 40.6% in chemistry. This is followed by knowledge errors, which are particularly significant in chemistry, constituting 33.2% of the errors in that discipline. Knowledge errors also represent the second most common error type in physics. However, knowledge errors in mathematics are less prevalent, making up only 8.8% of the total errors. This indicates that while the model struggles with conceptual and fundamental principles in chemistry and physics, it demonstrates a better grasp of mathematical concepts. Vision recognition error is another significant type of error,
accounting for 18.8% of the errors in mathematics, making it the second most prevalent error type in
this discipline. This error category is also significant in physics and chemistry, where it constitutes
17.8% and 15.3% of the errors, respectively. This type of error highlights the challenges faced by the
model in processing and understanding visual information. Furthermore, calculation error accounts
for a small portion of the errors, especially in chemistry, indicating that the model excels particularly
in handling numerical computations. Figure 5 shows representative cases of reasoning errors in the
disciplines of mathematics, physics, and chemistry. More detailed examples of these errors can be
found in Appendix F.
Figure 4: Error distributions of GPT-4o on VisScience across the disciplines of (a) mathematics, (b) physics, and (c) chemistry.
| Model | LLM | Input | Mathematics | Physics | Chemistry |
| --- | --- | --- | --- | --- | --- |
| _Closed Source Models (APIs)_ | | | | | |
| _Text-only LLMs_ | | | | | |
| Zero-shot ChatGPT | - | Q | 22.4 | 22.7 | 18.6 |
| Zero-shot GPT-4 | - | Q | 25.9 | 30.4 | 33.1 |
| Zero-shot Claude-2 | - | Q | 27.3 | 22.0 | 24.4 |
| Zero-shot Claude3-Opus | - | Q | 29.3 | 30.8 | 32.5 |
| Zero-shot Claude3.5-Sonnet | - | Q | 29.7 | 35.3 | 36.9 |
| Zero-shot GPT-4o | - | Q | 31.1 | 38.0 | 39.6 |
| 2-shot CoT Claude2 | - | Q | 27.8 | 21.7 | 23.9 |
| 2-shot CoT ChatGPT | - | Q | 20.2 | 18.6 | 21.3 |
| 2-shot CoT GPT-4 | - | Q | 32.1 | 31.5 | 32.4 |
| _Multi-modal LLMs_ | | | | | |
| Gemini-1.0-Pro | - | Q, I | 26.6 | 23.7 | 27.8 |
| Gemini-1.5-Pro | - | Q, I | 49.4 | 38.1 | 47.0 |
| GPT-4o | - | Q, I | 51.7 | 38.2 | 41.6 |
| GPT-4o-mini | - | Q, I | 42.6 | 29.8 | 28.4 |
| Qwen-VL-Max | - | Q, I | 35.5 | 30.7 | 42.5 |
| Qwen-VL-Plus | - | Q, I | 27.6 | 26.5 | 37.7 |
| Claude3.5-Sonnet | - | Q, I | 53.4 | 38.0 | 43.1 |
| Claude3-Opus | - | Q, I | 34.4 | 31.1 | 34.1 |
| GLM-4V | - | Q, I | 24.2 | 19.2 | 25.0 |
| Step-1V | - | Q, I | 28.1 | 23.5 | 25.0 |
| _Open Source Models_ | | | | | |
| _General Multi-modal LLMs_ | | | | | |
| mPLUG-Owl | LLaMA-7B | Q, I | 7.6 | 8.3 | 9.5 |
| LLaMA-Adapter-V2 | LLaMA-7B | Q, I | 9.6 | 10.3 | 10.8 |
| MiniCPM-Llama3-V2.5 | LLaMA3-8B | Q, I | 15.4 | 17.9 | 19.5 |
| LLaVA-1.5 | Vicuna-13B | Q, I | 15.5 | 15.2 | 18.8 |
| LLaVA-1.5 | Vicuna-7B | Q, I | 13.0 | 13.5 | 16.0 |
| DeepSeek-VL | DeepSeek-LLM-7B | Q, I | 8.3 | 16.8 | 21.0 |
| ShareGPT4V | Vicuna-7B | Q, I | 15.7 | 14.0 | 19.0 |
| ShareGPT4V | Vicuna-13B | Q, I | 16.4 | 14.9 | 18.4 |
| SPHINX-Plus | LLaMA2-13B | Q, I | 17.0 | 15.3 | 20.4 |
| InternLM-XC2 | InternLM2-7B | Q, I | 24.9 | 18.3 | 25.6 |
| InternVL-1.2-Plus | Nous-Hermes-2-Yi-34B | Q, I | 30.1 | 24.8 | 31.2 |
| InternVL-Chat-V1.5 | Mixtral 8*7B | Q, I | 26.9 | 20.8 | 23.7 |
| CogVLM | Vicuna-7B | Q, I | 16.7 | 14.5 | 17.0 |
| CogVLM2 | LLaMA-3-8B | Q, I | 23.2 | 14.4 | 21.0 |
| GLM-4V-9B | GLM-4-9B | Q, I | 24.7 | 19.3 | 22.5 |
Table 2: Results on the Chinese-language version of VisScience across the disciplines of mathematics, physics, and chemistry. For input, Q represents the question and I represents the image. The highest scores among closed-source and open-source models are highlighted in red and blue, respectively.
**4** **Related Works**
**4.1** **Multi-modal Reasoning Benchmarks**
Recently, the evaluation of multi-modal large language models (MLLMs) [39, 42, 6, 8, 46, 31, 32]
in various reasoning tasks has become increasingly crucial. Many benchmark datasets spanning several categories have been proposed for these tasks, such as MME [22], MMMU [50], MMBench [33], MMStar [14], SEED-Bench [28], and CMMMU [52], which evaluate models’ capabilities to apply
| Model | ALL | PlaneG | SolidG | Fun | Alg | Stat | Comb |
| --- | --- | --- | --- | --- | --- | --- | --- |
| _Closed Source Models (APIs)_ | | | | | | | |
| _Text-only LLMs_ | | | | | | | |
| Zero-shot ChatGPT | 22.40 | 20.18 | 11.93 | 18.33 | 13.63 | 15.25 | 26.98 |
| Zero-shot GPT-4 | 25.90 | 30.73 | 18.35 | 28.33 | 17.53 | 24.58 | 33.33 |
| Zero-shot Claude-2 | 27.30 | 27.06 | 25.69 | 25.83 | 31.17 | 31.36 | 25.40 |
| Zero-shot Claude3-Opus | 29.30 | 30.28 | 21.10 | 32.50 | 27.27 | 34.75 | 31.75 |
| Zero-shot Claude3.5-Sonnet | 29.70 | 33.94 | 15.60 | 33.33 | 27.27 | 27.12 | 34.92 |
| Zero-shot GPT-4o | 31.10 | 36.24 | 24.77 | 35.83 | 25.32 | 24.58 | 31.75 |
| 2-shot CoT Claude2 | 27.80 | 30.05 | 26.61 | 25.00 | 28.57 | 27.97 | 26.98 |
| 2-shot CoT ChatGPT | 20.20 | 23.17 | 20.18 | 19.17 | 17.53 | 22.88 | 14.29 |
| 2-shot CoT GPT-4 | 32.10 | 37.16 | 31.19 | 28.33 | 22.08 | 30.51 | 38.10 |
| _Multi-modal LLMs_ | | | | | | | |
| Gemini-1.0-Pro | 26.60 | 24.08 | 22.02 | 23.73 | 35.71 | 29.66 | 34.92 |
| Gemini-1.5-Pro | 49.40 | 48.74 | 33.03 | 47.06 | 61.69 | 55.93 | 52.38 |
| GPT-4o | 51.70 | 48.17 | 44.04 | 57.50 | 68.18 | 56.78 | 41.27 |
| GPT-4o-mini | 42.60 | 41.28 | 29.36 | 44.17 | 54.55 | 44.92 | 38.10 |
| Qwen-VL-Max | 35.50 | 34.86 | 27.52 | 35.83 | 50.00 | 33.05 | 26.98 |
| Qwen-VL-Plus | 27.60 | 27.98 | 18.35 | 29.17 | 31.17 | 35.59 | 20.63 |
| Claude3.5-Sonnet | 53.40 | 50.23 | 35.78 | 57.50 | 74.03 | 63.56 | 39.68 |
| Claude3-Opus | 34.40 | 35.31 | 24.77 | 29.17 | 45.45 | 35.59 | 31.75 |
| GLM-4V | 24.20 | 28.57 | 30.28 | 22.50 | 20.26 | 21.37 | 17.46 |
| Step-1V | 28.10 | 31.68 | 24.71 | 23.15 | 48.85 | 40.57 | 22.64 |
| _Open Source Models_ | | | | | | | |
| _General Multi-modal LLMs_ | | | | | | | |
| mPLUG-Owl | 7.60 | 6.19 | 10.09 | 5.00 | 12.34 | 7.63 | 7.94 |
| LLaMA-Adapter-V2 | 9.60 | 10.78 | 10.09 | 7.50 | 9.09 | 13.56 | 4.76 |
| MiniCPM-Llama3-V2.5 | 15.40 | 23.62 | 19.27 | 15.83 | 26.62 | 26.27 | 15.87 |
| LLaVA-1.5-13B | 15.50 | 15.83 | 15.60 | 12.50 | 18.83 | 14.41 | 14.29 |
| LLaVA-1.5-7B | 13.00 | 12.84 | 12.84 | 15.83 | 14.29 | 11.86 | 11.11 |
| DeepSeek-VL | 8.30 | 13.99 | 8.26 | 10.00 | 11.04 | 10.17 | 7.94 |
| ShareGPT4V-7B | 15.70 | 16.06 | 16.51 | 13.33 | 14.29 | 17.80 | 17.46 |
| ShareGPT4V-13B | 16.40 | 15.60 | 11.93 | 19.17 | 17.53 | 22.03 | 14.29 |
| SPHINX-Plus | 17.00 | 21.79 | 19.27 | 15.83 | 20.13 | 22.88 | 7.94 |
| InternLM-XC2 | 24.90 | 25.92 | 22.02 | 22.50 | 27.92 | 27.97 | 20.63 |
| InternVL-1.2-Plus | 30.10 | 34.40 | 25.69 | 30.00 | 29.87 | 26.27 | 23.81 |
| InternVL-Chat-V1.5 | 26.90 | 28.44 | 25.69 | 23.33 | 29.87 | 24.58 | 26.98 |
| CogVLM | 16.70 | 16.06 | 23.85 | 17.50 | 17.53 | 13.56 | 19.05 |
| CogVLM2 | 23.20 | 21.56 | 22.02 | 29.17 | 22.73 | 26.27 | 20.63 |
| GLM-4V-9B | 14.70 | 25.23 | 20.18 | 19.17 | 27.27 | 33.05 | 19.05 |
Table 3: Results on the mathematical part of VisScience across different subjects. Subjects: PlaneG: plane geometry, SolidG: solid geometry, Fun: functions and equations, Alg: algebraic operations, Stat: probability and statistics, Comb: combinatorial mathematics. The highest scores among closed-source and open-source models are highlighted in red and blue, respectively.
Figure 5: Cases of errors from GPT-4o in the disciplines of mathematics, physics, and chemistry.
logic and inference, mathematical reasoning, scientific reasoning, and agent-based reasoning. These
benchmark datasets provide comprehensive measurements of MLLMs’ capabilities in applying
specialized knowledge and decision-making in simulated environments. For instance, MMMU
covers university-level questions from six domains, which is utilized to assess MLLMs’ advanced
perception and reasoning abilities. CMMMU [52] evaluates models’ reasoning abilities across various
disciplines through bilingual multi-modal questions in Chinese and English. ScienceQA [36] is a specialized dataset designed to evaluate the capabilities of MLLMs, particularly
in the domain of scientific reasoning. It contains over 21,000 questions that span a wide range
of scientific disciplines, including physics, chemistry, biology, and earth science. Furthermore,
several benchmarks such as MathVista [37], MathVerse [53], and Math-Vision [45] are specially
designed to evaluate the mathematical reasoning capabilities of MLLMs. MathVista comprises 6,141
examples drawn from 31 multi-modal datasets, with a commonly used minitest subset containing
1,000 examples specifically designed to evaluate the mathematical reasoning capabilities of MLLMs.
MathVerse contains 2,612 high-quality, multi-subject mathematical problems covering key areas
like plane geometry, solid geometry, and functions. Each problem is transformed into six distinct
versions to provide varying levels of visual information: Text-dominant Version, Text-lite Version,
Text-only Version, Vision-intensive Version, Vision-dominant Version, and Vision-only Version.
Math-Vision is a carefully curated collection of 3,040 high-quality math problems, sourced from
real math competitions, which covers 16 distinct mathematical disciplines and is graded across
five levels of difficulty. While these benchmarks are valuable, they present limitations such as
an overemphasis on mathematics and a broad array of topics that often lack depth in science-related questions and exhibit uneven difficulty levels. Our dataset addresses these shortcomings by
providing 3,000 scientific reasoning questions across mathematics, physics, and chemistry, which is
collected from K12 education. Additionally, it includes bilingual questions in Chinese and English,
enriching the knowledge base and offering a more extensive range of difficulty levels to create a more
comprehensive evaluation platform.
**4.2** **Multi-modal Large Language Models**
Recently, the success of large language models (LLMs) [21, 51, 1, 23, 25, 7] has spurred the ongoing
development of multi-modal large language models (MLLMs). These MLLMs [32, 30, 46, 29, 19, 7]
expand upon traditional LLM capabilities by integrating the ability to process and analyze both
text and images. For instance, models like MiniGPT-4 [54] and InstructBLIP [19] utilize a trainable Q-Former or a linear layer to connect a frozen pretrained vision encoder and language model. Subsequently, LLaVA [32, 30] introduces visual instruction tuning, which achieves end-to-end fine-tuning of a large multi-modal model (LMM) comprising a visual encoder and a language model. Currently, closed-source MLLMs like Gemini [42], GPT-4V [39], Qwen-VL [8], and Claude3 [6] demonstrate impressive capabilities in general image understanding and scientific reasoning. In addition, the development of open-source multi-modal large language models continues to expand, providing an important complement to their closed-source counterparts. These open-source MLLMs, such as mPLUG-Owl [48, 49], LLaMA-Adapter-V2 [23], MiniCPM [26], LLaVA-1.5 [31], LLaVA-NeXT [30], DeepSeek-VL [34], ShareGPT4V [13], SPHINX [24], InternVL [15], InternVL 1.5 [16], InternLM-XComposer2 [20], and CogVLM [46], also achieve advanced performance, further enriching the landscape of the MLLM domain. Here, we utilize our specially curated benchmark to
evaluate these MLLMs across tasks in mathematics, physics, and chemistry. This comprehensive
evaluation aims to assess their capabilities in image understanding and scientific reasoning.
**5** **Conclusion**
In this paper, we introduce a comprehensive benchmark, VisScience, to evaluate the capabilities of
MLLMs in multi-modal scientific reasoning. VisScience comprises 3,000 questions across three
disciplines of mathematics, physics, and chemistry, spanning 21 subjects and 5 difficulty levels.
We conduct evaluations on VisScience with a total of 25 prominent models, including closed-source and open-source models. Experimental results demonstrate that closed-source MLLMs generally outperform open-source models, showing notably stronger capabilities in complex problem-solving and analytical reasoning. Notable models such as Claude3.5-Sonnet, GPT-4o, and Gemini-1.5-Pro exhibit superior performance across the three disciplines. Specifically, Claude3.5-Sonnet achieves an accuracy of 53.4% in mathematics, GPT-4o accomplishes a 38.2% accuracy in physics, and Gemini-1.5-Pro reaches an accuracy of 47.0% in chemistry. Although a gap remains between open-source models and the best-performing closed-source models, open-source models like InternVL-1.2-Plus exhibit competitive results. For instance, InternVL-1.2-Plus outperforms Gemini-1.0-Pro across all three disciplines. By providing a comprehensive and challenging set of questions across the three scientific disciplines, VisScience ensures a robust assessment of MLLMs’ ability in
scientific reasoning.
**References**
[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv
_preprint arXiv:2303.08774, 2023._
[2] Zhipu AI. Glm-4v, 2023. URL https://open.bigmodel.cn/dev/api#glm-4v.
[3] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint
_arXiv:2305.10403, 2023._
[4] Anthropic. Claude 2, 2023. URL https://www.anthropic.com/index/claude-2.
[5] Anthropic. Claude 3.5, 2023. URL https://www.anthropic.com/news/claude-3-5-sonnet.
[6] Anthropic. The claude 3 model family: Opus, sonnet, haiku. 2024. URL https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf.
[7] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han,
Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
[8] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and
Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint
_arXiv:2308.12966, 2023._
[9] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
_Advances in neural information processing systems, 33:1877–1901, 2020._
[10] Jie Cao and Jing Xiao. An augmented benchmark dataset for geometric question answering through dual
parallel text encoding. In Proceedings of the 29th International Conference on Computational Linguistics,
pages 1511–1520, 2022.
[11] Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric P Xing, and Liang Lin. Geoqa:
A geometric question answering benchmark towards multimodal numerical reasoning. arXiv preprint
_arXiv:2105.14517, 2021._
[12] Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin, Chongyu Chen, and Xiaodan Liang. Unigeo: Unifying
geometry logical reasoning via reformulating mathematical expression. arXiv preprint arXiv:2212.02746,
2022.
[13] Lin Chen, Jisong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin.
Sharegpt4v: Improving large multi-modal models with better captions. arXiv preprint arXiv:2311.12793,
2023.
[14] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang,
Yu Qiao, Dahua Lin, et al. Are we on the right way for evaluating large vision-language models? arXiv
_preprint arXiv:2403.20330, 2024._
[15] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang,
Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, and Jifeng Dai. Internvl: Scaling up vision
foundation models and aligning for generic visual-linguistic tasks. arXiv preprint arXiv:2312.14238, 2023.
[16] Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi
Hu, Jiapeng Luo, Zheng Ma, et al. How far are we to gpt-4v? closing the gap to commercial multimodal
models with open-source suites. arXiv preprint arXiv:2404.16821, 2024.
[17] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang,
Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic
visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern
_Recognition, pages 24185–24198, 2024._
[18] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language
modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.
[19] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang
Li, Pascale N Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with
instruction tuning. Advances in Neural Information Processing Systems, 36, 2024.
[20] Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke Ouyang, Xilin Wei, Songyang
Zhang, Haodong Duan, Maosong Cao, et al. Internlm-xcomposer2: Mastering free-form text-image
composition and comprehension in vision-language large model. arXiv preprint arXiv:2401.16420, 2024.
[21] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General
language model pretraining with autoregressive blank infilling. arXiv preprint arXiv:2103.10360, 2021.
[22] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin,
Jinrui Yang, Xiawu Zheng, et al. Mme: A comprehensive evaluation benchmark for multimodal large
language models. arXiv preprint arXiv:2306.13394, 2023.
[23] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui
He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint
_arXiv:2304.15010, 2023._
[24] Peng Gao, Renrui Zhang, Chris Liu, Longtian Qiu, Siyuan Huang, Weifeng Lin, Shitian Zhao, Shijie Geng,
Ziyi Lin, Peng Jin, et al. Sphinx-x: Scaling data and parameters for a family of multi-modal large language
models. arXiv preprint arXiv:2402.05935, 2024.
[25] Team GLM, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Diego Rojas, Guanyu Feng,
Hanlin Zhao, Hanyu Lai, et al. Chatglm: A family of large language models from glm-130b to glm-4 all
tools. arXiv preprint arXiv:2406.12793, 2024.
[26] Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang
Huang, Weilin Zhao, et al. Minicpm: Unveiling the potential of small language models with scalable
training strategies. arXiv preprint arXiv:2404.06395, 2024.
[27] Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. Are you smarter than a sixth grader? textbook question answering for multimodal machine
comprehension. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages
5376–5384, 2017. doi: 10.1109/CVPR.2017.571.
[28] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking
multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125, 2023.
[29] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training
with frozen image encoders and large language models. In International conference on machine learning,
pages 19730–19742. PMLR, 2023.
[30] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, January 2024. URL https://llava-vl.github.io/blog/2024-01-30-llava-next.
[31] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction
tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages
26296–26306, 2024.
[32] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural
_information processing systems, 36, 2024._
[33] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi
Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv
_preprint arXiv:2307.06281, 2023._
[34] Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren,
Zhuoshu Li, Yaofeng Sun, et al. Deepseek-vl: towards real-world vision-language understanding. arXiv
_preprint arXiv:2403.05525, 2024._
[35] Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Intergps: Interpretable geometry problem solving with formal language and symbolic reasoning. arXiv preprint
_arXiv:2105.04165, 2021._
[36] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter
Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question
answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022.
[37] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei
Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation
models in visual contexts. arXiv preprint arXiv:2310.02255, 2023.
[38] OpenAI. Chatgpt, 2022. URL https://openai.com/blog/chatgpt.
[39] OpenAI. Gpt-4v(ision) system card. Technical report, 2023. URL https://api.semanticscholar.org/CorpusID:263218031.
[40] OpenAI. Gpt-4o, 2024. URL https://openai.com/index/hello-gpt-4o/.
[41] StepFun. Step-1v, 2024. URL https://platform.stepfun.com/docs/llm/vision.
[42] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu
Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable
multimodal models. arXiv preprint arXiv:2312.11805, 2023.
[43] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation
language models. arXiv preprint arXiv:2302.13971, 2023.
[44] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and
fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[45] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. Measuring multimodal
mathematical reasoning with math-vision dataset. arXiv preprint arXiv:2402.14804, 2024.
[46] Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang,
Lei Zhao, Xixuan Song, et al. Cogvlm: Visual expert for pretrained language models. arXiv preprint
_arXiv:2311.03079, 2023._
[47] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba,
Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving
abilities of large language models. arXiv preprint arXiv:2307.10635, 2023.
[48] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu,
Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with
multimodality. arXiv preprint arXiv:2304.14178, 2023.
[49] Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Anwen Hu, Haowei Liu, Qi Qian, Ji Zhang, and Fei
Huang. mplug-owl2: Revolutionizing multi-modal large language model with modality collaboration. In
_Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13040–_
13051, 2024.
[50] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu
Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding
and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision
_and Pattern Recognition, pages 9556–9567, 2024._
[51] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi
Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414,
2022.
[52] Ge Zhang, Xinrun Du, Bei Chen, Yiming Liang, Tongxu Luo, Tianyu Zheng, Kang Zhu, Yuyang Cheng,
Chunpu Xu, Shuyue Guo, et al. Cmmmu: A chinese massive multi-discipline multimodal understanding
benchmark. arXiv preprint arXiv:2401.11944, 2024.
[53] Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu,
Kai-Wei Chang, Peng Gao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual
math problems? arXiv preprint arXiv:2403.14624, 2024.
[54] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing
vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592,
2023.
**A** **Dataset Details**
**A.1** **Question Length Distribution**
We provide both Chinese and English versions of the VisScience benchmark. The Chinese version
features an average of 162.85 words per question, with the longest question comprising 1,297 words.
Answers in this version average 20.93 words, with the longest reaching 112 words. Conversely, the
English version shows an average of 80.93 words per question, with the longest question spanning 418
words. Answers here average 12.3 words, with the most detailed answer containing 92 words. Figure 6
depicts the distribution of word counts, highlighting the diversity and complexity of questions.
Figure 6: The distributions of word counts per question in the (a) Chinese and (b) English versions of VisScience.
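For reference, the summary statistics plotted in Figure 6 can be computed with a few lines like the following; the counting rule is an assumption (whitespace word counts for the English version, and presumably character- or segmenter-based counts for the Chinese version).

```python
def length_stats(questions, count_fn=lambda text: len(text.split())):
    """Compute question-length statistics of the kind reported in Table 1.

    `count_fn` is an assumption: whitespace word counts are reasonable for
    English text, while Chinese text would need character counts or a word
    segmenter instead.
    """
    lengths = sorted(count_fn(q) for q in questions)
    return {
        "max": lengths[-1],
        "average": sum(lengths) / len(lengths),
        "median": lengths[len(lengths) // 2],
    }
```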
**A.2** **Detailed Description of Subjects**
VisScience consists of three disciplines: mathematics, physics, and chemistry. The mathematics section includes six subjects: _algebraic operations, combinatorial mathematics, functions and equations, probability and statistics, plane geometry, and solid geometry_. The physics section is composed of eight subjects: _mechanics, optics, modern physics, mechanical motion, electromagnetism, vibration and waves, comprehensive experiments and methods, and thermodynamics_. The chemistry section includes seven subjects: _chemical experiments, chemical reactions, inorganic chemistry, organic chemistry, electrochemistry, substance composition, and chemical equilibrium_. A more detailed introduction to these subjects is presented as follows:
**A.2.1** **Mathematics**
**Algebraic Operations. Algebraic operations include the manipulation of algebraic expressions,**
such as addition, subtraction, multiplication, and division. They are fundamental for solving algebraic
equations and inequalities and are widely applied across various fields of mathematics.
**Combinatorial Mathematics. Combinatorial mathematics studies the counting, arrangement, and**
combination of discrete structures, involving graph theory, number theory, and coding theory. It has
significant applications in computer science, optimization, and probability theory.
**Functions and Equations. Functions and equations are core parts of mathematics, dealing with**
relationships between variables and their representations. Functions are mappings between inputs
and outputs, while equations are equalities concerning these mappings. Mastering knowledge of
functions and equations is fundamental for solving many practical problems and is widely applied in
engineering, physics, and economics.
**Probability and Statistics. Probability and statistics study the laws of random events and methods**
of data analysis, including probability distributions, statistical inference, and data analysis techniques.
They have broad applications in scientific research, engineering, and economics.
**Plane Geometry. Plane geometry studies the shapes and figures in two-dimensional space, including**
points, lines, angles, and polygons. It is a fundamental part of mathematics education.
**Solid Geometry. Solid geometry involves the study of geometric shapes in three-dimensional space,**
including points, lines, surfaces, and polyhedra. It examines the properties, volumes, and surface
areas of these geometric bodies and is foundational for architecture, physics, and engineering.
**A.2.2** **Physics**
**Mechanics. Mechanics studies the motion of objects and the forces acting upon them, including**
classical mechanics, quantum mechanics, and relativistic mechanics. It is the foundation of physics
and is widely applied in engineering, astronomy, and materials science.
**Optics. Optics studies the properties of light and its interactions with matter, including reflection,**
refraction, interference, and diffraction. Optical technologies have broad applications in imaging,
communication, and laser technology.
**Modern Physics.** Modern physics includes theories developed since the 20th century, such as
quantum mechanics, relativity, and particle physics. These theories have expanded our understanding
of the fundamental laws of nature.
**Mechanical Motion. Mechanical motion studies the movement of objects under the influence of**
forces, including linear motion, rotational motion, and vibration. Understanding mechanical motion
is fundamental for the design and analysis of mechanical systems.
**Electromagnetism. Electromagnetism studies the interactions between electric and magnetic fields,**
including electrostatics, magnetic fields, and electromagnetic waves. It is the basis of modern physics
and electrical engineering.
**Vibration and Waves.** Vibration and waves study vibrating systems and wave phenomena,
including sound waves, light waves, and electromagnetic waves. They have broad applications
in communication, acoustics, and optical technologies.
**Comprehensive Experiments and Methods. Comprehensive experiments and methods involve**
using various experimental techniques and methods in physics teaching and research. They include
designing and conducting experiments to observe and analyze the effects of specific variables on
outcomes. Through comprehensive experiments, students can grasp the complexities of scientific
research, cultivate scientific reasoning abilities, and understand the meticulousness and uncertainties
of experimental work.
**Thermodynamics. Thermodynamics studies the processes of energy transformation and transfer,**
including the laws of thermodynamics, thermodynamic systems, phase transitions, and heat engines.
Thermodynamics is a fundamental aspect of both physics and engineering, with broad applications
in energy, environmental science, and materials science. By investigating the relationship between
internal and external energy of objects, thermodynamics reveals the basic principles of energy
conversion and transfer in nature, providing theoretical support for the development of modern
industrial technology.
**A.2.3** **Chemistry**
**Chemical Experiment.** Chemical experiments involve studying the properties and changes of
substances through experimental methods. Students learn to design experiments, observe chemical
reactions, collect and analyze data, and draw conclusions in chemical experiments. Chemical
experiments play a crucial role in understanding chemical theories and applying chemical knowledge.
**Chemical Reaction. Chemical reactions study the chemical changes between substances, including**
reaction types, mechanisms, and rates. Understanding chemical reactions is essential for predicting
and controlling chemical processes, which have wide applications in pharmaceutical manufacturing,
materials science, and environmental engineering.
**Inorganic Chemistry.** Inorganic chemistry studies the properties and reactions of non-carbon
elements and their compounds. It covers a wide range of topics from metals and non-metals to
transition metals and coordination compounds and is key to understanding the periodic table of
elements and chemical reaction mechanisms.
**Organic Chemistry. Organic chemistry studies the structure, properties, and reactions of carbon-**
containing compounds. It has significant applications in pharmaceutical chemistry, materials science,
and biochemistry.
**Electrochemistry. Electrochemistry studies the interconversion between electrical and chemical**
energy, including processes such as batteries, electrolysis, and electroplating. Electrochemistry has
important applications in energy storage, corrosion control, and electrochemical sensors.
**Substance Composition. Substance composition studies the chemical composition and structure of**
substances, including the arrangement of molecules, atoms, and ions. It has important applications in
chemistry, materials science, and biology.
**Chemical Equilibrium. Chemical equilibrium studies the behavior of chemical reactions when**
they reach a dynamic equilibrium state, including equilibrium constants, Le Chatelier’s principle,
and solubility equilibrium. Understanding chemical equilibrium is essential for predicting reaction
directions and optimizing chemical processes.
**B** **Dataset Case**
The VisScience dataset consists of 3,000 carefully selected high-quality questions, evenly distributed
across three disciplines: mathematics, physics, and chemistry, with each comprising 1,000 questions.
Each discipline within VisScience encompasses several subjects: mathematics includes six subjects,
physics contains eight subjects, and chemistry comprises seven subjects. To illustrate the diversity and
depth of VisScience, we provide more examples sampled from each discipline. In mathematics, the six
subjects (algebraic operations, combinatorial mathematics, functions and equations, probability and
statistics, plane geometry, and solid geometry) are illustrated in Figure 7 to Figure 12. Figure 13 to
Figure 20 demonstrate eight subjects within the physics section of VisScience, comprising mechanics,
optics, modern physics, mechanical motion, electromagnetism, vibrations and waves, comprehensive
experiments and methods, and thermodynamics. The chemistry section includes seven subjects:
chemical experiments, chemical reactions, inorganic chemistry, organic chemistry, electrochemistry,
substance composition, and chemical equilibrium, which are illustrated in Figure 21 to Figure 27.
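For reference, the subject taxonomy described above can be written down compactly. The snippet below is purely illustrative; the dictionary and its key names are hypothetical and are not taken from the released dataset files.

```python
# Subject taxonomy of VisScience as described above: 21 subjects across 3 disciplines,
# with 1,000 questions per discipline (3,000 questions in total).
VISSCIENCE_SUBJECTS = {
    "mathematics": [
        "algebraic operations", "combinatorial mathematics", "functions and equations",
        "probability and statistics", "plane geometry", "solid geometry",
    ],
    "physics": [
        "mechanics", "optics", "modern physics", "mechanical motion",
        "electromagnetism", "vibration and waves",
        "comprehensive experiments and methods", "thermodynamics",
    ],
    "chemistry": [
        "chemical experiment", "chemical reaction", "inorganic chemistry",
        "organic chemistry", "electrochemistry", "substance composition",
        "chemical equilibrium",
    ],
}

# 6 + 8 + 7 = 21 subjects in total.
assert sum(len(subjects) for subjects in VISSCIENCE_SUBJECTS.values()) == 21
```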
**C** **Evaluation Details**
**C.1** **The Sources of Models**
In Table 4, we present the sources of the models tested on VisScience.
**C.2** **Prompts**
We introduce the prompts used to guide models in generating responses in Chain-of-Thought (CoT)
settings and judging the LLMs’ answers. The specific prompts can be found in Table 5.
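To make the answer-judgment protocol concrete, the sketch below illustrates how a judge reply could be mapped to a verdict and how the relative-error rule for calculation problems works. It is a minimal illustration with hypothetical function names, not the evaluation code actually used for VisScience.

```python
import re

def parse_judgment(judge_reply: str) -> bool:
    """Map the judge model's free-form reply to a boolean verdict.

    The Answer Judgment prompt (Table 5) asks the judge to end its reply
    with [Consistent] or [Inconsistent]; we simply read the last such tag.
    """
    tags = re.findall(r"\[(Consistent|Inconsistent)\]", judge_reply)
    return bool(tags) and tags[-1] == "Consistent"

def numeric_consistent(model_answer: float, ground_answer: float, tol: float = 0.05) -> bool:
    """Relative-error rule for calculation problems: |model - ground| / |ground| < 0.05."""
    if ground_answer == 0:
        return abs(model_answer) < tol
    return abs(model_answer - ground_answer) / abs(ground_answer) < tol

# Example usage.
print(parse_judgment("Both answers choose option B. [Consistent]"))  # True
print(numeric_consistent(9.65, 9.8))  # True, relative error is about 1.5%
```

In practice the judge model applies these rules itself; a harness only needs to read the final tag to score a response.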
**D** **More Experimental Results**
**D.1** **Results on VisScience in English Version**
Table 6 reports a comprehensive comparison of various models on the VisScience benchmark in
the English version. The benchmark evaluates performance across three disciplines: mathematics,
physics, and chemistry. Among close-source models, GPT-4o demonstrates the highest performance
across two disciplines, achieving an accuracy of 53.6% in mathematics and 42.7% in physics.
However, Claude3.5-Sonnet surpasses GPT-4o in chemistry with a higher accuracy of 43.6%. Open-source models generally show lower performance compared to close-source counterparts. Notably,
| Model | Input | LLM Size | Source |
|---|---|---|---|
| _Closed Source Models_ | | | |
| _Text-only LLMs_ | | | |
| ChatGPT | _Q_ | - | [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5-turbo) |
| GPT-4 | _Q_ | - | [gpt-4](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) |
| Claude-2 | _Q_ | - | [claude-2](https://www.anthropic.com/api) |
| _Multi-modal LLMs_ | | | |
| Gemini-1.0-Pro | _Q, I_ | - | [gemini-pro](https://ai.google.dev/) |
| Gemini-1.5-Pro | _Q, I_ | - | [gemini-1.5-pro](https://ai.google.dev/) |
| GPT-4o | _Q, I_ | - | [gpt-4o](https://platform.openai.com/docs/models/gpt-4o) |
| Claude3-Opus | _Q, I_ | - | [claude-3-opus-20240229](https://www.anthropic.com/api) |
| Claude3.5-Sonnet | _Q, I_ | - | [claude-3-5-sonnet-2024620](https://www.anthropic.com/api) |
| Qwen-VL-Plus | _Q, I_ | - | [qwen-vl-plus](https://help.aliyun.com/zh/dashscope/developer-reference/vl-plus-quick-start) |
| Qwen-VL-Max | _Q, I_ | - | [qwen-vl-max](https://help.aliyun.com/zh/dashscope/developer-reference/vl-plus-quick-start) |
| GLM-4V | _Q, I_ | - | [glm-4v](https://open.bigmodel.cn/dev/api#glm-4v) |
| Step-1V | _Q, I_ | - | [step-1v](https://platform.stepfun.com/docs/llm/vision) |
| _Open Source Models_ | | | |
| _General Multi-modal LLMs_ | | | |
| mPLUG-Owl | _Q, I_ | 7B | [mPLUG-Owl](https://github.com/X-PLUG/mPLUG-Owl) |
| DeepSeek-VL | _Q, I_ | 7B | [deepseek-vl-7b-base](https://github.com/deepseek-ai/DeepSeek-VL) |
| LLaMA-Adapter-V2 | _Q, I_ | 7B | [LLaMA-Adapter V2](https://github.com/ml-lab/LLaMA-Adapter-2) |
| LLaVA-1.5 | _Q, I_ | 7B | [LLaVA-v1.5-7B](https://github.com/haotian-liu/LLaVA) |
| LLaVA-1.5 | _Q, I_ | 13B | [LLaVA-v1.5-13B](https://github.com/haotian-liu/LLaVA) |
| ShareGPT-4V | _Q, I_ | 7B | [ShareGPT4V-7B](https://huggingface.co/Lin-Chen/ShareGPT4V-7B) |
| ShareGPT-4V | _Q, I_ | 13B | [ShareGPT4V-13B](https://huggingface.co/Lin-Chen/ShareGPT4V-13B) |
| GLM-4v-9B | _Q, I_ | 7B | [GLM-4v-9B](https://huggingface.co/THUDM/glm-4v-9b) |
| SPHINX-Plus | _Q, I_ | 13B | [SPHINX-Plus](https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/SPHINX/README.md) |
| InternVL-Chat-V1.5 | _Q, I_ | 20B | [InternVL 1.5](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5) |
| InternVL-1.2-Plus | _Q, I_ | 34B | [InternVL-Chat-V1-2-Plus](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2-Plus) |
| InternLM-XC2 | _Q, I_ | 7B | [InternLM-XComposer2-VL-7B](https://huggingface.co/internlm/internlm-xcomposer2-vl-7b) |
| CogVLM | _Q, I_ | 17B | [CogVLM-17B](https://huggingface.co/THUDM/cogvlm-chat-hf) |
| CogVLM2 | _Q, I_ | 19B | [cogvlm2-llama3-chat-19B](https://huggingface.co/THUDM/cogvlm-chat-hf) |
| MiniCPM-Llama3-V-2_5 | _Q, I_ | 19B | [MiniCPM-Llama3-V 2.5](https://huggingface.co/THUDM/cogvlm-chat-hf) |
Table 4: The source of the models used in the evaluation.
|Task|Prompt|
|---|---|
|Response Generation|You are an exceptionally talented mathematics (physics/chemistry) instructor. Kindly furnish an elaborate, step-by-step solution to the question.|
|Answer Judgment|You are a highly skilled mathematics (physics/chemistry) teacher. I will provide you with a mathematics (physics/chemistry) problem, along with its ground answer and the model response from the model. Please determine whether the ground answer and the model response are consistent. Note that you do not need to judge the correctness of either answer, only whether they are consistent. If it is a multiple-choice question, both answers must choose the exact same option to be considered consistent. If it is a calculation problem, the relative error between the model response and the ground answer must be less than 0.05 to be considered consistent. If the problem has multiple sub-questions, each sub-question’s answer must be identical for consistency. If you find them consistent, please add [Consistent] at the end of your response. If you find them inconsistent, please add [Inconsistent] at the end of your response.|
Table 5: Prompts for response generation and answer judgment.
InternVL-1.2-Plus displays competitive performance, reaching up to 26.0% in mathematics, 23.6%
in physics, and 27.8% in chemistry. The English version of VisScience is designed to facilitate the
evaluation of MLLMs that specialize in English, assessing their capabilities in scientific reasoning.
**D.2** **Results on Physics Across Different Subjects**
Table 7 presents a detailed analysis of various models on VisScience across different subjects within
the physics section, which includes mechanics, electromagnetism, thermodynamics, comprehensive
experiments and methods, optics, vibration and waves, modern physics, and mechanical motion. The
table highlights that while GPT-4o exhibits the top performance on the entire physics discipline,
the best performance in individual subjects varies notably. For instance, Claude3.5-Sonnet excels
specifically in modern physics with an accuracy of 66.67%, significantly surpassing other close-source
models in this area. This variation in performance by subject underscores the specialized capabilities
of different models. Moreover, this detailed analysis provides more insights, emphasizing the need
for targeted improvements to achieve balanced performance across all physics subjects.
**D.3** **Results on Chemistry Across Different Subjects**
Table 8 presents a nuanced view of the performance of various models across different subjects
within the chemistry discipline of the VisScience benchmark. The chemistry discipline includes
chemical experiment, chemical reaction, inorganic chemistry, electrochemistry, organic chemistry,
chemical equilibrium, and substance composition. Notably, Gemini-1.5-Pro stands out among close-source models, excelling across the entire chemistry discipline. It demonstrates particular prowess
in organic chemistry and substance composition, achieving impressive accuracies of 57.02% and
61.16%, respectively. Additionally, Qwen-VL-Max leads in chemical experiment and inorganic
chemistry, achieving the highest accuracies of 46.28% and 51.94%, respectively. Open-source
models demonstrate a range of performances, with InternVL-1.2-Plus leading this group. It achieves
the highest open-source accuracy in nearly all subjects. This comprehensive review of model
performances within the chemistry section of the VisScience benchmark highlights the need to
enhance MLLMs’ capabilities in scientific domains, ensuring models are both accurate and adaptable
across various disciplines.
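The per-subject numbers in Tables 7 and 8 are accuracies, i.e., the percentage of questions within a subject whose model response is judged consistent with the ground answer. A minimal sketch of this aggregation is given below; the record format is hypothetical and only meant to illustrate the computation.

```python
from collections import defaultdict

def per_subject_accuracy(records):
    """Aggregate per-item verdicts into per-subject accuracies (in percent).

    `records` is a hypothetical list of (subject, is_correct) pairs, one per question.
    """
    totals, correct = defaultdict(int), defaultdict(int)
    for subject, is_correct in records:
        totals[subject] += 1
        correct[subject] += int(is_correct)
    return {subject: 100.0 * correct[subject] / totals[subject] for subject in totals}

# Example usage with toy data.
toy = [("mechanics", True), ("mechanics", False), ("optics", True)]
print(per_subject_accuracy(toy))  # {'mechanics': 50.0, 'optics': 100.0}
```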
**E** **Case Study**
The VisScience dataset includes three disciplines: mathematics, physics, and chemistry. The mathematical section comprises 6 subjects, the chemistry section contains 7 subjects, and the physics
Model LLM Input Mathematics Physics Chemistry
_Close Source Models (APIs)_
_Text-only LLMs_
Zero-shot ChatGPT - _Q_ 17.4 20.7 25.2
Zero-shot GPT-4 - _Q_ 29.9 37.7 38.7
Zero-shot Claude-2 - _Q_ 24.6 22.7 25.6
Zero-shot Claude3-Opus - _Q_ 21.7 15.8 29.4
Zero-shot Claude3.5-Sonnet - _Q_ 27.2 35.7 35.2
Zero-shot GPT-4o - _Q_ 35.2 40.3 42.5
2-shot CoT Claude2 - _Q_ 25.7 21.9 24.1
2-shot CoT ChatGPT - _Q_ 24.4 20.1 22.1
2-shot CoT GPT-4 - _Q_ 36.5 39.0 38.1
_Multi-modal LLMs_
Gemini-1.0-Pro - _Q, I_ 26.4 39.1 27.9
Gemini-1.5-Pro - _Q, I_ 47.8 35.1 39.1
GPT-4o - _Q, I_ 53.6 42.7 43.3
GPT-4o-mini - _Q, I_ 43.2 33.7 34.9
Qwen-VL-Max - _Q, I_ 30.7 26.4 36.3
Qwen-VL-Plus - _Q, I_ 21.9 20.9 29.7
Claude3.5-Sonnet - _Q, I_ 50.8 36.6 43.6
Claude3-Opus - _Q, I_ 34.4 29.4 34.7
GLM-4V - _Q, I_ 23.1 18.5 23.4
Step-1V 7B _Q, I_ 32.0 19.5 27.6
_Open Source Models_
_General Multi-modal LLMs_
mPLUG-Owl LLaMA-7B _Q, I_ 7.4 12.3 12.3
LLaMA-Adapter-V2 LLaMA-7B _Q, I_ 12.6 11.4 16.2
MiniCPM-Llama3-V2.5 LLaMA3-8B _Q, I_ 24.4 20.6 24.4
LLaVA-1.5 Vicuna-13B _Q, I_ 15.0 17.4 21.1
LLaVA-1.5 Vicuna-7B _Q, I_ 17.4 16.6 18.9
DeepSeek-VL DeepSeek-LLM-7B _Q, I_ 16.0 16.9 17.8
ShareGPT-4V Vicuna-7B _Q, I_ 14.7 17.7 21.3
ShareGPT-4V Vicuna-13B _Q, I_ 14.5 16.0 20.2
SPHINX-Plus LLaMA2-13B _Q, I_ 17.9 15.7 22.4
InternLM-XC2 InternLM2-7B _Q, I_ 20.7 20.5 25.0
InternVL-1.2-Plus Nous-Hermes-2-Yi-34B _Q, I_ 26.0 23.6 27.8
InternVL-Chat-V1.5 Mixtral 8*7B _Q, I_ 24.9 23.0 25.9
CogVLM Vicuna-7B _Q, I_ 18.5 15.9 23.1
CogVLM2 LLaMA-3-8B _Q, I_ 24.2 16.6 24.9
GLM-4V-9B GLM-4-9B _Q, I_ 24.7 19.2 23.9
Table 6: Results on VisScience in the English version across the disciplines of mathematics, physics,
and chemistry. The highest scores among close-source and open-source models are highlighted in red
and blue, respectively.
Physics
Model
ALL Mech Ele Therm Comp Opt Vib & Waves Mod Phys Mech Motion
_Close Source Models (APIs)_
_Text-only LLMs_
Zero-shot ChatGPT 22.70 22.08 19.94 23.53 4.62 40.98 29.79 19.05 23.33
Zero-shot GPT-4 30.40 34.26 30.21 33.33 15.38 40.98 34.04 42.86 20.00
Zero-shot Claude-2 22.00 24.62 23.56 25.49 12.31 27.87 21.28 28.57 23.33
Zero-shot Claude3-Opus 30.80 34.26 32.02 33.33 10.77 39.34 31.91 42.86 10.00
Zero-shot Claude3.5-Sonnet 35.30 40.36 35.95 35.29 15.38 40.98 34.04 47.62 26.67
Zero-shot GPT-4o 38.00 43.91 38.67 45.10 9.23 49.18 38.30 52.38 23.33
2-shot CoT Claude2 21.70 24.87 22.96 25.49 10.77 18.03 23.40 28.57 10.00
2-shot CoT ChatGPT 18.60 20.30 20.54 13.73 12.31 22.95 23.40 23.81 13.33
2-shot CoT GPT-4 31.50 35.03 32.02 37.25 12.31 44.26 29.79 47.62 23.33
_Multi-modal LLMs_
Gemini-1.0-Pro 23.70 26.97 23.03 17.65 6.15 31.15 34.04 19.05 10.00
Gemini-1.5-Pro 38.10 46.56 33.74 47.06 20.00 45.00 34.04 52.38 43.33
GPT-4o 38.20 41.37 39.27 56.86 23.08 42.62 36.17 42.86 43.33
GPT-4o-mini 29.80 31.73 30.51 29.41 10.70 36.07 19.15 47.62 30.00
Qwen-VL-Max 30.70 36.13 26.59 39.22 9.23 34.43 31.91 28.57 30.00
Qwen-VL-Plus 26.50 31.04 24.77 33.33 6.15 36.07 36.17 23.81 16.67
Claude3.5-Sonnet 38.00 41.62 36.56 43.14 13.85 44.26 38.30 66.67 30.00
Claude3-Opus 31.10 33.25 29.91 39.22 12.31 45.90 34.04 61.90 23.33
GLM-4V 19.20 23.16 17.82 15.69 12.31 25.00 17.02 19.05 23.33
Step-1V 23.50 21.55 24.35 28.57 7.84 12.82 25.00 31.25 39.13
_Open Source Models_
_General Multi-modal LLMs_
mPLUG-Owl 8.30 11.93 8.46 1.96 4.62 8.20 10.64 4.76 10.00
LLaMA-Adapter-V2 10.30 10.41 10.88 8.00 4.84 13.11 25.53 14.29 3.33
MiniCPM-Llama3-V2.5 17.90 21.57 19.64 15.69 6.15 26.23 19.15 9.52 23.33
LLaVA-1.5-13B 15.20 17.26 14.80 7.84 7.69 21.31 17.02 9.52 16.67
LLaVA-1.5-7B 13.50 15.28 15.12 11.76 3.12 15.25 15.56 5.26 17.24
DeepSeek-VL 16.80 18.77 19.33 13.73 7.69 16.67 13.04 19.05 3.45
ShareGPT4V-7B 14.00 13.71 15.41 9.80 3.08 19.67 19.15 28.57 6.67
ShareGPT4V-13B 14.90 15.23 16.92 9.80 6.15 14.75 19.15 19.05 16.67
SPHINX-Plus 15.30 16.50 18.43 17.65 4.62 11.48 12.77 19.05 13.33
InternLM-XC2 18.30 20.81 17.82 13.73 10.77 26.23 21.28 14.29 6.67
InternVL-1.2-Plus 24.80 29.69 22.94 29.41 12.31 31.67 25.53 35.00 10.00
InternVL-Chat-V1.5 20.80 23.97 20.87 23.53 9.23 25.42 17.02 14.29 17.24
CogVLM 14.50 18.02 13.29 7.84 6.15 14.75 19.15 19.05 6.67
CogVLM2 14.40 16.75 16.00 12.00 6.15 13.11 19.15 4.76 10.00
GLM-4V-9B 19.30 21.78 21.12 24.00 4.62 25.42 15.91 15.00 13.33
Table 7: Results on the physics part of VisScience across different subjects. Subjects: Mech:
mechanics, Ele: electromagnetism, Therm: thermodynamics, Comp: comprehensive experiments and
methods, Opt: optics, Vib & Waves: vibration and waves, Mod Phys: modern physics, Mech Motion:
mechanical motion. The highest scores among close-source and open-source models are highlighted
in red and blue, respectively.
section includes 8 subjects, culminating in a total of 21 distinct subjects across the VisScience
benchmark. Here, we present one question from each subject, along with its standard answer and the
correct response provided by GPT-4o. Figure 28 to Figure 30 demonstrate cases of the mathematical
part of VisScience. Figure 31 to Figure 34 illustrate sampled questions from the physics section of
the VisScience benchmark. Each figure provides insight into the diverse range of topics covered,
showing GPT-4o’s capabilities to handle complex physical principles and calculations. Figure 35 to
Figure 38 display examples from the chemistry section. These examples not only demonstrate the
diversity of the VisScience benchmark within chemistry but also illustrate how effectively GPT-4o
can generate accurate responses across different scientific subjects.
Chemistry
Model
ALL Chem Exp Chem React Inorg Chem Electrochem Org Chem Chem Equil Sub Comp
_Close Source Models (APIs)_
_Text-only LLMs_
Zero-shot ChatGPT 18.60 26.35 23.86 23.26 23.75 35.43 24.64 30.89
Zero-shot GPT-4 33.10 40.54 30.68 38.76 32.50 36.22 30.43 31.71
Zero-shot Claude-2 24.40 24.32 26.36 31.71 23.86 20.29 0.30 25.98
Zero-shot Claude3-Opus 32.50 37.16 30.68 31.78 31.25 36.22 30.43 39.84
Zero-shot Claude3.5-Sonnet 36.90 34.80 36.93 39.53 46.25 45.67 23.19 47.15
Zero-shot GPT-4o 39.60 42.57 40.34 44.96 35.00 41.73 26.09 54.47
2-shot CoT Claude2 23.90 23.99 26.70 22.48 30.00 26.77 27.54 27.64
2-shot CoT ChatGPT 21.30 19.93 23.30 20.93 22.50 22.83 26.09 30.08
2-shot CoT GPT-4 32.40 29.05 32.39 32.56 32.50 42.52 28.99 53.66
_Multi-modal LLMs_
Gemini-1.0-Pro 27.80 24.03 26.70 26.36 31.25 35.54 31.82 37.19
Gemini-1.5-Pro 47.00 43.46 47.43 51.59 50.00 57.02 35.29 61.16
GPT-4o 41.60 43.58 46.02 38.76 46.25 43.31 43.48 50.41
GPT-4o-mini 28.40 22.30 27.27 27.13 30.00 34.65 20.29 42.09
Qwen-VL-Max 42.50 46.28 41.48 51.94 35.00 41.73 36.23 53.66
Qwen-VL-Plus 37.70 33.78 40.34 44.19 41.25 48.03 33.33 41.80
Claude3.5-Sonnet 43.10 40.54 41.48 42.64 50.00 42.52 33.33 59.35
Claude3-Opus 34.10 35.47 30.11 31.78 31.25 40.16 33.33 51.22
GLM-4V 25.00 23.65 25.86 21.71 28.75 27.78 31.88 32.52
Step-1V 25.00 32.51 27.48 25.26 25.45 17.72 13.33 21.95
_Open Source Models_
_General Multi-modal LLMs_
mPLUG-Owl 9.50 7.77 11.36 7.75 12.50 12.60 13.04 9.76
LLaMA-Adapter-V2 10.80 7.77 13.64 8.53 12.66 12.80 17.65 17.07
MiniCPM-Llama3-V2.5 19.50 20.96 26.29 26.61 18.18 24.00 28.79 30.83
LLaVA-1.5-13B 18.80 15.54 16.48 24.03 20.00 22.05 23.19 19.51
LLaVA-1.5-7B 16.00 13.49 17.14 19.20 16.25 20.49 26.09 10.74
DeepSeek-VL 21.00 18.84 20.57 20.16 21.25 23.62 36.76 20.66
ShareGPT4V-7B 19.00 13.85 19.32 26.36 18.75 23.62 28.99 15.45
ShareGPT4V-13B 18.40 13.51 21.02 19.38 23.75 22.83 13.04 19.51
SPHINX-Plus 20.40 20.27 21.02 24.03 22.50 22.83 27.54 21.95
InternLM-XC2 25.60 22.64 27.27 26.36 21.25 33.86 26.09 24.39
InternVL-1.2-Plus 31.20 22.29 31.82 33.58 31.46 39.57 32.47 38.84
InternVL-Chat-V1.5 23.70 20.07 25.00 25.20 22.37 28.80 25.00 28.46
CogVLM 17.00 15.54 20.45 10.85 16.25 22.05 20.29 17.07
CogVLM2 21.00 13.10 21.39 25.78 20.51 31.45 22.73 30.17
GLM-4V-9B 22.50 21.00 25.44 26.23 23.08 26.83 17.39 25.83
Table 8: Results on the chemistry part of VisScience across different subjects. Subjects: Chem
Exp: chemical experiment, Chem React: chemical reaction, Inorg Chem: inorganic chemistry,
Electrochem: Electrochemistry, Org Chem: organic chemistry, Chem Equil: chemical equilibrium,
and Sub Comp: substance composition. The highest scores among close-source and open-source
models are highlighted in red and blue, respectively.
**F** **Error Case**
We conduct rigorous tests on a series of open-source and close-source models on VisScience and
perform a detailed analysis of the models’ responses. These errors in the models’ answers can be
classified into five categories: reasoning error, vision recognition error, knowledge error, calculation
error, and question misunderstood error. We present examples of these five error types across the
disciplines of mathematics, physics, and chemistry, with a specific focus on errors made by GPT-4o.
Additionally, we demonstrate error examples from other representative close-source models such as
GLM-4V, Qwen-VL-Max, and Claude 3.5, as well as open-source models like LLaVA-1.5, GLM-4V-9B,
and InternVL-Chat-1.5. Notably, the types of errors made by these models
in response to the same questions can differ from those made by GPT-4o. This analysis helps to
underline the varied challenges faced by different models in processing complex scientific questions,
providing insight into their respective strengths and limitations. Figure 39 to Figure 48 demonstrate
cases of errors from representative models in the mathematical part of VisScience. Figure 49 to
Figure 61 show the incorrect answers in the physics section. Figure 62 to Figure 70 demonstrate the
errors in the chemistry section.
Figure 7: Cases of algebraic operations in mathematical part of VisScience.
Figure 8: Cases of combinatorial mathematics in mathematical part of VisScience.

Figure 9: Cases of functions and equations in mathematical part of VisScience.

Figure 10: Cases of probability and statistics in mathematical part of VisScience.

Figure 11: Cases of plane geometry in mathematical part of VisScience.

Figure 12: Cases of solid geometry in mathematical part of VisScience.
**_Question_**
As shown in the figure, three identical flat-topped cars a, b, and c are aligned in a straight line and rest on a smooth
horizontal surface. A child on car c jumps to car b, and then immediately jumps from car b to car a with the same
horizontal speed relative to the ground. After jumping onto car a, the child remains stationary relative to car a.
Thereafter ( )
A:a and c have equal speeds
B:a and b have equal speeds
C:The speed relationship of the three cars is v_c > v_a>v_b
D:a and c move in opposite directions
**_Answer_**
CD
**_Question_**
As shown in the figure, in a vertical plane, a positively charged small ball is tied to
one end of an inextensible light string of length L, with the other end of the string
fixed at point O. They are in a uniform electric field, with the direction of the field
being horizontal to the right and the field strength being E. It is known that the force
exerted by the electric field on the small ball equals the gravitational force on the
small ball. Now, the small ball is first pulled to the position P_{1} in the figure,
making the light string straight and parallel to the field direction, and then the ball is
released from rest. It is known that when the small ball passes the lowest point, its
vertical velocity component becomes zero instantaneously due to the tension in the
string, while the horizontal component remains unchanged (ignoring air resistance).
The tension T in the string when the small ball reaches point P_{2} at the same
height as point P_{1} is ( )
A:2mg
B:3mg
C:4mg
D:5mg
**_Answer_**
B
**_Question_**
As shown in the figure, a bar magnet is placed parallel to the smooth horizontal edge
AB of the table, with a vertical block at end A. A horizontal spring connects the
block and the magnet. Currently, a straight conductor CD, with a downward current,
is placed vertically in front of the magnet and offset towards the South pole with a
small gap between them. Within a short time after placing CD, the following
judgments are correct:
A: The length of the spring will increase
B: The length of the spring will decrease
C: The magnet will move outward on the table, closer to the straight conductor CD
D: The magnet will move inward on the table, away from the straight conductor CD
**_Answer_**
BD
**_Question_**
A group used the illustrated equipment to measure the gravitational
acceleration. The experimental apparatus consists of a base with a
vertical rod marked with a scale, photoelectric timers A and B, a steel
ball, and a net pouch. By measuring the average speed of the ball
between A and B for different displacements, the gravitational
acceleration can be calculated.
Answer the following questions:
(1) In the experiment, keep A fixed, move B down the rod, measure the
distance h between A and B and the time t the steel ball takes to cover
this distance. After multiple experiments, a graph of h/t versus t is
plotted as shown. From the graph, it can be seen that the gravitational
acceleration g is related to the slope k of the graph by the equation g =,
and the magnitude of the gravitational acceleration is m/s²;
(2) If another group uses the same experimental setup, keeping B fixed
while moving A up the rod, they (fill in "can" or "cannot") measure
the gravitational acceleration using the above method;
(3) To reduce the measurement error of the gravitational acceleration,
what methods can be used? (Propose one method).
**_Answer_**
①2k, 9.60~9.80;
②Can;
③The distance between AB should be as large as possible, and the volume of the steel ball should be as small as
possible, etc.
Figure 13: Cases of mechanics in physics part of VisScience.
Figure 14: Cases of optics in physics part of VisScience.

Figure 15: Cases of modern physics in physics part of VisScience.
**_Question_**
In the four images shown in, the one that represents an object in uniform accelerated linear motion is ( )
A B C D
**_Answer_**
C
**_Question_**
As shown in the figure is the v-t graph of an elevator moving upward, the
maximum speed of the object's motion equals m/s, and the height of
uniform ascent equals ( ).
**_Answer_**
From the graph, it can be seen that the elevator first accelerates uniformly upwards, then moves upwards at a
constant speed, and finally decelerates uniformly upwards, with a maximum speed of 6m/s.
The height of the constant speed upward motion h = 6×4m = 24m.
Therefore, the answer is: 6, 24m.
**_Question_**
To ensure orderly and safe highway traffic, many traffic signs are placed along
the roadside.
As shown in the picture, image A is a speed limit sign indicating the maximum
allowed speed is 80 km/h; image B is a route indicator sign, showing the
distance of 100 km to Hangzhou. The physical meanings of the two data
mentioned are ( )
A: 80 km/h is average speed
B: 80 km/h is instantaneous speed
C: 100 km is displacement
D: 100 km is distance
**_Answer_**
Solution: The maximum allowable speed represents the speed at a certain position, which is the instantaneous speed, so
80 km/h refers to the instantaneous speed;
There are still 100 km to Hangzhou, and 100 km is the length of the trajectory, which is the distance. Therefore, BD is
correct, and AC is incorrect.
The answer is: BD
**_Question_**
The value of the area enclosed by the velocity-time graph of a particle's
straight-line motion and the coordinate axes is equal to the
displacement of the particle during that period. This holds true not only
for uniform linear motion and uniformly accelerated linear motion but
also for any variable motion. The figure shows the velocity-time graph
of a certain particle during the 0-4s interval of straight-line motion.
According to the graph, the displacement of the particle during this
period is closest to ( )
A:15m
B:13m
C:11m
D:7m
**_Answer_**
Solution: The area enclosed by the graph line and the time axis is approximately 15 squares, so the displacement is:
x=15×1×1m=15m.
Thus, the answer is: A.
Figure 16: Cases of mechanical motion in physics part of VisScience.
**_Question_**
As shown in the figure, when an electron beam passes through the two magnetic
poles in the direction indicated by the arrow in the figure, its deflection direction is
( )
A: Upward
B: Downward
C: Toward the N pole
D: Toward the S pole
**_Answer_**
B
**_Question_**
As shown in the figure, a positively charged particle enters a uniform magnetic field
horizontally to the right, neglecting gravity. Regarding the motion of this charged
particle after it enters the magnetic field, the following judgments are correct: ( )
A: The particle deflects upwards
B: The particle deflects downwards
C: The particle does not deflect
D: The particle quickly stops moving
**_Answer_**
A
**_Question_**
As shown in the figure, there is a uniform magnetic field with a magnetic induction
intensity B=1T. A wire with a length of L=0.1 m is placed perpendicularly to the
direction of the magnetic field. When a horizontal current of I=0.5A flows to the
right through the wire,
(1) Determine whether the direction of the Ampere force on the wire is vertically
upward or downward;
(2) Calculate the magnitude of the Ampere force F on the wire.
**_Answer_**
Solution: (1)According to the left-hand rule, it can be determined that the direction of the Ampere force on the wire is
vertically upward.
(2)Ampere force on the wire: F=BIL=1×0.5×0.1N=0.05N
**_Question_**
As shown in Figure A, a uniform magnetic field B=0.5T exists in
space, directed vertically downward. MN and PQ are rough, parallel,
long straight rails in the same horizontal plane, with a distance L=0.2m
between them; R is a resistor connected at one end of the rails. The ab
is a conducting rod with a mass of m=0.1kg bridged across the rails.
From time zero, a small motor exerts a horizontal leftward traction
force F on rod ab, causing it to start moving from rest and accelerating
along the rails. During this process, the rod always remains
perpendicular to and in good contact with the rails. Figure B shows the
v-t graph of the rod, where segment OA is a straight line, and AC is a
curve. The small motor reaches its rated power P=4.5W at the end of
12s, after which it maintains constant power. The resistance of other
parts except R is negligible, and g=10m/s^{2}.
(1) Find the acceleration of the conductor rod ab during the 0-12s
interval;
(2) Find the coefficient of kinetic friction between rod ab and the rails
and the value of the resistor R;
(3) Draw the graph of the traction force's power versus time (P-t) on
the answer sheet.
**_Answer_**
The acceleration is 0.75m/s^{2}; the dynamic friction coefficient is 0.2; the resistance value is 0.4 ohms.
The graph is a parabola passing
through the origin from 0 to 12 seconds (but the vertex of the parabola is not the origin). After 12 seconds, the power
is constant.
Figure 17: Cases of electromagnetism in physics part of VisScience.
**_Question_**
M and N are two points in the direction of wave propagation in the medium, with a distance s = 1.5 m, and their vibration
images are shown in the figure. The possible values for the wave speed are( ):
A:15 m/s
B:7.5 m/s
C:5 m/s
D:3 m/s
**_Answer_**
ACD
**_Question_**
As shown in the figure, the image represents a mechanical wave propagating along
the x-axis at a certain moment. From the figure, it can be seen that the amplitude A
and wavelength λ of this wave are respectively ( )
A:A=2cm,λ=4m
B:A=2cm,λ=8m
C:A=4cm,λ=4m
D:A=4cm,λ=8m
**_Answer_**
According to the definition of amplitude: Amplitude A is the maximum distance the vibrating object moves away
from the equilibrium position, read the maximum value of y as A=2cm. Based on the fact that wavelength equals
the distance between two adjacent wave crests or troughs, read the wavelength λ=4m.
So choose B.
**_Question_**
Starlight Middle School has a 150m×70m playground. Broadcast speakers
S_1 and S_2 are set up at both ends of the playground (as shown in the
picture). Teacher Wu walked around the playground to test the sound and
found that at points A, B, C, and D (which are the midpoints of each side),
the sound was indistinct at two places. These two places are ( ).
A: B and D
B: A and B
C: C and D
D: A and C
**_Answer_**
Solution: From the diagram, it is evident that B and D are equidistant from both sound sources. The sound reaches them
simultaneously without interference, resulting in a louder perception. In contrast, A and C are at noticeably different
distances from the two sound sources, causing interference between the sounds, making them harder to hear clearly.
Therefore, the answer is D.
**_Question_**
There are two vibrating sources A and B on the water surface, vibrating
in exactly the same way. On the perpendicular bisector of the line
connecting A and B, there are three points a, b, and c. At a certain
moment, point a is the convergence point of the crests of the two
waves, and point c is the nearest convergence point of the troughs of
the two waves from point a. Point b is located on the line connecting
points a and c, as shown in the figure. Which of the following
statements is correct? ( )
A: Both a and c are points of constructive interference, and b is a point
of destructive interference.
B: a is a point of constructive interference, and c is a point of
destructive interference.
C: Both a and c are currently points of constructive interference, but
after some time, they will become points of destructive interference.
D: a, b, and c are all points of constructive interference.
**_Answer_**
Solution: From the question, A and B are two sources of vibration with identical conditions. Points a, b, and c lie on
the perpendicular bisector of the line AB, meaning the distances from a, b, and c to the two wave sources are equal,
resulting in a path difference of zero. The vibrations at these three points all reinforce each other.
Therefore, the
answer is: D
Figure 18: Cases of vibration and waves in physics part of VisScience.
**_Question_**
A vernier caliper with 20 divisions was used to measure the diameter of a small ball during an experiment. The reading is
shown in the figure, and the diameter of the small ball is d = ( ) mm.
**_Answer_**
Solution: The main scale reading is 14mm, and the vernier scale reading is
0.05×3=0.15mm, so the final reading is 14.15mm.
**_Question_**
Using the ruler shown in the figure to measure the length of the object, the smallest
scale of the ruler is millimeters, and the measured length of the object is centimeters.
**_Answer_**
Solution: As shown in the diagram, the smallest division value of the ruler is 1mm. The initial reading of the
measured object is: 6.00cm, and the final reading is: 8.53cm. Hence, the length of the object is: 2.53cm.
Therefore, the answer is: 1; 2.53.
**_Question_**
In the experiment to measure the rated power of a 3.8V bulb, it is known that the
rated power of the bulb is approximately 1.5W. Besides the bulb, ammeter,
voltmeter, switch, and wires, the teacher also provides:
A. 6V power supply;
B. 4V power supply;
C. A variable resistor of "5Ω 2A";
D. A variable resistor of "100Ω 0.1A".
(1) To accurately measure the rated power of the bulb, the variable resistor you
choose is, and the power supply you choose is (fill in the letter).
(2) When the bulb is glowing normally, the ammeter pointer is shown as in Figure.
Please calculate the actual rated power W and the resistance of the bulb Ω.
**_Answer_**
(1) C, B;
(2) 1.67, 8.6.
**_Question_**
Measure the resistance value of an unknown resistor.
(1) A student first roughly measures the resistance using a multimeter. The
multimeter's switch is set to the ×10Ω range. After zero adjustment, the red and black
probes are connected to the resistor's terminals, and the pointer reading is as shown in
the figure. The measured resistance value is ________Ω.
(2) The student then plans to accurately measure the resistance using the VA method.
The provided experimental equipment includes: 8V DC power supply; voltmeter
(0-10V, internal resistance about 20kΩ); ammeter (0-50mA, internal resistance about
10Ω); sliding rheostat (0-20Ω, 1A); switch, and wires. Based on the experimental
requirements and the provided equipment, refer to the partially incomplete physical
circuit below and draw the experimental circuit diagram in the dashed box below,
completing the unconnected wires.
(3) In the experiment mentioned in (2), after connecting the circuit and closing the
switch, the student found that both the ammeter and voltmeter had no readings. The
student used a multimeter to check for circuit faults. The operations were as follows:
The multimeter was set to the DC voltage range, and the red and black probes were
connected to: the positive and negative terminals of the power supply; the two ends of
the rheostat's resistance wire; between the "-" terminal of the ammeter and the "+"
terminal of the voltmeter. The multimeter's pointer deflected in all cases, indicating
that the wire connected between ____________ might have broken.
(4) In the experiment, the student moves the rheostat's sliding head, records multiple
sets of ammeter and voltmeter readings (U, I), and plots the U-I curve on graph paper.
In this experiment, the measured value is ________ the true value. (fill in ">", "=" or
"<")
**_Answer_**
(1) 200
(2) As shown in the figure
(3) Connect the ammeter and voltmeter
(4) <
Figure 19: Cases of comprehensive experiments and methods in physics part of VisScience.
**_Question_**
Using two identical electric heaters to heat substances A and water, both with a mass of 2kg, the relationship between their
temperature and time is shown in the diagram. Based on this, determine the amount of heat absorbed by substance A in 10
minutes ( ).
A:5.04×10^5 J
B:4.2×10^5 J
C:2.52×10^5 J
D:Insufficient conditions, cannot be calculated
**_Answer_**
C
**_Question_**
In the experiment of exploring the "boiling law of water," Xiaoming recorded the
temperature every 30s after the water temperature rose to 90°C, and then plotted the
temperature versus time graph, as shown in the figure. If the thermometer used by
Xiaoming is accurate, it can be concluded from the graph that the boiling
temperature of water is ℃, which indicates that the atmospheric pressure at that time
is 1 standard atmosphere (choose "greater than," "less than," or "equal to").
**_Answer_**
From the figure, it can be seen that the boiling point of water is 98℃. The boiling point of water under standard
atmospheric pressure is 100℃. The boiling point of a liquid decreases as the pressure decreases, therefore the
atmospheric pressure at that time is lower than 1 standard atmospheric pressure.
So the answer is: 98; lower.
**_Question_**
For the experiment shown in the figure, the following statements are correct ():
A: In the first diagram, the copper tube's temperature increases when rubbed with a
rubber band. This shows that the internal energy of the copper tube increases
through work done on it.
B: In the second diagram, the steam pushes out the rubber stopper, indicating that
the steam does work on the rubber stopper, causing the internal energy of the steam
to increase.
C: In the third diagram, ink diffuses significantly more in hot water than in cold
water, demonstrating that the intensity of molecular random motion is related to
temperature.
D: In the fourth diagram, after removing the glass plate, the gases in the two bottles
gradually mix evenly, which is a diffusion phenomenon.
**_Answer_**
ACD
**_Question_**
Some students in a group conducted an experiment to "compare the
heat absorption capacity of different substances" using the setup shown
in the figure.
(1) When designing the experimental plan, the following controlled
variables need to be determined. Which one do you think is
unnecessary?
A. Use the exact same heating method
B. Use the same amount of alcohol in the alcohol lamps
C. Use the same mass of water and another liquid
D. Use the same container to hold water and another liquid
(2) When heated to a certain point, the water starts boiling. The
temperature at this moment is shown in Figure C. Therefore, the
boiling point of water is ℃, which indicates that the atmospheric
pressure during the experiment is (choose "greater than", "less than", or
"equal to") one standard atmosphere.
(3) The other liquid does not boil at the corresponding moment, but the
reading of the thermometer is much higher than the water temperature.
Based on this phenomenon, the preliminary conclusion of this
experiment is: the heat absorption capacity of different substances is
(choose "the same" or "different").
**_Answer_**
(1) B (2) 97, less than (3) different
Figure 20: Cases of thermodynamics in physics part of VisScience.
Figure 21: Cases of chemical experiment in chemistry part of VisScience.

Figure 22: Cases of chemical reaction in chemistry part of VisScience.

Figure 23: Cases of inorganic chemistry in chemistry part of VisScience.

Figure 24: Cases of organic chemistry in chemistry part of VisScience.

Figure 25: Cases of electrochemistry in chemistry part of VisScience.

Figure 26: Cases of substance composition in chemistry part of VisScience.

Figure 27: Cases of chemical equilibrium in chemistry part of VisScience.
**_Question_** **_Plane Geometry_**
In square ABCD, point P is a moving point on AB (not coinciding with A or B). Diagonals AC and BD
intersect at point O. Perpendiculars are drawn from point P to AC and BD, intersecting AC and BD at
points E and F respectively, and intersecting AD and BC at points M and N respectively. The following
conclusions are given:
① △APE≌△AME;
② PM + PN = AC;
③ PE² + PF² = PO²;
④ △POF∽△BNF;
⑤ When △PMN∽△AMP, point P is the midpoint of AB.
How many of these conclusions are correct?
A. 2
B. 3
C. 4
D. 5
**_Ground Truth:D_**
**_GPT-4o_**
Considering the given figure and information, let's analyze the correctness of each conclusion:
1. △APE ≌ △AME:
- This is correct because △APE and △AME are right triangles sharing the hypotenuse ( AE ) and angles at ( A ). Hence, they are congruent by the
Hypotenuse-Leg (HL) congruence theorem.
2. PM + PN = AC:
- This is difficult to confirm without more information about their exact lengths. So, we will check integer values later.
3. PE² + PF² = PO²:
- This theorem is derived from the fact that the two diagonals of a square intersect at right angles and create the right-angle triangle with hypotenuse
( PO ). Hence, PE² + PF² = PO² holds as per the Pythagorean theorem.
4. △POF ∽ △BNF:
- This holds true because these triangles are similar by AA similarity, sharing angle POF and having corresponding angles at F and N.
5. When △PMN ∽ △AMP, point P is the midpoint of AB:
- This is correct because for the triangles PMN and AMP to be similar, the ratios of corresponding sides must be equal, which will be true when P is
the midpoint of AB.
Therefore, the correct conclusions are ①, ③, ④, and ⑤; thus, there are 4 correct conclusions.
Answer:
C. 4
**_Question_** **_Functions and Equations_**
As shown in the diagram, in the Cartesian coordinate system, a moving point
starts from the origin O and continuously moves in the directions up, right,
down, right. Each time it moves one unit, it reaches points A₁ (0, 1), A₂ (1, 1),
A₃ (1, 0), A₄ (2, 0), … Therefore, the coordinates of point A₁₇ are ( )
A. (8,0)
B. (8,1)
C. (9,0)
D. (9,1)
**_Ground Truth:D_**
**_GPT-4o_**
Let's analyze the movement pattern of the point:
1. A₁ (0, 1)
2. A₂ (1, 1)
3. A₃ (1, 0)
4. A₄ (2, 0)
Based on these four steps, it repeats as a cycle:
- Moving up: y increases by 1.
- Moving right: x increases by 1.
- Moving down: y decreases by 1.
- Moving right: x increases by 1.
So every 4 moves, the point completes one cycle. Each cycle results in an increase of the x-coordinate by 2 and leaves the y-coordinate either at 0 or 1
depending on the position within the cycle.
Since we need to find A₁₇, we first determine which part of the cycle the 17th move falls into:
- Cycle length: 4 moves
- Number of completed cycles in 16 moves: floor(16 / 4) = 4
- Residual moves after 4 cycles: 17 - 16 = 1
Each complete cycle increases x by 2:
- After 4 complete cycles: 4 * 2 = 8
The 17th move is the first move in the 5th cycle:
- The point moves up from (8, 0) to (8, 1)
Thus, the coordinates of A₁₇ are (8, 1).
The correct answer is:
B. (8, 1)
Figure 28: A question, the standard answer, and GPT-4o’s correct response from the mathematical
part in VisScience.
**_Question_** **_Algebraic Operations_**
Observe the following figure, it shows connecting the midpoints of the
three sides of a triangle to form 4 smaller triangles, then removing the
central small triangle (as shown in Figure 1). Repeat this process for the
remaining triangles ..., and continue this method (as shown in Figures 2,
3, ...). The number of triangles removed in Figure 6 is ( )
A. 121
B. 362
C. 364
D. 729
**_Ground Truth:C_**
**_GPT-4o_**
To determine the number of triangles removed in Figure 6, we need to analyze the pattern of the triangles removed in each iteration.
1. Figure 1:
- We start with one triangle and remove the central triangle: 1 triangle is removed.
2. Figure 2:
- Each of the remaining 3 smaller triangles has a central triangle removed: 3 more triangles are removed.
3. Figure 3:
- Each of the remaining 9 smaller triangles has a central triangle removed: 9 more triangles are removed.
We can see the pattern forming:
- Figure 1: 3⁰ = 1 triangle is removed.
- Figure 2: 3¹ = 3 triangles are removed.
- Figure 3: 3² = 9 triangles are removed.
For any Figure n:
- The number of triangles removed in each iteration after the first is 3ⁿ⁻¹.
So for Figure 6:
- The total number of triangles removed is 3⁰ + 3¹ + 3² + 3³ + 3⁴ + 3⁵.
This is a geometric series where the sum is:
S = ∑(k=0 to 5) 3ᵏ
We know the formula for the sum of a geometric series Sₙ = a(rⁿ - 1) / (r - 1), where a = 1, r = 3, and n = 6:
S₆ = (3⁶ - 1) / (3 - 1) = (729 - 1) / 2 = 728 / 2 = 364
Thus, the number of triangles removed in Figure 6 is:
C. 364
**_Question_** **_Functions and Equations_**
A mall is running a promotion on a certain product. The cost price of each unit
is 3 yuan, and the selling price is 8 yuan. Starting from the 20th unit sold each
day, the price is halved. The mall has recorded the sales of this product over the
past 10 days, as shown in the figure. Let x be the daily sales volume, and y be
the daily profit from the sale of this product. Among the days where the daily
profit is no less than 96 yuan, if we randomly select 2 days, the probability that
the daily profit on both selected days is exactly 97 yuan is
A. 1/9
B. 1/10
C. 1/5
D. 1/8
**_Ground Truth:B_**
**_GPT-4o_**
To solve this problem, we'll first calculate the profit based on the sales volume and then determine the probability of selecting specific days given the
conditions.
Step-by-Step Analysis
1. Understanding the Sales and Pricing
- Cost Price per unit: 3 yuan
- Selling Price per unit: 8 yuan
- Starting from the 20th unit: Price is halved, thus selling price = 4 yuan.
2. Profit Calculation for Different Sales Volumes:
Let x be the daily sales volume. The profit y can be calculated as follows:
- For x ≤ 19:
y = (8 - 3) × x = 5x
- For x ≥ 20:
- Revenue from first 19 units: 19 × (8 - 3) = 95 yuan
- Revenue from units beyond 19:
if x = 20: (8 - 3)×19 + (4 - 3)×(20 - 19) = 95 + 1 = 96 yuan
if x = 21: 95 + (4 - 3)×(21 - 19) = 95 + 2 = 97 yuan
So the profit formulas are:
- y = 5x for x ≤ 19
- y = 95 + (x - 19) for x ≥ 20
3. Determine the Qualifying Days for Given Profit
Based on the given sales distribution in the graph:
- x = 18: Profit = 5 × 18 = 90 yuan
- x = 19: Profit = 5 × 19 = 95 yuan
- x = 20: Profit = 96 yuan
- x = 21: Profit = 97 yuan
4. Frequency of Each Sales Volume:
From the graph:
- 18 units sold: 1 day (profit = 90 yuan)
- 19 units sold: 4 days (profit = 95 yuan)
- 20 units sold: 3 days (profit = 96 yuan)
- 21 units sold: 2 days (profit = 97 yuan)
5. Determine Days with Profit No Less Than 96 Yuan:
- Profits of 96 yuan: 3 days (for sales = 20 units)
- Profits of 97 yuan: 2 days (for sales = 21 units)
Total number of qualifying days (profit ≥ 96 yuan): 3 + 2 = 5 days
6. Probability Calculation
We need the probability that on two randomly selected days, both have exactly 97 yuan profit.
Number of ways to choose 2 days out of these 5:
C(5, 2) = 5 × 4 / 2 × 1 = 10
Number of ways to choose the 2 days where both have profit of 97 yuan:
C(2, 2) = 1
Thus, the probability is:
1 / 10
Figure 29: A question, the standard answer, and GPT-4o’s correct response from the mathematical
part in VisScience.
**_Question_** **_Algebraic Operations_**
As shown in the figure, in the regular triangular prism ABC-A₁B₁C₁, point M is a moving point
on the lateral edge AA₁. It is known that the maximum area of triangle BCM is 2√3, and the
maximum value of the dihedral angle M-BC-A is π/3. Then the volume of the triangular prism
is ( )
A. 3√3
B. 2√3
C. √3
D. 3√2
**_Ground Truth:A_**
**_GPT-4o_**
Let the side length of the equilateral triangle ABC be a, and the height of the prism (length of AA₁) be h.
First, consider the maximum area of triangle BCM. Note that triangle ABC is an equilateral triangle with area A_{ABC} = (√3/4) * a^2. To determine
the point M when the area of triangle BCM is maximized, M should be located at A₁ or A because these are the endpoints of AA₁ (as M moves).
Choosing M = A₁ maximizes the sub-triangle’s base height product. The area of triangle BCM at M = A₁ is:
A_{BCM} = (1/2) * BC * A₁B * sin(θ)
Given that the maximum area of triangle BCM is 2√3, and BC = a in an isosceles triangle, we find the height as:
(1/2) * a * h = 2√3
Thus:
a * h = 4√3 which implies h = 4√3 / a
Next, the maximum dihedral angle M-BC-A is π/3 when M is midway between A and A₁, i.e., the face BCM forms an equal inclination with base BC.
Given a and height h, we find the volume V of the triangular prism ABC-A₁B₁C₁.
V = A_{ABC} * h = (√3/4) * a^2 * (4√3 / a) = √3 * a^2 * (√3 / a) = 3√3
Therefore, the volume of the triangular prism is 3√3
**_Question_** **_Functions and Equations_**
As shown in the figure, there are 8 cards, each labeled with a natural number
from 1 to 8. Select 3 cards such that the sum of the numbers on these 3 cards is
9. How many different ways are there to select these cards (order does not
matter)?
**_Ground Truth:3_**
**_GPT-4o_**
To solve the problem, we need to find all sets of three different numbers from 1 to 8 whose sum is 9. Let's list all possible combinations:
1. Starting with the number 1:
- 1 + 2 + 6 = 9
- 1 + 3 + 5 = 9
2. Starting with the number 2:
- 2 + 3 + 4 = 9
3. Starting with the number 3:
- Combinations involving 3 and numbers greater than or equal to 4 will exceed a sum of 9.
4. Numbers greater than 3:
- Combinations of 4 or greater with remaining available numbers will not achieve a sum of 9 without exceeding it.
Thus, the possible combinations that satisfy the condition are:
- 1, 2, 6
- 1, 3, 5
- 2, 3, 4
Each combination involves 3 unique cards and the order does not matter.
Hence, there are 3 different ways to select these cards such that their sum equals 9.
Figure 30: A question, the standard answer, and GPT-4o’s correct response from the mathematical
part in VisScience.
**_Question_**
**_Electromagnetism_**
As shown in the figure, a positively charged particle enters a uniform magnetic field horizontally to the right, neglecting
gravity. Regarding the motion of this charged particle after it enters the magnetic field, the following judgments are
correct: ( )
A: The particle deflects upwards
B: The particle deflects downwards
C: The particle does not deflect
D: The particle quickly stops moving
**_Ground Truth:A_**
**_GPT-4o_**
To determine the motion of a positively charged particle entering a uniform magnetic field, we use the right-hand rule. The magnetic field (B) is represented by crosses, indicating
that it is directed into the page. The velocity (v) of the particle is to the right, as indicated by the arrow.
The right-hand rule for the force on a positively charged particle states:
1. Point your fingers in the direction of the velocity (v), to the right.
2. Curl your fingers in the direction of the magnetic field (B), into the page.
3. Your thumb then points in the direction of the force (F).
If you follow these steps, your thumb will point upwards. This indicates that the positively charged particle will experience a force that deflects it upwards.
Therefore, the correct judgment is:
A: The particle deflects upwards
**_Question_**
**_Mechanics_**
As shown in the figure, the stiffness of the light spring is k, the ball's weight is G, and the ball is at position A in
equilibrium. Now, applying a force F to compress the ball to position B, which shortens the spring by x, the spring
force at this time is ( )
A: kx
B: kx+G
C: G-kx
D: None of the above
**_Ground Truth:B_**
**_GPT-4o_**
To determine the spring force at the compressed position B, we need to consider both the equilibrium condition and the additional compression caused by the force F.
1. Equilibrium at Position A:
- At position A, the spring force is balancing the weight of the ball.
- The spring force k * x_A = G (where x_A is the initial compression due to the weight G).
2. Additional Compression to Position B:
- Let's denote the additional compression as x_B = x (given in the problem).
- The total compression at position B is x_A + x.
3. Spring Force at Position B:
- The additional spring force due to the additional compression x is k * x.
- Therefore, the total spring force F_spring at position B is the initial spring force plus the additional spring force:
F_spring = k * (x_A + x)
- From equilibrium at position A, k * x_A = G, so:
F_spring = k * x_A + k * x = G + k * x
So, the correct answer is:
B: kx + G
Figure 31: A question, the standard answer, and GPT-4o’s correct response from the physics part in
VisScience.
**_Question_**
**_Thermodynamics_**
As shown in the figure, an ideal gas of a certain mass starts from state a, passes through an isochoric process ab to reach state b,
then goes through an isothermal process bc to reach state c, and finally returns to state a through an isobaric process ca. The correct
statements are _______
A. The internal energy of the gas increases during process ab.
B. Work is done on the gas by the surroundings during process ca.
C. The gas does work on the surroundings during process ab.
D. The gas absorbs heat from the surroundings during process bc.
E. The gas absorbs heat from the surroundings during process ca.
**_Ground Truth:ABD_**
**_GPT-4o_**
To analyze the behavior of the ideal gas as it undergoes the cyclic process represented in the diagram, let's consider each process individually:
1. Isochoric process ab:
- In an isochoric process, volume remains constant.
- Since volume doesn’t change, no work is done (W = 0).
- The pressure increases from a to b. According to the first law of thermodynamics (ΔU = Q - W), with W = 0, the internal energy change (ΔU) is determined by the heat added to
the system (Q). For an ideal gas, ΔU is directly proportional to the change in temperature. Therefore, the internal energy increases as the gas is heated (as T_b > T_a).
2. Isothermal process bc:
- In an isothermal process, temperature remains constant.
- For an ideal gas, in an isothermal expansion, work is done by the gas as it expands. According to the first law of thermodynamics, since ΔU = 0 for an isothermal process, the heat
absorbed (Q) is equal to the work done (W). Therefore, the gas absorbs heat from the surroundings.
3. Isobaric process ca:
- In an isobaric process, pressure remains constant.
- The volume decreases from V_c to V_a, which means work is done on the gas (work done on the gas is positive). Since the volume decreases, the gas releases heat to the
surroundings.
Given these observations, the correct statements are:
A. The internal energy of the gas increases during process ab. (True, as discussed in the isochoric process analysis)
B. Work is done on the gas by the surroundings during process ca. (True, the gas is being compressed)
D. The gas absorbs heat from the surroundings during process bc. (True, as heat absorbed equals work done during isothermal expansion)
So the correct answers are:
- A, B, D
**_Question_**
**_Optics_**
As shown in Figure 6, it is a schematic diagram for studying the double-slit interference of light. There are
two slits S1 and S2 on the screen. When the two waves emitted from S1 and S2 reach the screen, they
create interference fringes. The wavelength of the incident laser is known as λ. The distance from point P
on the screen to slits S1 and S2 is equal. If the bright fringe at point P is labeled as the 0th bright fringe,
counting upward from P, the bright fringe adjacent to the 0th fringe is the 1st bright fringe, and the one
adjacent to the 1st bright fringe is the 2nd bright fringe. At point P1, the bright fringe is exactly the 10th
bright fringe. Let the length of the line S1P1 be δ1 and the length of the line S2P1 be δ2, then δ2 - δ1
equals ( )
A. 9λ
B. 10λ
C. 11λ
D. 18λ
**_Ground Truth:B_**
**_GPT-4o_**
In double-slit interference, the difference in path lengths between the two waves arriving at a point of constructive interference is given by nλ, where n is an integer (the order of
the bright fringe) and λ is the wavelength of the incident light.
Given:
- Point P corresponds to the 0th bright fringe (where the path difference δ2 - δ1 = 0).
- Point P1 corresponds to the 10th bright fringe.
At P1, the path difference δ2 - δ1 = nλ, where n=10.
Therefore,
δ2 - δ1 = 10λ
Thus, the correct answer is:
δ2 - δ1 equals 10λ
Choice B. 10λ.
Figure 32: A question, the standard answer, and GPT-4o’s correct response from the physics part in
VisScience.
**_Question_**
**_Modern Physics_**
The energy levels of a hydrogen atom are shown in the diagram. During the process where a group of hydrogen atoms in the
n=4 energy level return to the n=1 state ( ),
A: 3 different frequencies of photons are emitted
B: 6 different frequencies of photons are emitted
C: The maximum energy of the emitted photons is 12.75 eV, and the minimum energy is 0.66 eV
D: The emitted light can cause the photoelectric effect in metals with a work function of 13.0 eV
**_Ground Truth:BC_**
**_GPT-4o_**
To determine the correct answer, we need to analyze the energy transition levels and the resulting photon emissions when a hydrogen atom transitions from the n=4 state to the
n=1 state.
Step 1: Calculate Possible Transitions
The possible transitions from n=4 down to n=1 are:
1. n=4 → n=3
2. n=4 → n=2
3. n=4 → n=1
4. n=3 → n=2
5. n=3 → n=1
6. n=2 → n=1
This gives us a total of 6 different photon frequencies.
Step 2: Calculate Energy Differences
The energy of each energy level is given by the formula E_n = -13.6 eV / n^2:
1. Transition n=4 to n=1:
ΔE = E_1 - E_4 = -13.6 eV - (-0.85 eV) = 12.75 eV
2. Transition n=3 to n=1:
ΔE = E_1 - E_3 = -13.6 eV - (-1.51 eV) = 12.09 eV
3. Transition n=2 to n=1:
ΔE = E_1 - E_2 = -13.6 eV - (-3.40 eV) = 10.20 eV
4. Transition n=4 to n=3:
ΔE = E_3 - E_4 = -1.51 eV - (-0.85 eV) = 0.66 eV
5. Transition n=4 to n=2:
ΔE = E_2 - E_4 = -3.40 eV - (-0.85 eV) = 2.55 eV
6. Transition n=3 to n=2:
ΔE = E_2 - E_3 = -3.40 eV - (-1.51 eV) = 1.89 eV
Step 3: Analyze Photon Emissions
A: 3 different frequencies of photons are emitted.
- Incorrect. We found 6 different transitions, each with a unique frequency.
B: 6 different frequencies of photons are emitted.
- Correct. We calculated 6 different energy transitions.
C: The maximum energy of the emitted photons is 12.75 eV, and the minimum energy is 0.66 eV.
- Correct. The maximum energy transition is from n=4 to n=1 (12.75 eV), and the minimum energy transition is from n=4 to n=3 (0.66 eV).
D: The emitted light can cause the photoelectric effect in metals with a work function of 13.0 eV.
- Incorrect. The highest energy photon is 12.75 eV, which is less than the work function of 13.0 eV, and thus insufficient to cause the photoelectric effect.
Therefore, the correct statements are:
B: 6 different frequencies of photons are emitted.
C: The maximum energy of the emitted photons is 12.75 eV, and the minimum energy is 0.66 eV.
**_Question_**
**_Mechanical Motion_**
The x–t graph of object A and the v–t graph of object B are shown in the figure. The motion situations of
these two objects are ( )
A: The direction of object A's motion remains unchanged throughout the 6s, and its total displacement is 4m.
B: Object A moves back and forth during the entire 6s, and its total displacement is zero.
C: Object B moves back and forth during the entire 6s, and its total displacement is zero.
D: The direction of object B's motion remains unchanged throughout the 6s, and its total displacement is 4m.
**_Ground Truth:AC_**
**_GPT-4o_**
Given the graphs:
For object A:
- The x-t graph shows that the position of object A changes linearly from -2 m to 2 m over 6 s.
- This means object A's displacement is 4 m over the 6 seconds.
- Since the graph is a straight line without reversing direction, the direction of motion remains unchanged.
For object B:
- The v-t graph shows that the velocity of object B changes linearly from -2 m/s to 2 m/s over 6 s.
- This implies that object B starts moving in the negative direction, then slows down to a stop at 3 seconds, and then accelerates in the positive direction until reaching 2 m/s at 6
seconds.
- Considering the symmetry of the v-t graph and that the average velocity over 6 seconds is 0, object B moves back and forth and its total displacement is zero.
Based on these analyses, the correct choices are:
- A: The direction of object A's motion remains unchanged throughout the 6 s, and its total displacement is 4 m.
- C: Object B moves back and forth during the entire 6 s, and its total displacement is zero.
Figure 33: A question, the standard answer, and GPT-4o’s correct response from the physics part in
VisScience.
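The level-counting and photon-energy arithmetic in the hydrogen energy-level response above is easy to verify mechanically. The short Python sketch below (illustrative only, not part of the benchmark) enumerates the C(4,2) = 6 downward transitions among the n = 1 to 4 levels and evaluates their energies from E_n = -13.6 eV / n², reproducing the 12.75 eV maximum and 0.66 eV minimum quoted in the response.

```python
from itertools import combinations

def level_energy(n: int) -> float:
    """Hydrogen energy level in eV, E_n = -13.6 / n^2."""
    return -13.6 / n**2

levels = [1, 2, 3, 4]
# Every pair (upper, lower) with n_upper > n_lower emits one distinct photon line.
photon_energies = {
    (hi, lo): level_energy(hi) - level_energy(lo)
    for lo, hi in combinations(levels, 2)
}

print(f"distinct frequencies: {len(photon_energies)}")            # 6
for (hi, lo), e in sorted(photon_energies.items(), key=lambda kv: kv[1]):
    print(f"n={hi} -> n={lo}: {e:.2f} eV")
# max ~12.75 eV (4 -> 1), min ~0.66 eV (4 -> 3); both below a 13.0 eV work function
```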
-----
**_Question_**
**_Vibration and Waves_**
A simple harmonic transverse wave is propagating along the x-axis. The waveform at a certain
moment is shown in the figure, where a, b, and c are three particles, and a is moving upward. From
this, it can be inferred that ( )
A: The wave is propagating in the positive direction of the x-axis
B: c is moving upward
C: After this moment, b will reach the equilibrium position before c
D: After this moment, b will reach the farthest position from the equilibrium point before c
**_Ground Truth:AC_**
**_GPT-4o_**
Given the waveform and the movement of the particles, let's analyze the options:
- Option A: The wave is propagating in the positive direction of the x-axis.
- Since particle a is moving upward, this implies that the wave is traveling towards the right (positive x-axis), because for a wave moving in the positive x-direction, the points
on the wave ahead of a (like b and c) will be going downward if they are past the crest and moving towards equilibrium.
- This is correct.
- Option B: c is moving upward.
- Considering the waveform and its propagation direction, particle c is near the lowest point, and as the wave propagates to the right, c would be ascending.
- This is correct.
- Option C: After this moment, b will reach the equilibrium position before c.
- Particle b is currently below the equilibrium line and moving upward, meaning it will pass the equilibrium line before c, which is still moving upward towards the equilibrium
position.
- This is correct.
- Option D: After this moment, b will reach the farthest position from the equilibrium point before c.
- Particle b is below the equilibrium position but on its way up to equilibrium; it won't reach the farthest position next. Particle c is at a trough and moving upwards, thus it has
just recently been at the farthest position.
- This is incorrect.
Therefore, the correct answers are A, B, and C.
**_Question_**
**_Comprehensive Experiments and Methods_**
Use the circuit diagram shown in Figure A to study the volt-ampere characteristics
of bulb L (2.4V, 1.0W) and measure the resistance value of the bulb when it is operating normally at its
rated voltage, verifying the accuracy of its label. (1) Before closing switch S, the slider on the variable
resistor should be placed at the ▲ end. (Select "a" or "b") (2) The main reason for the curve in the U—I
graph made after the experiment, as shown in Figure B, is: ▲.
**_Ground Truth:_**
**(1) a**
**(2) When the temperature rises, the resistance of the filament increases.**
**_GPT-4o_**
(1) Before closing switch S, the slider on the variable resistor should be placed at the a end.
(2) The main reason for the curve in the U—I graph made after the experiment, as shown in Figure B, is: as the voltage increases, the filament's temperature increases, causing its
resistance to increase, which leads to a non-linear relationship between voltage (U) and current (I). Thus, the curve shows the non-ohmic behavior of the filament bulb as it heats
up.
Figure 34: A question, the standard answer, and GPT-4o’s correct response from the physics part in
VisScience.
-----
**_Question_** **_Chemical Experiment_**
You are provided with pure zinc, pure copper sheets, 500mL of 0.2 mol·L⁻¹ H₂SO₄ solution, wires, and a 1000mL measuring cylinder. Using the device shown in the figure, try to determine the amount of electrons passing through the wire when zinc reacts with dilute sulfuric acid over a certain period of time. (1) As shown in the figure, the device is well-sealed, and the 1000mL measuring cylinder is filled with water. At the beginning of the
experiment, the first thing to do is ______________________________________. (2) The
material of electrode a is __________, and its electrode reaction equation is
______________________. The material of electrode b is __________, and its electrode
reaction equation is __________________. (3) When 672mL of gas is collected in the
measuring cylinder (converted to standard conditions), the amount of electrons passing through
the wire is __________.
**_Ground Truth:_**
**_(1) Connect the electrodes a and b with a wire._**
**_(2) Pure zinc sheet: Zn - 2e⁻ → Zn²⁺; Pure copper sheet: 2H⁺ + 2e⁻ → H₂↑_**
**_(3) 0.06 mol_**
**_GPT-4o_**
(1) At the beginning of the experiment, the first thing to do is connect the zinc and copper electrodes with the wire to close the circuit.
(2)
- The material of electrode a is zinc, and its electrode reaction equation is Zn → Zn²⁺ + 2e⁻ (oxidation reaction).
- The material of electrode b is copper, and its electrode reaction equation is H⁺ + 2e⁻ → H₂ (reduction reaction).
(3) When 672mL of gas is collected in the measuring cylinder (converted to standard conditions), the amount of electrons passing through the wire is
0.06 mol.
Calculation:
- The gas collected in the measuring cylinder is hydrogen gas (H₂).
- At standard conditions (STP, ideal gas conditions), 1 mole of any gas occupies 22.4 L (22,400 mL).
Number of moles of H₂ = 672 mL / 22,400 mL/mol = 0.03 mol
- According to the reaction at copper electrode (2H⁺ + 2e⁻ → H₂):
- 1 mole of H₂ is produced by 2 moles of electrons (e⁻).
Moles of electrons = 0.03 mol H₂ × 2 mol e⁻ / mol H₂ = 0.06 mol e⁻
**_Question_** **_Chemical Equilibrium_**
As shown in the figure, the partition K can move left and right. 2 mol of A and
1 mol of B are added to container A, and 2 mol of C and 1 mol of He are added
to container B. At this time, K stops at position 0. The reaction occurs: 2A(g) +
B(g) ⇌ 2C(g), after reaching equilibrium and returning to temperature. The
following statements are incorrect ( )
A. After reaching equilibrium, the partition K finally stops between the left
scale 0 and 2.
B. If K stops at position 1 on the left at equilibrium, then the piston stops at
position 6 on the right.
C. When equilibrium is reached, the amount of substance B in container A is
less than the amount of substance B in container B.
D. Whether the partition K slides or not can determine whether the reactions on
both sides have reached equilibrium.
**_Ground Truth:B_**
**_GPT-4o_**
Let's analyze the given statements based on the given chemical reaction and the moving partition:
1. Statement A: After reaching equilibrium, the partition K finally stops between the left scale 0 and 2.
2. Statement B: If K stops at position 1 on the left at equilibrium, then the piston stops at position 6 on the right.
3. Statement C: When equilibrium is reached, the amount of substance B in container A is less than the amount of substance B in container B.
4. Statement D: Whether the partition K slides or not can determine whether the reactions on both sides have reached equilibrium.
To determine which statements are incorrect, we need to analyze the behavior of the system at equilibrium.
- Statement A: The position of the partition K is determined by the pressures of the gases in each container. As the reaction proceeds toward equilibrium,
the number of moles of gases changes, affecting the pressures. Given that PV = nRT, if the pressure equalizes and considering the initial setup, K may
indeed stop between the left scale 0 and 2, as the reaction may shift the equilibrium position closer to balancing the pressures more on the right side with
more moles of gas moving to the right-hand side of the equation (more moles of C produced).
- Statement B: This statement seems to misinterpret the resulting pressures that would balance the partition. If K stops at position 1 on the left, it would
be incorrect to state a specific stopping point on the right without more detailed information about the volume and pressure relationship between the
partitions Z and the rightmost compartment. Thus, this statement is likely to be incorrect due to its specificity without supporting information.
- Statement C: As the reaction proceeds, A and B are consumed to produce C. At equilibrium, B will be partially consumed. However, because B does
not exclusively move between the containers, it isn't guaranteed that more B will be in container B (right) compared to container A. Therefore, this
statement is not necessarily accurate.
- Statement D: If the partition slides, it generally indicates that there is a pressure differential due to reaction changes or conditions not yet reaching
equilibrium. Once equilibrium is reached, the partition would stop moving. Hence, this statement holds true.
Thus, the incorrect statements are:
1. Statement B is incorrect: It wrongly assumes a direct specific correlation between stopping points that cannot be inferred without more data.
2. Statement C is incorrect: It makes an unqualified assumption about the relative amounts of B, which cannot be universally inferred without specific
context.
So, the correct answer is B and C. But since only one option should be selected based on the given instruction (choose the incorrect statement), the
appropriate choice is B.
Figure 35: A question, the standard answer, and GPT-4o’s correct response from the chemistry part
in VisScience.
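The gas-volume stoichiometry in the first question of the figure above reduces to a two-line calculation. The sketch below (illustrative only) uses the values stated in the question and the reaction 2H⁺ + 2e⁻ → H₂ to recover the 0.06 mol of electrons in the ground truth.

```python
# Minimal check of the stoichiometry above (values taken from the question statement).
V_gas_mL = 672          # H2 collected, converted to STP
V_molar_mL = 22_400     # molar volume of an ideal gas at STP, in mL/mol

n_H2 = V_gas_mL / V_molar_mL        # 0.03 mol
n_electrons = 2 * n_H2              # 2 e- transferred per H2 molecule
print(f"n(H2) = {n_H2:.3f} mol, n(e-) = {n_electrons:.3f} mol")   # 0.030, 0.060
```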
-----
**_Question_** **_Electrochemistry_**
Using inert electrodes to electrolyze a certain amount of copper sulfate solution,
the experimental setup is shown in Figure 1. The experimental data during the
electrolysis process is shown in Figure 2. The horizontal axis represents the
amount of substance of electrons transferred during electrolysis, and the vertical
axis represents the total volume of gas produced during electrolysis (standard
conditions). The following statements are correct:
A: During electrolysis, red substance first appears on the surface of electrode b,
followed by the production of bubbles.
B: The reaction equations at electrode a are: 2H⁺+2e⁻→H₂↑ and
4OH⁻−4e⁻→2H₂O+O₂↑.
C: The segment 0~P of the curve represents the volume change of the mixed
gases H₂ and O₂, and the segment P~Q represents the volume change of O₂.
D: The average molar mass of the mixed gas collected from the beginning to
point Q is 17 g·mol⁻¹.
**_Ground Truth:D_**
**_GPT-4o_**
Let's analyze each of the statements based on the provided figures and the general principles of electrolysis of copper sulfate solution using inert
electrodes:
Figure Analysis:
1. Experimental Setup (Figure 1):
- Electrode a is connected to the power source as the anode (+).
- Electrode b is the cathode (-).
2. Graph (Figure 2): Shows the relationship between the volume of gas produced and the amount of substance of electrons (n(e⁻)).
- Between 0 and P, there's a linear increase in gas volume.
- From P to Q, the increase continues but the gas volume change behavior might differ.
Electrochemical Reactions:
At the Cathode (Reduction at Electrode b):
- Copper (II) ions gain electrons to deposit as copper: Cu²⁺ + 2e⁻ → Cu(s)
- Only after Cu²⁺ is completely reduced, H⁺ can be reduced to hydrogen gas: 2H⁺ + 2e⁻ → H₂(g)
- At the Anode (Oxidation at Electrode a):
- Water molecules get oxidized (since sulfate ions are not easily oxidized) producing oxygen gas:
2H₂O → O₂(g) + 4H⁺ + 4e⁻
- Hydroxide ions can also oxidize to oxygen gas and water:
4OH⁻ → 2H₂O + O₂(g) + 4e⁻
Statement Analysis:
1. Statement A:
- The cathode b will first accumulate a red substance (copper metal) before any other processes.
- Bubbles (hydrogen gas) will only form after substantial copper ion reduction.
Incorrect: This statement is not accurate.
2. Statement B:
- At the anode a, water oxidation occurs as described:
2H₂O → O₂(g) + 4H⁺ + 4e⁻
- Hydroxide ions can also undergo oxidation producing oxygen and water:
4OH⁻ → 2H₂O + O₂(g) + 4e⁻
Incorrect: The assessment that 2H⁺ + 2e⁻ → H₂ happens at the anode is incorrect (water splitting mainly occurs forming O₂).
3. Statement C:
- The graph's initial segment (0 to P) can be attributed to the combined volume changes of H₂ (if produced in minor quantities) and O₂ gases.
- From P to Q likely shows predominant O₂ evolution post copper reduction.
Incorrect: This interpretation is wrong given the usual steps in such electrolytic processes.
4. Statement D:
- Calculating average molar mass:
- Up to initial phase: n(e⁻) = 0.2 mol, total mixed gas volume could involve hydrogen and oxygen.
mol of H₂ = 0.2 / 2 = 0.1 mol
For O₂, up till P:
H₂ / O₂ ≈ 1:1
Thus combined molar mass computed from H₂ = 2 g/mol and O₂ = 32 g/mol yields 17 g/mol.
Correct: This matches the derived amounts from mixed gas estimations.
Conclusion:
- Correct Statements: D
Figure 36: A question, the standard answer, and GPT-4o’s correct response from the chemistry part
in VisScience.
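Statement D in the figure above turns on the average molar mass of the collected gas mixture. As a hedged illustration, the sketch below computes the mole-weighted molar mass for an assumed 1:1 H₂:O₂ ratio, which is the ratio that yields the quoted 17 g·mol⁻¹; the actual ratio depends on the electrolysis data shown in the original plot.

```python
def mean_molar_mass(moles):
    """Mole-weighted average molar mass of a gas mixture, in g/mol."""
    M = {"H2": 2.0, "O2": 32.0}
    total = sum(moles.values())
    return sum(M[gas] * n for gas, n in moles.items()) / total

# A 1:1 mole ratio of H2 to O2 reproduces the 17 g/mol quoted in statement D.
print(mean_molar_mass({"H2": 0.1, "O2": 0.1}))    # 17.0
# For comparison, the 2:1 ratio of pure-water electrolysis would give 12 g/mol.
print(mean_molar_mass({"H2": 0.2, "O2": 0.1}))    # 12.0
```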
-----
**_Question_** **_Organic Chemistry_**
The mascot for the 2008 Olympics, Fuwa, has an outer material made of pure
wool and is filled with non-toxic polyester fiber (as shown in the figure).
Which of the following statements is correct? ( )
A. The chemical composition of wool and polyester fiber is the same.
B. Polyester fiber and wool can both hydrolyze under certain conditions.
C. The monomers of this polyester fiber are terephthalic acid and ethanol.
D. Polyester fiber is a pure substance.
**_Ground Truth:B_**
**_GPT-4o_**
The correct statement is:
B. Polyester fiber and wool can both hydrolyze under certain conditions.
Explanation:
- A. The chemical composition of wool and polyester fiber is not the same. Wool is a natural protein-based fiber composed primarily of keratin, while
polyester is a synthetic polymer primarily composed of ester functional groups.
- B. Both wool (a protein) and polyester (a polymer) can undergo hydrolysis, although the conditions required for hydrolysis may differ. Wool typically
hydrolyzes in the presence of enzymes or under acidic/alkaline conditions, whereas polyester can hydrolyze in the presence of water, especially under
acidic or high-temperature conditions.
- C. The monomers of this polyester fiber are terephthalic acid and ethylene glycol, not ethanol. The structure shown corresponds to polyethylene
terephthalate (PETE or PET).
- D. Polyester fiber is not a pure substance; it is a polymer, which means it consists of long chains of repeating units (monomers). Pure substances have a
uniform and definite composition, while polymers are made from many identical repeating units and can have varying molecular weights.
Hence, statement B is correct.
**_Question_** **_Chemical Equilibrium_**
Place solid barium hydroxide [Ba(OH)2·8H2O] and solid ammonium chloride into a flat-bottomed flask
and seal it tightly with a stopper. Drop a small amount of water between the bottom of the flask and the
wooden board, as shown in the figure. After a while, you will find that the solid substances inside the
flask become less solid and liquid is formed; the flask wall becomes cold, and the wooden board gets
glued to the flask due to the water freezing and forming ice. When you open the stopper, the emitted gas
smells like ammonia. This indicates that the following reaction has spontaneously occurred:
Ba(OH)2·8H2O(s) + 2NH4Cl(s) === BaCl2(s) + 2NH3(g) + 10H2O(l). The correct conclusion is ( )
A: This reaction is an entropy-decreasing reaction.
B: For this reaction, ΔH>0 and ΔH−TΔS<0
C: Endothermic reactions cannot proceed at room temperature.
D: A reaction that can proceed spontaneously must occur quickly.
**_Ground Truth:B_**
**_GPT-4o_**
The best conclusion from the given options is:
B: For this reaction, ΔH > 0 and ΔH − TΔS < 0
Explanation:
1. The reaction absorbs heat from its surroundings (endothermic process), which makes the flask cold. This indicates that the enthalpy change (ΔH) is
greater than zero (ΔH > 0).
2. The reaction proceeds spontaneously, which means that the Gibbs free energy change (ΔG) is less than zero (ΔG < 0). The Gibbs free energy change
is given by the equation ΔG = ΔH - TΔS. For the reaction to be spontaneous, ΔH must be less than TΔS, or equivalently, ΔH - TΔS < 0.
Therefore, the correct conclusion is option B.
Figure 37: A question, the standard answer, and GPT-4o’s correct response from the chemistry part
in VisScience.
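The spontaneity argument in the last response (ΔH > 0 together with ΔG = ΔH − TΔS < 0 forces TΔS > ΔH) can be illustrated numerically. The ΔH and ΔS values in the sketch below are hypothetical placeholders chosen only to show the sign logic; the question does not supply the actual thermodynamic data.

```python
# Illustrative only: the reaction's real ΔH and ΔS are not given in the question,
# so the numbers below are hypothetical values chosen to demonstrate the sign logic.
T = 298.0            # K, room temperature
dH = +100.0e3        # J/mol, endothermic (hypothetical)
dS = +500.0          # J/(mol*K), entropy gain from releasing gas and liquid (hypothetical)

dG = dH - T * dS
print(f"dG = {dG/1000:.1f} kJ/mol -> spontaneous: {dG < 0}")
# With ΔH > 0, spontaneity (ΔG < 0) requires TΔS > ΔH, i.e. a sufficiently large positive ΔS.
```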
-----
**_Question_** **_Inorganic Chemistry_**
The spatial structure of [Co(NH3)6]3+ is shown in the figure, where the small circles with numbers represent NH3 molecules, and the distances between adjacent NH3 molecules are equal. The Co3+ ion is located at the center of the octahedron. If two of the NH3 molecules are replaced by Cl, the resulting [Co(NH3)4Cl2]- will have ( )
A: 2 types
B: 3 types
C: 4 types
D: 5 types
**_Ground Truth:A_**
**_GPT-4o_**
The given complex ion is [Co(NH3)6]3+, which has an octahedral spatial structure as shown in the figure. The ammonia groups (NH3) are
symmetrically arranged around the cobalt ion in an octahedral fashion.
When two NH3 molecules are replaced by Cl ions, the resulting complex is [Co(NH3)4Cl2]-.
In an octahedral complex, when two ligands are replaced, they can either be in a cis (adjacent) position or in a trans (opposite) position.
Depending on where the Cl- ions can be placed relative to each other, the resulting complex can exhibit different structural isomers:
1. Cis: The two Cl- ions are adjacent to each other.
2. Trans: The two Cl- ions are opposite each other.
Given these possibilities, there are 2 types of isomers formed: cis and trans. Therefore, the correct answer is:
A: 2 types
**_Question_** **_Substance Composition_**
The schematic diagram of the iodine atom structure is shown in Figure 1. The nuclear charge number of the iodine element is ( ); the element corresponding to the atom in Figure 2 that has similar chemical properties to the iodine element is ( ).
**_Ground Truth:53 ; C_**
**_GPT-4o_**
From the schematic diagram in Figure 1, we can see that the nuclear charge number (also known as the atomic number) of iodine is +53.
To determine which element in Figure 2 has similar chemical properties to iodine, we need to identify an element in the same group of the periodic table.
Iodine belongs to the halogens group, which are in Group 17 of the periodic table.
Observing the elements in Figure 2:
- A: Atomic number +7 (Nitrogen)
- B: Atomic number +10 (Neon)
- C: Atomic number +17 (Chlorine)
- D: Atomic number +13 (Aluminium)
Chlorine (element C) with atomic number +17 belongs to the same group (Group 17) as iodine and will have similar chemical properties.
Thus, the element corresponding to the atom in Figure 2 that has similar chemical properties to the iodine element is C.
Figure 38: A question, the standard answer, and GPT-4o’s correct response from the chemistry part
in VisScience.
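The isomer count in the first response above can also be obtained by a small geometric enumeration. The sketch below is a heuristic check rather than a full symmetry-group argument: it classifies pairs of octahedral ligand sites by their separation, which distinguishes cis (adjacent) from trans (opposite) placements and yields exactly two types.

```python
from itertools import combinations
import math

# The six ligand sites of an octahedral complex sit at (+/-1, 0, 0), (0, +/-1, 0), (0, 0, +/-1).
sites = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Two substituted positions are related by the octahedron's symmetry iff they are at the
# same distance: sqrt(2) for adjacent (cis) pairs, 2 for opposite (trans) pairs.
distinct = {round(dist(p, q), 6) for p, q in combinations(sites, 2)}
print(sorted(distinct))     # [1.414214, 2.0] -> exactly 2 types of substituted isomers
```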
-----
**_Question_**
Given a tetrahedron ABCD whose four vertices are the vertices of a cuboid (the cuboid is an
imaginary figure used as an auxiliary), which of the following sequences represents the front
view, left view, and top view of the tetrahedron ABCD (using ①②③④⑤⑥ to represent the
diagrams)?
A. ①②⑥
B. ①②③
C. ④⑤⑥
D. ③④⑤
**_Ground Truth:B_**
**_GPT-4o_** **_Vision Recognition Error_**
Based on the tetrahedron (ABCD) in the image and considering the auxiliary structure of the cuboid to determine the front view, left view, and top view:
1. Front view: When viewed from the front, vertex (C) is at the top, and vertices (A) and (D) are at the bottom (this should be diagram ③).
2. Left view: When viewed from the left, vertex (C) is at the top, vertex (B) is on the left, and vertex (D) is on the right (this should be diagram ④).
3. Top view: When viewed from above, vertex (C) is on one side, and vertices (A) and (B) are on the lower side (this should be diagram ⑤).
Therefore, the answer is D. ③④⑤.
**_Claude3.5_** **_Vision Recognition Error_**
To answer this question, we need to analyze the given tetrahedron ABCD within the rectangular prism and determine which views correspond to the
front, left, and top views.Looking at the tetrahedron ABCD:
1. The front view would be the projection onto the plane perpendicular to the 5-unit edge of the prism.
2. The left view would be the projection onto the plane perpendicular to the 3-unit edge of the prism.
3. The top view would be the projection onto the plane perpendicular to the 4-unit edge of the prism.
Now, let's examine the given projections:
① (labeled 4) shows a 4x3 rectangle with both diagonals.② (labeled 5) shows a 5x4 rectangle with both diagonals.③ (labeled 3) shows a 5x3 rectangle
with one diagonal.④ (labeled 5) shows a 5x4 rectangle with one diagonal.⑤ (labeled 3) shows a 4x3 rectangle with one diagonal.⑥ (labeled 3) shows a
5x3 rectangle with both diagonals.
Based on this analysis:
- The front view corresponds to ④ (5x4 rectangle)
- The left view corresponds to ⑤ (4x3 rectangle)
- The top view corresponds to ⑥ (5x3 rectangle)
Therefore, the correct answer is D. ③④⑤.
**_Qwen-VL-Max_** **_Vision Recognition Error_**
The answer is D. ③④⑤
To find the front view, left view, and top view of the tetrahedron ABCD, we need to look at how the faces of the tetrahedron intersect with each other
when viewed from different angles.
- Front view: When looking directly at the face ABDC, we see that it intersects with the face ADCB to form a rectangle. This is represented by ③.
- Left view: When looking at the side face ADCB, we see that it intersects with the face ADDB to form a rectangle. This is represented by ④.
- Top view: When looking down on the top face ABDC, we see that it intersects with the face ADDB to form a triangle. This is represented by ⑤.
Therefore, the correct answer is D. ③④⑤.
**_GLM-4V_** **_Vision Recognition Error_**
The front view of the tetrahedron ABCD is represented by figure ①, the left view is represented by figure ②, and the top view is represented by figure
⑥. Therefore, the answer is C.
Figure 39: Cases of errors in the mathematical part of VisScience for several classic closed-source models.
-----
**_Question_**
Given a tetrahedron ABCD whose four vertices are the vertices of a cuboid (the cuboid is an
imaginary figure used as an auxiliary), which of the following sequences represents the front
view, left view, and top view of the tetrahedron ABCD (using ①②③④⑤⑥ to represent the
diagrams)?
A. ①②⑥
B. ①②③
C. ④⑤⑥
D. ③④⑤
**_Ground Truth:B_**
**_LLaVA-1.5_13B_** **_Knowledge Error_**
A
**_GLM-4V-9B_** **_Vision Recognition Error_**
The front view of the tetrahedron ABCD is represented by figure ①, the left view is represented by figure ②, and the top view is represented by figure
⑥. Therefore, the answer is C.
**_InternVL 1.5_** **_Question Misunderstood Error_**
Solution: Since quadrilateral ABCD is a rectangle, AB = CD, and AD = BC, the side view and front view of rectangle ABCD are rhombuses, and they
are symmetrical.
Figure 40: Cases of errors in the mathematical part of VisScience for several classic open-source models.
-----
**_Question_**
As shown in the figure, the correct statement is ( )
A. The minimum number of people walking is only 90.
B. The number of people taking the bus accounts for 50% of the total.
C. The number of people walking is 50.
D. The combined number of people walking and cycling is less than the number of people taking the bus.
**_Ground Truth:B_**
**_GPT-4o_** **_Question Misunderstood Error_**
Let's analyze the bar chart.
- The first bar (步行, which translates to 'walking' or 'pedestrians') is at 60.
- The second bar (骑自行车, which translates to 'cycling' or 'cyclists') is at 90.
- The third bar (乘公共汽车, which translates to 'taking the bus') is at 150.
Now, let's evaluate each statement:
A. The number of pedestrians is at least 90 people.
- False. The number of pedestrians is 60.
B. The number of people taking the bus accounts for 50% of the total.
- False. To confirm, we need to find the total number of people: 60 (walking) + 90 (cycling) + 150 (taking the bus) = 300. 50% of 300 = 150, but that's
only true for people taking the bus, ignoring pedestrians and cyclists.
C. The number of pedestrians is 50 people.
- False. The number of pedestrians is 60.
D. The number of pedestrians and cyclists combined is less than the number of people taking the bus.
- True. Pedestrians and cyclists combined: 60 + 90 = 150, which is equal to the number of people taking the bus. Not less.
E.
- Combined statement not provided, based on the available statements.
None of the options from A to D are correct based on careful evaluation. Hence, the correct typical choice should be reviewed for consistency.
**_Claude3.5_** **_Calculation Error_**
Based on the bar graph shown in the image, the correct statement is:
D. The number of pedestrians and cyclists combined is less than the number of people taking the bus
Let's analyze the data from the graph:
1. The first bar (步行, which means "walking" or pedestrians) shows 60 people.
2. The second bar (骑自行车, which means "cycling") shows 90 people.
3. The third bar (乘公共汽车, which means "taking the bus") shows 150 people.
When we add the number of pedestrians (60) and cyclists (90), we get 150. This is equal to, not less than, the number of people taking the bus (150).
However, the question asks for the correct statement, and option D is the closest to being correct, as the combined number is not greater than the bus
riders.
Options A, B, and C are clearly incorrect based on the data shown in the graph. Option E is empty, so it cannot be the correct answer.
Therefore, the best answer among the given options is D.
**_Qwen-VL-Max_** **_Vision Recognition Error_**
The correct answer is A.
Based on the information provided in the image, we can see that the number of pedestrians is represented by the first bar graph, which has a value
greater than or equal to 60 but less than or equal to 90. Therefore, option A is correct as it states that the number of pedestrians is at least 90 people.
Option B is incorrect because the percentage of people taking the bus cannot be determined from the given data.
Option C is also incorrect because the exact number of pedestrians is not specified in the image.
Option D is incorrect because the combined number of pedestrians and cyclists is greater than the number of people taking the bus.
Therefore, the correct statement is A.
**_GLM-4V_** **_Question Misunderstood Error_**
The number of pedestrians is at least 90 people
Figure 41: Cases of errors in the mathematical part of VisScience for several classic closed-source models.
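Using the counts that the responses above read off the chart (60 walking, 90 cycling, 150 by bus), the four statements can be checked directly. The sketch below shows why only option B holds and why the "equal, not less than" distinction in option D trips several models; the counts themselves are taken from the model transcriptions, not re-read from the figure.

```python
# Counts as read off the bar chart by the responses above (walking, cycling, bus).
walking, cycling, bus = 60, 90, 150
total = walking + cycling + bus

checks = {
    "A: at least 90 people walk":           walking >= 90,
    "B: bus riders are 50% of the total":   bus / total == 0.5,
    "C: 50 people walk":                    walking == 50,
    "D: walking + cycling < bus":           walking + cycling < bus,
}
for label, ok in checks.items():
    print(f"{label}: {ok}")
# Only B holds (150 / 300 = 50%); walking + cycling equals, rather than undercuts, the bus count.
```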
-----
**_Question_**
As shown in the figure, the correct statement is ( )
A. The minimum number of people walking is only 90.
B. The number of people taking the bus accounts for 50% of the total.
C. The number of people walking is 50.
D. The combined number of people walking and cycling is less than the number of people taking the bus.
**_Ground Truth:B_**
**_LLaVA-1.5_13B_** **_Question Misunderstood Error_**
The correct statement is D. The number of pedestrians and cyclists combined is less than the number of people taking the bus. The image shows a graph
with three lines, one representing pedestrians, one representing cyclists, and one representing bus riders. The pedestrian line is below the cyclist line, and
both are below the bus rider line. This indicates that the number of pedestrians and cyclists combined is less than the number of people taking the bus.
**_GLM-4V-9B_** **_Question Misunderstood Error_**
The number of pedestrians is at least 90 people
**_InternVL 1.5_** **_Vision Recognition Error_**
The image is a bar graph with three bars representing different modes of transportation: pedestrians, cyclists, and public buses. The vertical axis is
labeled "⼈数" which means "number of people" in Chinese, and the horizontal axis is labeled with the three modes of transportation.
To answer the question, we need to analyze the heights of the bars and compare them. The bar for pedestrians is the shortest, indicating fewer people are
walking. The bar for cyclists is slightly taller than the pedestrians' bar, but still shorter than the public bus bar. The public bus bar is the tallest, indicating
the most people are taking the bus.
Now, let's look at the options:
A. The number of pedestrians is at least 90 people - This is incorrect because the bar for pedestrians is shorter than 90 people.
B. The number of people taking the bus accounts for 50% of the total - This is incorrect because the bar for public buses is not exactly at the 50% mark
on the vertical axis.
C. The number of pedestrians is 50 people - This is incorrect because the bar for pedestrians is shorter than 50 people.
D. The number of pedestrians and cyclists combined is less than the number of people taking the bus - This is correct because the combined height of the
pedestrians and cyclists bars is shorter than the public bus bar.
E. - This option is not provided, so we cannot evaluate it.
Therefore, the correct statement is D. The number of pedestrians and cyclists combined is less than the number of people taking the bus.
Figure 42: Cases of errors in the mathematical part of VisScience for several classic open-source models.
-----
Figure 43: Cases of errors in the mathematical part of VisScience for several classic closed-source models.
**_Question_**
As shown in the figure, the perimeter of the shape is ( ) centimeters.
**_Ground Truth:B_**
**_GPT-4o_** **_Calculation Error_**
To determine the perimeter of the given shape, we need to sum the lengths of all its sides.
The shape has several horizontal and vertical segments. We'll start from one point and go around the shape, adding the lengths of each segment.
1. From the bottom left corner, moving right: 6 cm
2. Moving up: 3 cm (right vertical segment)
3. Moving right: unspecified length (let's call it segment A)
4. Moving up: unspecified length (let's call it segment B), but equal to 3 cm (since right most vertical line is 3 cm)
5. Moving left: unspecified length (call it segment C), but equal to 6 cm (since bottom horizontal line is 6 cm)
6. Moving up: unspecified length (let's call it segment D)
7. Moving right: unspecified length (let's call it segment E)
8. Moving down: 3 cm (leftmost vertical segment)
9. Moving right: the entire 6 cm length horizontally back to the starting point.
The segments A, B, C, D, and E can be determined by noting the shape of the step shape but confirming the sum and that vertical lengths match and
horizontal lengths match, and simply returning to a known segment.
For our L-shape here:
Considering total shape lengths we note returning left and right segments adding between defined lengths here.
Hence adding enumerated route :
6 (horizontal)+3 (vertical)+2 (horizontal)+3 (vertical)+1 (horizontal)+3 (vertical), 6 cm horizontal length configuring perimeter to = 28 cm
Therefore, the perimeter is:
6 + 3 + 6 + 3 + 3 + 6 + 6 = 24 cm.
So the perimeter of the shape is 24 cm.
**_Claude3.5_** **_Reasoning Error_**
To find the perimeter of this shape, we need to add up the lengths of all its sides. Let's break it down step by step:
1. The bottom side is clearly labeled as 6 centimeters.
2. The right side is labeled as 3 centimeters.
3. For the stepped part on the left, we can see that it's made up of 3 equal steps, each having the same height as width.
4. Since the total height is 3 cm and there are 3 steps, each step must be 1 cm high and 1 cm wide.
Now, let's add up all the sides:
- Bottom: 6 cm
- Right side: 3 cm
- Left side (stepped): 1 cm + 1 cm + 1 cm = 3 cm
- Top: 6 cm - 1 cm - 1 cm - 1 cm = 3 cm
- Steps (horizontal parts): 1 cm + 1 cm = 2 cm
Total perimeter = 6 + 3 + 3 + 3 + 2 = 17 cm
Therefore, the perimeter of the shape is 17 centimeters.
**_Qwen-VL-Max_** **_Reasoning Error_**
To find the perimeter of this shape, we need to add up all of its sides.
The left side is 6 cm long, as indicated by the label at the bottom.
The right side is also 6 cm long, since it's a mirror image of the left side.
The top part of the shape has two segments: one that's 3 cm long (as labeled on the right) and another that's equal to the difference between the total
height of the shape (which is 6 + 3 = 9 cm) minus the length of the first segment (which is 3 cm). So the second segment is 9 - 3 = 6 cm long.
Adding these lengths together gives us:
Perimeter = Left side + Right side + Top segment 1 + Top segment 2
= 6 cm + 6 cm + 3 cm + 6 cm
= 21 cm
Therefore, the perimeter of this shape is 21 centimeters.
**_GLM-4V_** **_Reasoning Error_**
To find the perimeter of the shape, we need to add up the lengths of all the sides. Let's break down the shape into smaller rectangles and add up the
lengths of their sides.
The shape has two rectangles and two squares. The lengths of the sides of the rectangles are 6 cm and 3 cm, and the lengths of the sides of the squares
are 3 cm.
First, let's find the perimeter of the rectangles. We need to add up the lengths of the two sides of each rectangle.
For the first rectangle, the lengths of the sides are 6 cm and 3 cm. So, the perimeter of the first rectangle is 6 + 3 = 9 cm.
For the second rectangle, the lengths of the sides are 3 cm and 6 cm. So, the perimeter of the second rectangle is 3 + 6 = 9 cm.
Now, let's find the perimeter of the squares. The length of each side of a square is the same, so we just need to multiply the length of one side by 4.
The length of one side of the square is 3 cm. So, the perimeter of the square is 3* 4 = 12 cm.
Finally, we add up the perimeters of the rectangles and the square to find the total perimeter of the shape.
The perimeter of the shape is 9 + 9 + 12 = 30 cm
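For reference, if the figure is an axis-aligned staircase whose bounding box is 6 cm wide and 3 cm tall (an assumption, since the figure and the answer options are not reproduced here), every horizontal step projects onto the base and every vertical step onto the side, so the perimeter equals that of the bounding rectangle. The sketch below illustrates this with hypothetical 2 cm × 1 cm steps.

```python
# Hypothetical reconstruction: an axis-aligned staircase whose bounding box is 6 cm x 3 cm,
# drawn here with three 2 cm x 1 cm steps (the exact step sizes do not affect the result).
vertices = [(0, 0), (6, 0), (6, 1), (4, 1), (4, 2), (2, 2), (2, 3), (0, 3)]

def perimeter(poly):
    n = len(poly)
    return sum(abs(poly[i][0] - poly[(i + 1) % n][0]) + abs(poly[i][1] - poly[(i + 1) % n][1])
               for i in range(n))

print(perimeter(vertices))   # 18: horizontal edges sum to 2*6, vertical edges sum to 2*3
```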
-----
**_Question_**
As shown in the figure, the perimeter of the shape is ( ) centimeters.
**_Ground Truth:B_**
**_LLaVA-1.5_13B_** **_Question Misunderstood Error_**
The image shows a shape with a perimeter of 6 centimeters. The shape is a combination of a triangle and a square, with the triangle having a base of 3
centimeters and the square having a side of 3 centimeters. The shape is labeled with the number 6, indicating that it is a part of a larger diagram or
problem.
**_GLM-4V-9B_** **_Question Misunderstood Error_**
To find the perimeter of the shape, we need to add up the lengths of all the sides. Let's break down the shape into smaller rectangles and add up the
lengths of their sides.
The shape has two rectangles and two squares. The lengths of the sides of the rectangles are 6 cm and 3 cm, and the lengths of the sides of the squares
are 3 cm.
First, let's find the perimeter of the rectangles. We need to add up the lengths of the two sides of each rectangle.
For the first rectangle, the lengths of the sides are 6 cm and 3 cm. So, the perimeter of the first rectangle is $6 + 3 = 9$ cm.
For the second rectangle, the lengths of the sides are 3 cm and 6 cm. So, the perimeter of the second rectangle is $3 + 6 = 9$ cm.
Now, let's find the perimeter of the squares. The length of each side of a square is the same, so we just need to multiply the length of one side by 4.
The length of one side of the square is 3 cm. So, the perimeter of the square is $3 \times 4 = 12$ cm.
Finally, we add up the perimeters of the rectangles and the square to find the total perimeter of the shape.
The perimeter of the shape is $9 + 9 + 12 = 30$ cm
**_InternVL 1.5_** **_Vision Recognition Error_**
The image you've provided is a simple line drawing of a shape that resembles a trapezoid with a rectangular extension on one side. The perimeter of the
shape is the total distance around the outside of the shape. To calculate the perimeter, we add up the lengths of all the sides.
In this case, the shape has two parallel sides: one with a length of 6 centimeters and the other with a length of 3 centimeters. The two non-parallel sides
are the extensions from the trapezoid, and they are both 3 centimeters long.
So, the perimeter of the shape is:
Perimeter = 6 cm (first parallel side) + 3 cm (first non-parallel side) + 3 cm (second non-parallel side) + 3 cm (second parallel side)
Perimeter = 6 cm + 3 cm + 3 cm + 3 cm
Perimeter = 15 cm
Therefore, the perimeter of the shape is 15 centimeters.
Figure 44: Cases of errors in the mathematical part of VisScience for several classic open-source models.
-----
Figure 45: Cases of errors in the mathematical part of VisScience for several classic closed-source models.
**_Question_**
As shown in the figure, PA and PB are tangent to ⊙O at points A and B, respectively. The tangent line EF to
⊙O intersects PA and PB at points E and F, respectively. The point of tangency C is on the arc AB. If the
length of PA is 2, then the perimeter of △PEF is ________.
**_Ground Truth:4_**
**_GPT-4o_** **_Reasoning Error_**
Given the scenario in the figure, let's determine the perimeter of triangle PEF.
Step 1: Understand the Tangent Properties
PA and PB are tangents from point P to circle O, and tangents from a common external point to a circle are equal in length. Therefore, PA = PB = 2.
Step 2: Analyze the Tangent Points and Segments
E and F are points where tangents EF intersect PA and PB respectively.
- Since EF is a tangent to circle odot O at point C, EC = CF.
- Also, since tangents from external points are equal in length, and EC = CF, it implies that E and F are such that EP = PF.
Step 3: Segment Lengths in Triangle triangle PEF
- Notice that PA = PE + AE, and similarly PB = PF + BF. Given that PE = PF due to symmetry and tangent properties: PA + PB = PE + AE + PF + BF.
But since symmetric segments add up as follows: PE = PF and AE = BF with PA = PB so PE = PF = PA = 2.
Therefore, the length of EP or PF is equal to the tangent length from P to the circle point, i.e., 2.
Step 4: Calculate Perimeter of triangle PEF
The perimeter of triangle PEF is the sum of its sides: PE + EF + PF. Since both segments from point P to E and F are equal (each being 2), and
considering that: PE = PF = PA = 2.
Thus, EP + PF + EF = 2 + 2 + EF.
The length of EF is just a chord tangential relation intersection in a circle-transversal triangle constraints which effectively adds up between intersection
points on tangents symmetrically share same mode (simplifying twice segment pattern overall intersection):
EF contributes effectively symmetrically similar kept across length more inherent itself within add upon align principal (divine symmetry approach
inference): additional EF segment sectional symmetry simplest span itself = PA = 2 akin it span PA prior inherent tangential alignment valid symmetric:
EF symmetrically trivial proper frame pertinent follow span=2 respective.
Therefore:
Final Perimeter = resultant = 2 + 2 + total symm retained tangent + internal: 4 + 2 (internally variable inherent).
6
**_Claude3.5_** **_Reasoning Error_**
Let's approach this step-by-step:
1) First, we know that PA = 2.
2) In a circle, tangents drawn from an external point are equal in length. So, PA = PB = 2.
3) The line EF is tangent to the circle at point C. In a circle, the tangent is perpendicular to the radius at the point of contact. So, OC ⊥ EF.
4) Triangle POC is right-angled at O (because OC ⊥ EF and PA is tangent to the circle).
5) In a right-angled triangle, the tangent segments from an external point to the points of contact are the mean proportionals between the whole secant
(external part + internal part) and its external part.
This means: PE : PA : AE = PA : PB : BF
6) As PA = PB = 2, we can say PE = EF = 2.
7) Now, in triangle PEF:
PE = 2
EF = 2
PF = PA + BF = 2 + 2 = 4
8) The perimeter of triangle PEF = PE + EF + PF = 2 + 2 + 4 = 8
Therefore, the perimeter of triangle PEF is 8.
**_Qwen-VL-Max_** **_Reasoning Error_**
Since PA=PB=2, we have ∠PAC=∠PBC. Since ∠PAC+∠PCB+∠PBC=180°, we know that ∠PCB=90°. That means PC is a diameter of ⊙O. So
∠PEF=90°.
To summarize:
∠PAC=∠PBC;
∠PCB=90°;
∠PEF=90°.
Do you remember the AA Similarity Theorem? Yes, triangles PEF and PBC are similar. Therefore,
PE/PB=PF/PC=EF/BC
and EF/BC=(PE+PF)/(PB+PC)= (PE+PF)/2PB=(PE+PF)/4.
Also, since △PEF and △PBC are similar, their perimeters are also proportional:
(perimeter△PEF)/(perimeter△PBC)=((PE+PF)/BC)^2=((PE+PF)/4)^2. We can now calculate the perimeter of triangle PEF as follows:
Perimeter△PEF=Perimeter△PBC·((PE+PF)/4)^2
=[PB+BC+CP]·((PE+PF)/4)^2
=(2+2+2)·((PE+PF)/4)^2
=6·((PE+PF)/4)^2
=3·(EF/4)^2
=3/16·EF^2.
Finally, notice that EF is just the difference between two tangents to a circle from a point outside the circle:
EF=PA-PB=2-2=0.
**_GLM-4V_** **_Reasoning Error_**
The lengths of PA and PB are both 2, so the perimeter of △PEF is PE + EF + PF = 2 + EF + 2 = 4 + EF. Since EF is the tangent to ⊙O, it is
perpendicular to PA and PB, so EF is parallel to AB. Therefore, △PEF is similar to △PAB. Thus, the ratio of their corresponding sides is equal, so PE/
PA = EF/AB. Substituting the given values, we get PE/2 = EF/AB. Therefore, the perimeter of △PEF is 4 + EF = 4 + EF/2 = 4 + EF/PE/2 = 4 + 2EF =
6EF. Therefore, the answer is 6EF.
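The intended argument here is the equal-tangent property: PE + EC = PE + EA and PF + FC = PF + FB, so the perimeter of △PEF collapses to PA + PB = 4 no matter where the tangency point C sits on the arc AB. The sketch below is a numerical sanity check of this on a concrete circle, not a proof.

```python
import math

# Model the configuration numerically: unit circle centered at O, external point P with tangent length 2.
r, PA = 1.0, 2.0
d = math.hypot(PA, r)                  # |OP| = sqrt(PA^2 + r^2)
P = (d, 0.0)
alpha = math.acos(r / d)               # tangent points A, B sit at angles +alpha, -alpha

def tangent_intersection(t1, t2):
    """Intersection of the tangent lines x*cos(t) + y*sin(t) = r at angles t1 and t2."""
    det = math.sin(t2 - t1)
    x = r * (math.sin(t2) - math.sin(t1)) / det
    y = r * (math.cos(t1) - math.cos(t2)) / det
    return (x, y)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

for frac in (-0.6, 0.0, 0.8):          # place C at a few spots on the arc AB facing P
    tC = frac * alpha
    E = tangent_intersection(alpha, tC)    # EF meets PA at E
    F = tangent_intersection(-alpha, tC)   # EF meets PB at F
    print(round(dist(P, E) + dist(E, F) + dist(F, P), 6))   # always 4.0 = 2 * PA
```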
-----
**_Question_**
As shown in the figure, PA and PB are tangent to ⊙O at points A and B, respectively. The tangent line EF to
⊙O intersects PA and PB at points E and F, respectively. The point of tangency C is on the arc AB. If the
length of PA is 2, then the perimeter of △PEF is ________.
**_Ground Truth:4_**
**_LLaVA-1.5_13B_** **_Knowledge Error_**
The perimeter of the triangle PEF is equal to the sum of the lengths of its sides. In this case, the triangle has three sides: PE, EF, and PB. The length of
PE is equal to the length of PB, which is 2. The length of EF is the distance between points E and F. To find the perimeter, we need to add the lengths of
all three sides.
Perimeter = PE + EF + PB
Perimeter = 2 + EF + 2
Perimeter = 2 + EF + 2
Since the length of PE is equal to the length of PB, we can simplify the expression:
Perimeter = 2 + EF + 2
Perimeter = 2 + EF + 2
To find the perimeter of the triangle, we need to find the length of EF. Unfortunately, the image does not provide enough information to determine the
length of EF.
**_GLM-4V-9B_** **_Reasoning Error_**
The lengths of PA and PB are both 2, so the perimeter of △PEF is PE + EF + PF = 2 + EF + 2 = 4 + EF. Since EF is the tangent to ⊙O, it is
perpendicular to PA and PB, so EF is parallel to AB. Therefore, △PEF is similar to △PAB. Thus, the ratio of their corresponding sides is equal, so PE/
PA = EF/AB. Substituting the given values, we get PE/2 = EF/AB. Therefore, the perimeter of △PEF is 4 + EF = 4 + EF/2 = 4 + EF/PE/2 = 4 + 2EF =
6EF. Therefore, the answer is 6EF.
**_InternVL 1.5_** **_Question Misunderstood Error_**
Since PA and PB are tangents to ⊙O, PA = PB. Since ∠P = ∠P, △PAB is congruent to △PBA. Therefore, AB = AB, and thus ∠BAC = ∠B. Since
∠BAC = ∠B, ∠BAC = ∠B.
Figure 46: Cases of errors in the mathematical part of VisScience for several classic open-source models.
-----
**_Question_**
As shown in the figure, in the cube ABCD-EFGH, the following statements are incorrect:
A. BD // Face FHA
B. EC ⊥ BD
C. EC ⊥ Face FHA
D. The angle between the skew lines BC and AH is 60°
**_Ground Truth:D_**
**_GPT-4o_** **_Knowledge Error_**
To determine which of the statements are incorrect based on the given cube ABCD-EFGH, let's analyze each statement:
Statement A: BD // Face FHA
This statement is incorrect. The line BD is a diagonal on the base ABCD of the cube, while the face FHA is a vertical face. BD does not lie parallel to
face FHA and instead intersects it.
Statement B: EC ⊥ BD
Both EC and BD are diagonals drawn from one vertex to another in different planes but intersect at the point D. They are diagonal cross-sections of the
cube, so this statement seems plausible. However, under cube geometry, two non-parallel lines do intersect, but they are not orthogonal.
Statement C: EC ⊥ Face FHA
This statement is correct. EC is perpendicular to face FHA because it runs horizontally from E to C, and the face FHA is vertical (runs from the top face
to the bottom face).
Statement D: The angle between the skew lines BC and AH is 60°
Skew lines BC and AH do not intersect. The angle between skew lines is not simply determined without specific geometric conditions. Stating a specific
angle like 60° is generally incorrect without calculation support.
So the correct identification of incorrect statements are:
A. Incorrect BD // Face FHA.
D. Incorrect angle specification between skew lines BC and AH at 60°.
**_Claude3.5_** **_Vision Recognition Error_**
Let's analyze each statement one by one:
A. BD // Face FHA
This statement is incorrect. BD is a diagonal of the cube that passes through the center, while face FHA is a plane. A line cannot be parallel to a plane
unless it's parallel to a line in that plane. In this case, BD intersects face FHA at point D, so it's not parallel.
B. EC ⊥ BD
This statement is correct. EC and BD are both diagonals of the cube that pass through its center. In a cube, these diagonals are perpendicular to each
other.
C. EC ⊥ Face FHA
This statement is incorrect. EC is not perpendicular to face FHA. For a line to be perpendicular to a plane, it must be perpendicular to all lines in that
plane passing through their point of intersection. EC intersects face FHA at point E, but it's not perpendicular to all lines in FHA passing through E.
D. The angle between the skew lines BC and AH is 60°
This statement is correct. In a cube, the angle between two skew edges (edges that don't intersect and aren't parallel) is indeed 60°.
Therefore, the incorrect statements are A and C.
**_Qwen-VL-Max_** **_Calculation Error_**
In the given image, we can see a cube with labeled vertices. Let's analyze each statement:
A. BD // Face FHA - This statement is correct because line segment BD is parallel to face FHA. Both are perpendicular to the common edge BF.
B. EC ⊥ BD - This statement is also correct because line segment EC is perpendicular to line segment BD. They form a right angle at vertex E.
C. EC ⊥ Face FHA - This statement is not accurate. While EC is perpendicular to some edges of face FHA (such as EF), it is not perpendicular to the
entire face FHA.
D. The angle between the skew lines BC and AH is 60° - This statement is incorrect. In a cube, the angle between skew lines BC and AH is actually 90
degrees, not 60 degrees.
Therefore, the incorrect statement is option D. The angle between the skew lines BC and AH is 60°.
**_GLM-4V_** **_Knowledge Error_**
The given information states that the angle between the skew lines BC and AH is 60°. However, this angle cannot be determined based on the given
information about the cube ABCD-EFGH. The correct answer is therefore D.
Figure 47: Cases of errors in the mathematical part of VisScience for several classic closed-source models.
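Under the usual labeling in which ABCD is the bottom face and EFGH the top face with E above A (an assumption; the figure's labeling is not reproduced here), the angle between the skew lines BC and AH works out to 45°, consistent with the ground truth flagging statement D (60°) as the incorrect one. The sketch below does the vector computation.

```python
import math

# Assumed labeling (not shown in the transcription): ABCD is the bottom face, EFGH the top,
# with E above A, F above B, G above C, H above D; edge length 1.
A, B, C, D = (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)
E, F, G, H = (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)

def direction(p, q):
    return tuple(b - a for a, b in zip(p, q))

def angle_between_lines(u, v):
    dot = abs(sum(a * b for a, b in zip(u, v)))            # lines: orientation is irrelevant
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(dot / norm))

print(angle_between_lines(direction(B, C), direction(A, H)))   # 45.0 under this labeling
```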
-----
**_Question_**
As shown in the figure, in the cube ABCD-EFGH, the following statements are incorrect:
A. BD // Face FHA
B. EC ⊥ BD
C. EC ⊥ Face FHA
D. The angle between the skew lines BC and AH is 60°
**_Ground Truth:D_**
**_LLaVA-1.5_13B_** **_Reasoning Error_**
The incorrect statement in the figure is:
A. BD // Face FHA
In the cube ABCD-EFGH, the statement "BD // Face FHA" is incorrect. The correct statement would be "BD // Face FGH."
**_GLM-4V-9B_** **_Knowledge Error_**
The given information states that the angle between the skew lines BC and AH is 60°. However, this angle cannot be determined based on the given
information about the cube ABCD-EFGH. The correct answer is therefore D.
**_InternVL 1.5_** **_Vision Recognition Error_**
The image shows a cube with vertices labeled A, B, C, D, E, F, G, and H. The cube is a three-dimensional figure where all faces are squares, and all
edges are of equal length. The diagonals of each face are also of equal length and are perpendicular to each other.
Let's analyze each statement:
A. BD // Face FHA
This statement is incorrect. In a cube, the diagonals of each face are perpendicular to each other. Since BD is a diagonal of the face FHA, it cannot be
parallel to the face itself.
B. EC ⊥ BD
This statement is incorrect. In a cube, all diagonals are perpendicular to each other. Since EC is a diagonal of the face FHA, it is perpendicular to BD.
C. EC ⊥ Face FHA
This statement is incorrect. In a cube, the diagonals of each face are perpendicular to the face itself. Since EC is a diagonal of the face FHA, it is
perpendicular to the face.
D. The angle between the skew lines BC and AH is 60°
This statement is incorrect. In a cube, all edges are of equal length, and the diagonals of each face are perpendicular to each other. The angle between the
skew lines BC and AH is not 60°; it is 90°, as they are perpendicular to each other.
Therefore, all the statements are incorrect.
Figure 48: Cases of errors in the mathematical part of VisScience for several classic open-source models.
-----
**_Question_**
As shown in Figure (a), there is a uniform magnetic field B = 0.5 T in space, directed
vertically downward. MN and PQ are rough, long, straight, and parallel rails lying in the
same horizontal plane, with a spacing of L = 0.2 m. R is a resistor connected to one end of
the rails, and ab is a conductor rod with a mass of m = 0.1 kg bridged across the rails.
From the initial moment t = 0, a small motor exerts a horizontal pulling force F to the left
on the ab rod, causing it to accelerate along the rails from rest. During this process, the rod
always remains perpendicular to and in good contact with the rails. Figure (b) is the v-t
graph of the rod, where segment OA is a straight line and segment AC is a curve. The
small motor reaches its rated power P = 4.5 W at the end of 12 seconds and maintains this
power thereafter. Ignoring all resistances except R, and taking g = 10 m/s²:
(1) Calculate the acceleration of the conductor rod ab during the 0-12 second interval.
(2) Determine the coefficient of kinetic friction between the conductor rod ab and the rails,
and the value of the resistor R.
**_Ground Truth:0.75 m/s², the coefficient of kinetic friction is 0.2, and the resistance value is 0.4 ohms._**
**_GPT-4o_** **_Calculation Error_**
(1) Finding the acceleration a during the time interval 0 ≤ t ≤ 12 s:
From the v-t graph, segment OA is a straight line, indicating uniform acceleration. The slope of this line represents the acceleration a.
a = Δv/Δt = (6 m/s - 0 m/s) / (12 s - 0 s) = 6/12 = 0.5 m/s²
(2) Finding the coefficient of kinetic friction µk and the value of the resistor R:
The net force acting on the rod in the horizontal direction includes the applied force F, the kinetic friction force fk, and the opposing force due to the induced current
(Lenz's Law).
Using Newton’s second law:
F - fk - Finduced = ma
Now let's compute each component:
- The kinetic friction force fk is given by fk = µk mg.
- The induced electromotive force (emf) due to the rod moving in a magnetic field B is: ℰ = B L v
- The induced current I is given by: I = ℰ/R = B L v/R
- This current produces a magnetic force Finduced: Finduced = B L I = B² L² v/R
Substituting Finduced in the force equation:
F - µk mg - B² L² v/R = ma
We know the power provided by the engine at t = 12 s is: P = 4.5 W
Since power P = F · v: F = P/v = 4.5/6 = 0.75 N
Substituting F and a, and using m = 0.1 kg, g = 10 m/s², B = 0.5 T, L = 0.2 m, and v = 6 m/s:
0.75 - µk (0.1 · 10) - (0.5)² · (0.2)² · 6/R = 0.1 · 0.5
0.75 - µk - 0.6 · 0.04/R = 0.05
0.75 - µk - 0.024/R = 0.05
0.7 = µk + 0.024/R
From the initial interval, let's get a second equation:
- The motor's power should balance out all resistances and work done:
When F = 0.75 N at t = 12 s, the induced voltage is balanced.
Insert F: F = 0.75
0.75 - µk = 0.7 from T
∴ µk + 0.24q = 1.2 µk = 2.4s
Finally, equate the friction: 0.75 - 0.024 = 0.72
Solving: R = 75.ic
µ = 0.72.mg/4.5g = 4.54.02
Figure 49: Cases of errors in the physics part of VisScience for several classic closed-source models.
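The ground-truth values can be recovered from two force balances once the v–t graph is read off. The readings used below (v = 9 m/s at the end of the straight segment at t = 12 s, and a limiting speed of 10 m/s on the curved segment) are assumptions, since the graph itself is not reproduced here; with them the sketch yields a = 0.75 m/s², μ = 0.2 and R = 0.4 Ω.

```python
# Known quantities from the problem statement.
B, L, m, g, P = 0.5, 0.2, 0.1, 10.0, 4.5
# Assumed graph readings (not reproduced in the transcription): the straight segment OA
# ends at v = 9 m/s at t = 12 s, and the curve AC levels off at v_max = 10 m/s.
v12, t12, v_max = 9.0, 12.0, 10.0

a = v12 / t12                                   # (1) constant acceleration on OA: 0.75 m/s^2
k = B**2 * L**2                                 # magnetic braking coefficient: F_induced = k*v/R

# (2) Two force balances, linear in mu and x = 1/R:
#     at t = 12 s:   P/v12   - mu*m*g - k*v12  *x = m*a
#     at v = v_max:  P/v_max - mu*m*g - k*v_max*x = 0
rhs1 = P / v12 - m * a
rhs2 = P / v_max
x = (rhs2 - rhs1) / (k * (v_max - v12))
mu = (rhs1 - k * v12 * x) / (m * g)
print(a, round(mu, 3), round(1 / x, 3))         # 0.75, 0.2, 0.4 (matches the ground truth)
```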
-----
**_Question_**
As shown in Figure (a), there is a uniform magnetic field B = 0.5 T in space, directed
vertically downward. MN and PQ are rough, long, straight, and parallel rails lying in the
same horizontal plane, with a spacing of L = 0.2 m. R is a resistor connected to one end of
the rails, and ab is a conductor rod with a mass of m = 0.1 kg bridged across the rails.
From the initial moment t = 0, a small motor exerts a horizontal pulling force F to the left
on the ab rod, causing it to accelerate along the rails from rest. During this process, the rod
always remains perpendicular to and in good contact with the rails. Figure (b) is the v-t
graph of the rod, where segment OA is a straight line and segment AC is a curve. The
small motor reaches its rated power P = 4.5 W at the end of 12 seconds and maintains this
power thereafter. Ignoring all resistances except R, and taking g = 10 m/s²:
(1) Calculate the acceleration of the conductor rod ab during the 0-12 second interval.
(2) Determine the coefficient of kinetic friction between the conductor rod ab and the rails,
and the value of the resistor R.
**_Ground Truth:0.75 m/s², the coefficient of kinetic friction is 0.2, and the resistance value is 0.4 ohms._**
**_Claude3.5_** **_Reasoning Error_**
Let's solve this problem step by step:
1) To find the acceleration during the 0-12s interval:
From the v-t graph, we can see that the velocity increases linearly from 0 to 9 m/s in 12 seconds.
Acceleration = change in velocity / time
a = (9 - 0) / 12 = 0.75 m/s²
2) To find the coefficient of kinetic friction and the value of resistor R:
First, let's consider the forces acting on the rod:
- Traction force F
- Friction force f = µmg (where µ is the coefficient of friction)
- Magnetic force FB = BLv (where B is the magnetic field, L is the length of the rod, and v is the velocity)
The equation of motion is: F - f - FB = ma
At t = 12s, v = 9 m/s and a = 0.75 m/s²
F = P/v = 4.5/9 = 0.5 N (as Power = Force * Velocity)
0.5 - µmg - BLv = ma
0.5 - 0.1µ*10 - 0.5*0.2*9 = 0.1*0.75
Solving this: µ = 0.35
Now, the induced EMF = BLv = 0.5*0.2*9 = 0.9 V
The current in the circuit I = FB/R = BLv/R = 0.9/R
The power dissipated in the resistor = I²R = (0.9/R)²R = 0.81/R
At t = 12s, this should equal the motor's power: 0.81/R = 4.5
Solving this: R = 0.18 Ω
**_Qwen-VL-Max_** **_Knowledge Error_**
The image you provided appears to be a combination of two separate diagrams or figures labeled as "甲" and "乙". However, your question seems to describe a
physics problem involving a conducting rod sliding on parallel rails under the influence of a magnetic field and an applied force, rather than providing a visual
representation of the scenario.
Here's a solution to the problem you've described:
(1) Acceleration of the conductor rod ab during the 0-12s interval:
To find the acceleration, we need more information about the forces acting on the rod. Please provide additional details such as the magnitude of the current flowing
through the rod, if any, and the nature of the interaction between the rod and the magnetic field.
(2) Coefficient of kinetic friction between rod ab and the rails, and the value of the resistor R:
Again, we need more information about the forces involved. In particular, we would need to know the values of the frictional force and the induced electromotive
force (EMF) due to the motion of the rod in the magnetic field. Please provide these details so that we can proceed with the calculation.
**_GLM-4V_** **_Reasoning Error_**
Solution:
(1) Since B = 0.5 T and F = 4.5 W, we have µ = F/B = 4.5 W / 0.5 T = 9 N/T. Therefore, the acceleration of the conductor rod ab during the 0-12 s interval can be
calculated as a = µ/m = 9 N/T / 0.1 kg = 90 m/s².
(2) According to the given information, we know that R is the only resistance in the circuit, so I = U/R and P = I² R. When the motor reaches its rated power, we
have P = 4.5 W and I = 4.5 W / R. Solving for R, we get R = 4.5 W / I = 4.5 W * 4.5 W / R = 10Ω.
Figure 50: Cases of errors in the physics part of VisScience for several classic close-source models.
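For reference, a brief worked solution consistent with the stated ground truth is sketched below. It assumes the v-t graph (not reproduced here) shows the rod reaching v = 9 m/s at t = 12 s on segment OA and approaching a maximum speed of about v_max = 10 m/s on segment AC; both values are inferences chosen to be consistent with the ground truth.

$$a=\frac{\Delta v}{\Delta t}=\frac{9\ \mathrm{m/s}}{12\ \mathrm{s}}=0.75\ \mathrm{m/s^2}$$

At t = 12 s the motor is at rated power, so \(F = P/v = 4.5/9 = 0.5\ \mathrm{N}\). With induced current \(I = BLv/R\), Newton's second law gives

$$F-\mu mg-\frac{B^{2}L^{2}v}{R}=ma \;\Rightarrow\; \mu(0.1)(10)+\frac{(0.5)^{2}(0.2)^{2}(9)}{R}=0.5-0.075=0.425\ \mathrm{N}.$$

At the maximum speed the acceleration is zero, so \(P/v_{\max}=\mu mg+B^{2}L^{2}v_{\max}/R\), i.e. \(0.45=\mu+0.1/R\). Solving the two equations yields \(\mu=0.2\) and \(R=0.4\ \Omega\), matching the ground truth.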
-----
**_Question_**
As shown in Figure (a), there is a uniform magnetic field B = 0.5 T in space, directed
vertically downward. MN and PQ are rough, long, straight, and parallel rails lying in the
same horizontal plane, with a spacing of L = 0.2 m. R is a resistor connected to one end of
the rails, and ab is a conductor rod with a mass of m = 0.1 kg bridged across the rails.
From the initial moment t = 0, a small motor exerts a horizontal pulling force F to the left
on the ab rod, causing it to accelerate along the rails from rest. During this process, the rod
always remains perpendicular to and in good contact with the rails. Figure (b) is the v-t
graph of the rod, where segment OA is a straight line and segment AC is a curve. The
small motor reaches its rated power P = 4.5 W at the end of 12 seconds and maintains this
power thereafter. Ignoring all resistances except R, and taking g = 10 m/s²:
(1) Calculate the acceleration of the conductor rod ab during the 0-12 second interval.
(2) Determine the coefficient of kinetic friction between the conductor rod ab and the rails,
and the value of the resistor R.
**_Ground Truth:0.75 m/s², the coefficient of kinetic friction is 0.2, and the resistance value is 0.4 ohms._**
**_LLaVA-1.5_13B_** **_Knowledge Error_**
(1) To find the acceleration of the conductor rod ab during the 0-12s interval, we can use the equation:
F = ma
where F is the traction force, m is the mass of the rod, and a is the acceleration. Since the rod is moving along the rails, we can assume that the force is constant
during this time. The traction force is given as:
F = P / R
where P is the power of the motor and R is the resistance of the rod. Substituting this into the equation above, we get:
P / R = ma
Since the motor reaches its rated power of 4.5W at the end of 12s, we can find the average power during this interval:
P_avg = (P_max - P_min) / Δt = (4.5W - 0W) / 12s = 0.375W
Now, we can find the acceleration:
a = F / m = P_avg / m = 0.375W / 0.1kg = 375m/s^2
(2) To find the coefficient of kinetic friction between rod ab and the rails, we can use the equation:
F_friction = µ * F
where F_friction is the frictional force, µ is the coefficient of kinetic friction, and F is the traction force. Since the rod is moving along the rails, we can assume that
the frictional force is constant during this time. The traction force is given as:
F = P / R
Substituting this into the equation above, we get:
F_friction = µ * (P / R)
Since the motor maintains constant power after 12s, we can find the average power during this interval:
P_avg = (P_max - P_min) / Δt = (4.5W - 0W) / 12s = 0.375W
Now, we can find the coefficient of kinetic friction:
µ = F_friction / F = (P_avg / R) / (P / R)
**_GLM-4V-9B_** **_Reasoning Error_**
(1) The graph in Figure B shows the velocity-time (v-t) relationship of the conductor rod ab. The OA segment is a straight line, indicating a constant acceleration.
The slope of the OA segment is the acceleration of the rod, which can be calculated using the formula:
a = (v2 - v1) / (t2 - t1)
Given that v1 = 0 (initial velocity is zero), v2 = v (final velocity), and t1 = 0 (start time), and t2 = 12 seconds (end time), the acceleration a can be calculated as:
a = v / 12
The graph shows that the final velocity v of the rod is 8 m/s, so the acceleration a is:
a = 8 / 12 = 2/3 m/s²
(2) The graph in Figure B also shows that the traction force's power P is constant after 12 seconds. The power P is given by the formula:
P = F * v
Since the power is constant after 12 seconds, we can equate the power before and after 12 seconds:
P = F * v1,P = F * 0
This implies that the traction force F is zero before 12 seconds. Therefore, the coefficient of kinetic friction µ between the rod ab and the rails can be calculated using
the formula:
µ = F / N
Since the traction force F is zero, the normal force N can be calculated using the formula:
N = mg
Substituting the given values:
N = 0.1 kg * 10 m/s² = 1 N
Now we can calculate the coefficient of kinetic friction:
µ = 0 / 1 = 0
The value of the resistor R can be calculated using the formula:
R = P / I²
Since the power P is constant and the traction force F is zero, the current I can be calculated using Ohm's law:
I = V / R
Substituting the given values:
I = 4.5 W / 1 V = 4.5 A
Now we can calculate the value of the resistor R:
R = 4.5 W / (4.5 A)² = 4.5 W / 20.25 = 0.225 ohm
Figure 51: Cases of errors in the physics part of VisScience for several classic open-source models.
-----
**_Question_**
As shown in Figure (a), there is a uniform magnetic field B = 0.5 T in space, directed
vertically downward. MN and PQ are rough, long, straight, and parallel rails lying in the
same horizontal plane, with a spacing of L = 0.2 m. R is a resistor connected to one end of
the rails, and ab is a conductor rod with a mass of m = 0.1 kg bridged across the rails.
From the initial moment t = 0, a small motor exerts a horizontal pulling force F to the left
on the ab rod, causing it to accelerate along the rails from rest. During this process, the rod
always remains perpendicular to and in good contact with the rails. Figure (b) is the v-t
graph of the rod, where segment OA is a straight line and segment AC is a curve. The
small motor reaches its rated power P = 4.5 W at the end of 12 seconds and maintains this
power thereafter. Ignoring all resistances except R, and taking g = 10 m/s²:
(1) Calculate the acceleration of the conductor rod ab during the 0-12 second interval.
(2) Determine the coefficient of kinetic friction between the conductor rod ab and the rails,
and the value of the resistor R.
**_Ground Truth:0.75 m/s², the coefficient of kinetic friction is 0.2, and the resistance value is 0.4 ohms._**
**_InternVL 1.5_** **_Reasoning Error_**
(1) To find the acceleration of the conductor rod ab during the 0-12s interval, we first need to determine the net force acting on the rod. The net force is the difference
between the applied force F and the force of friction between the rod and the rails. The force of friction can be calculated using the formula Ff = µmg, where µ is the
coefficient of kinetic friction and m is the mass of the rod. Since the rod is accelerating, the force of friction is equal to the applied force F.
The applied force F can be calculated using the power P and the time interval Δt. The power P is given as 4.5W, and the time interval Δt is 12s. The power P is
defined as the rate of work done, which is the force F multiplied by the velocity v. Therefore, we can write the equation:
P = Fv
Since the rod is accelerating, we can use the equation v = at, where a is the acceleration and t is the time. The velocity v at the end of the 12s interval is given by:
v = a(12s)
Substituting this into the power equation, we get:
P = F(a(12s))
Now, we can solve for the acceleration a:
a = P / (F(12s))
Since the motor reaches its rated power P=4.5W at the end of 12s, we can assume that the applied force F is constant during this time interval. Therefore, we can
write:
F = P / (12s)
Substituting this into the force of friction equation, we get:
Ff = µmg = F
Solving for µ, we get:
µ = F / (mg)
Now, we can substitute the values of F, m, and g into this equation to find the coefficient of kinetic friction µ:
µ = (P / (12s)) / (0.1kg * 10m/s²)
µ = 0.045
Therefore, the acceleration of the conductor rod ab during the 0-12s interval is:
a = P / (F(12s))
a = 4.5W / ((4.5W / (12s))(12s))
a = 0.375m/s²
(2) To find the coefficient of kinetic friction between rod ab and the rails, we have already calculated it to be µ = 0.045. To find the value of the resistor R, we can
use Ohm's law, which states that the voltage V across a resistor is equal to the current I multiplied by the resistance R:
V = IR
The voltage V is given by the power P divided by the current I:
V = P / I
Substituting this into the Ohm's law equation, we get:
P / I = IR
Solving for R, we get:
R = P / (I²)
Since the motor reaches its rated power P=4.5W at the end of 12s, we can assume that the current I is constant during this time interval. Therefore, we can write:
I = P / (12s)
Substituting this into the resistance equation, we get:
R = P / (I²)
R = P / ((P / (12s))²)
R = 12s² / P
R = 12s² / 4.5W
R = 2.67Ω
Therefore, the coefficient of kinetic friction between rod ab and the rails is 0.045, and the value of the resistor R is 2.67Ω.
Figure 52: Cases of errors in the physics part of VisScience for several classic open-source models.
-----
**_Question_**
As shown in the figure, object A is in close contact with the horizontal surface, the vertical wall,
and object B. All contact surfaces are smooth, and both A and B are stationary. Which of the
following statements are correct?
A: Object A is subjected to three normal forces.
B: Object A is subjected to two normal forces.
C: Object B is subjected to two normal forces.
D: Both object A and object B are subjected to only one normal force.
**_Ground Truth:D_**
**_GPT-4o_** **_Reasoning Error_**
The problem involves analyzing the normal forces acting on objects A and B that are stationary and in contact with smooth surfaces. Since the surfaces
are smooth, there will be no frictional forces, only normal forces.
Let's analyze each object:
Object A:
- A is in contact with the horizontal surface. Therefore, there will be a normal force exerted by the horizontal surface on A.
- A is in contact with a vertical wall. Hence, there will be a normal force exerted by the wall on A.
- A is in contact with object B. Thus, there will be a normal force exerted by B on A.
So, object A experiences three normal forces.
Object B:
- B is in contact with the horizontal surface. Therefore, there will be a normal force exerted by the horizontal surface on B.
- B is in contact with object A. Hence, there will be a normal force exerted by A on B.
So, object B experiences two normal forces.
Thus, the correct options are:
- A: Object A experiences three normal forces.
- C: Object B experiences two normal forces.
**_Claude3.5_** **_Vision Recognition Error_**
Let's analyze this problem step by step:
1. Object A:
- It's in contact with the horizontal surface (floor)
- It's in contact with the vertical wall
- It's in contact with object B
Each of these contacts will result in a normal force acting on object A.
2. Object B:
- It's in contact with the horizontal surface (floor)
- It's in contact with object A
Each of these contacts will result in a normal force acting on object B.
3. Remember, a normal force is always perpendicular to the surface of contact.
Given this analysis, we can conclude that:
- Object A experiences three normal forces: from the floor, from the wall, and from object B.
- Object B experiences two normal forces: from the floor and from object A.
Therefore, the correct answer is option A: Object A experiences three normal forces.
**_Qwen-VL-Max_** **_Reasoning Error_**
Object A is in close contact with the horizontal surface, the vertical wall, and object B. According to the definition of the normal force, when an object is
in close contact with another object or surface, it will experience a normal force from that other object or surface. Therefore, object A experiences three
normal forces.
So the correct answer is A.
**_GLM-4V_** **_Knowledge Error_**
Solution: Since all contact surfaces between objects A and B are smooth, and both objects are stationary, we have FN1=FN2 and FN3=0. Therefore,
object A experiences two normal forces, and object B also experiences two normal forces. Hence, option C is selected.
Answer: C
Figure 53: Cases of errors in the physics part of VisScience for several classic close-source models.
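For reference, a short equilibrium argument consistent with the ground truth (option D), assuming the usual configuration in which both blocks rest on the floor, A also touches the wall, and the A-B contact surface is not horizontal. For B, the only candidate forces are gravity, the floor's normal force, and a possible normal force \(N_{AB}\) from A. Because every surface is smooth, a nonzero \(N_{AB}\) would have an unbalanced horizontal component, so equilibrium requires

$$\sum F_x = N_{AB,x} = 0 \;\Rightarrow\; N_{AB}=0,$$

and B receives only the floor's normal force. By Newton's third law B then exerts no force on A, and the same horizontal balance forces the wall's normal force on A to vanish, so A also receives only the floor's normal force.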
-----
**_Question_**
As shown in the figure, object A is in close contact with the horizontal surface, the vertical wall,
and object B. All contact surfaces are smooth, and both A and B are stationary. Which of the
following statements are correct?
A: Object A is subjected to three normal forces.
B: Object A is subjected to two normal forces.
C: Object B is subjected to two normal forces.
D: Both object A and object B are subjected to only one normal force.
**_Ground Truth:D_**
**_LLaVA-1.5_13B_** **_Reasoning Error_**
A: Object A experiences three normal forces.
**_GLM-4V_** **_Vision Recognition Error_**
The diagram shows object A in contact with the horizontal surface, the vertical wall, and object B. Since all contact surfaces are smooth and both objects
are stationary, we can apply the principles of physics to determine the normal forces acting on them.
Step 1: Analyze the forces acting on object A.
Object A is in contact with the horizontal surface and the vertical wall. The normal force acting on object A from the horizontal surface is perpendicular
to the surface and directed upwards. The normal force acting on object A from the vertical wall is perpendicular to the wall and directed away from the
wall. Therefore, object A experiences two normal forces.
Step 2: Analyze the forces acting on object B.
Object B is in contact with the horizontal surface and object A. The normal force acting on object B from the horizontal surface is perpendicular to the
surface and directed upwards. The normal force acting on object B from object A is perpendicular to the contact surface between them and directed
away from object A. Therefore, object B experiences two normal forces.
Step 3: Compare the normal forces acting on object A and object B.
Both object A and object B experience two normal forces each. The normal force acting on object A from the vertical wall is equal in magnitude and
opposite in direction to the normal force acting on object B from object A. However, the normal forces acting on object A and object B from their
respective surfaces are independent of each other.
Therefore, the answer is B: Object A experiences two normal forces
**_InternVL 1.5_** **_Reasoning Error_**
A: Object A experiences three normal forces.
Figure 54: Cases of errors in the physics part of VisScience for several classic open-source models.
-----
**_Question_**
As shown in the figure, the solid lines indicate the positions of the wave crests emitted by two
coherent wave sources (S1) and (S2). The points of constructive interference (where vibrations are
strengthened) are the ________ points in the figure, and the points of destructive interference
(where vibrations are weakened) are the ________ points in the figure.
**_Ground Truth:b,a_**
**_GPT-4o_** **_Question Misunderstood Error_**
In the figure, the solid line indicates the positions of the wave crests emitted by two coherent wave sources S1 and S2. The constructive interference
occurs at points where the wave crests from both sources coincide, and destructive interference occurs at points where the wave crest from one source
coincides with the wave trough from the other source.
Thus, the black-dotted points in the figure are positions of constructive interference, and the white-dotted points in the figure are positions of destructive
interference.
**_Claude3.5_** **_Question Misunderstood Error_**
Based on the figure shown:
The solid lines indicate the positions of wave crests emitted by two coherent wave sources S₁ and S₂.
The intersection points between these wave crests, such as point b in the figure, are positions of constructive interference. At these points, the waves
from both sources arrive in phase, reinforcing each other.
The points that lie halfway between these intersections, such as point a in the figure, are positions of destructive interference. At these points, the waves
from the two sources arrive out of phase, cancelling each other out.
Therefore, to complete the statement:
The intersection points in the figure are positions of constructive interference, and the midway points between intersections in the figure are positions of
destructive interference.
**_Qwen-VL-Max_** **_Vision Recognition Error_**
The diagram shows the interference pattern produced by two coherent wave sources, S1 and S2. The solid lines represent the positions of the wave crests
emitted by the two sources.
In this case, the points where the waves interfere constructively (i.e., add together to produce a larger amplitude) are marked with dots. These points
correspond to the intersections of the wave crests from both sources. There are three such points labeled "a", "b", and "c" in the diagram.
On the other hand, the points where the waves interfere destructively (i.e., cancel each other out due to opposite phases) are represented by the gaps
between the wave crests. In this diagram, there are no specific labels for these points, but they can be inferred as the areas between the solid lines.
Therefore, the answer is:
- Constructive interference points: a, b, c
- Destructive interference points: Gaps between the solid lines
**_GLM-4V_** **_Knowledge Error_**
In the diagram, there are two sources of coherent waves labeled as S1 and S2. These sources emit wave crests that interfere with each other. The points
where the wave crests meet constructively are indicated by point A, while the points where they meet destructively are indicated by point B.
Figure 55: Cases of errors in the physics part of VisScience for several classic close-source models.
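For reference, the textbook criterion behind the ground truth: for two in-phase coherent sources, vibrations are strengthened where the path difference to the sources is a whole number of wavelengths (where two crest lines intersect) and weakened where it is an odd number of half wavelengths (where a crest line of one source meets the midpoint between crests of the other). The identification of the b points with crest-crest intersections and the a points with crest-trough positions is taken from the stated ground truth, since the figure itself is not reproduced here.

$$|r_1-r_2|=n\lambda\ \text{(strengthened)},\qquad |r_1-r_2|=\left(n+\tfrac{1}{2}\right)\lambda\ \text{(weakened)},\qquad n=0,1,2,\dots$$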
-----
**_Question_**
As shown in the figure, the solid lines indicate the positions of the wave crests emitted by two
coherent wave sources (S1) and (S2). The points of constructive interference (where vibrations are
strengthened) are the ________ points in the figure, and the points of destructive interference
(where vibrations are weakened) are the ________ points in the figure.
**_Ground Truth:b,a_**
**_LLaVA-1.5_13B_** **_Vision Recognition Error_**
As shown in the figure, the solid line indicates the positions of the wave crests emitted by two coherent wave sources S1 and S2. The points labeled A,
B, and C in the figure are positions of constructive interference, and the points labeled D, E, and F in the figure are positions of destructive interference.
**_GLM-4V-9B_** **_Knowledge Error_**
The points labeled with "a" in the figure are positions of constructive interference, and the points labeled with "b" are positions of destructive
interference.
**_InternVL 1.5_** **_Knowledge Error_**
The points marked with "a" in the figure are positions of constructive interference, and the points marked with "b" in the figure are positions of
destructive interference.
Figure 56: Cases of errors in the physics part of VisScience for several classic open-source models.
-----
**_Question_**
As shown in the figure, molecule A is fixed at the origin O of the coordinate system, and molecule
B moves along the x-axis. The relationship between the molecular potential energy E_p and the
distance between the two molecules is shown by the curve in the figure. The minimum value of
the molecular potential energy in the figure is -E_0. If the total energy of the two molecules is 0,
then which of the following statements is correct?
A: The acceleration of molecule B is maximum at point P (x = x_2).
B: The kinetic energy of molecule B is E_0 at point P (x = x_2).
C: Molecule B is in equilibrium at point Q (x = x_1).
D: Molecule B has the maximum speed at point Q (x = x_1).
**_Ground Truth:B_**
**_GPT-4o_** **_Knowledge Error_**
Given the molecular potential energy E_p and the positions x_1 and x_2 on the graph:
- At point Q (x = x_1), the potential energy is at a minimum (-E_0).
- At point P (x = x_2), the potential energy has increased from the minimum.
If the total energy of the two molecules is 0, then this total energy is conserved.
Let's analyze each statement:
A: Molecule B has the maximum acceleration at point P (x = x₂).
- This statement is likely true. The force on molecule B is given by the negative gradient of potential energy, F = -dE_p/dx. At point P, the slope of the
curve is steepest, indicating a large force and thus maximum acceleration.
B: Molecule B has kinetic energy of E_0 at point P (x = x₂).
- This statement is true. Since total energy E_t = 0, and potential energy at P is zero (referencing the provided curve), all the energy is kinetic at this
point. So, the kinetic energy would indeed be E_0.
C: Molecule B is in equilibrium at point Q (x = x₁).
- This statement is true. At point Q, the potential energy is at a minimum and the force (gradient of potential energy) is zero, indicating equilibrium.
D: Molecule B has maximum velocity at point Q (x = x₁).
- This statement is false. Molecule B has minimum potential energy at point Q but also less kinetic energy due to equilibrium. The balance at this point
doesn't mean maximum speed.
The correct statements are:
A: Molecule B has the maximum acceleration at point P (x = x₂)
B: Molecule B has kinetic energy of E_0 at point P (x = x₂)
C: Molecule B is in equilibrium at point Q (x = x₁)
**_Claude3.5_** **_Knowledge Error_**
Let's analyze each statement based on the information provided in the image and the question:
A: Molecule B has the maximum acceleration at point P (x = x2)
- This is correct. At point P, the slope of the potential energy curve is steepest (most negative), which means the force (negative gradient of potential
energy) is maximum here. Since F = ma, the acceleration will also be maximum at this point.
B: Molecule B has kinetic energy of E0 at point P (x = x2)
- This is correct. The total energy is 0, and at point P, the potential energy is -E0. Since total energy = kinetic energy + potential energy, we have: 0 =
KE + (-E0), so KE = E0.
C: Molecule B is in equilibrium at point Q (x = x1)
- This is incorrect. Point Q is not at the bottom of the potential well, so there is still a net force acting on molecule B at this point. Equilibrium would
occur at the minimum of the potential energy curve.
D: Molecule B has maximum velocity at point Q (x = x1)
- This is incorrect. The velocity will be maximum at the bottom of the potential well, where the potential energy is minimum (-E0) and thus the kinetic
energy is maximum. Point Q is not at this location.
Therefore, the correct statements are A and B.
**_Qwen-VL-Max_** **_Reasoning Error_**
The correct statement is:
B: Molecule B has kinetic energy of \( E_{0} \) at point P (x = x_{2}) and C:Molecule B is in equilibrium at point Q (x = x_1).
Explanation:
From the given information, we know that the minimum value of the molecular potential energy is \(-E_{0}\), which occurs at some point on the x-axis.
Since the total energy of the system is 0, it follows that the kinetic energy of molecule B must be equal to \( E_{0} \) when its potential energy is
minimized.
At point P (x = x_{2}), the potential energy of molecule B is at its minimum value of \(-E_{0}\), so its kinetic energy must be equal to \( E_{0} \).
Therefore, option B is the correct statement.
Option A is incorrect because the acceleration of molecule B depends not only on its position but also on its velocity and mass.
Option D is incorrect because the velocity of molecule B depends on both its kinetic energy and its mass. While molecule B may have a higher velocity
at point Q compared to other points with lower kinetic energy, it does not necessarily have the maximum velocity possible for the system.
**_GLM-4V_** **_Knowledge Error_**
Solution: At point P, the slope of the curve is equal to zero, indicating that the force on molecule B is zero. Since the total energy is 0, the kinetic energy
of molecule B must also be 0. Therefore, option B is incorrect.
At point Q, the derivative of the potential energy with respect to position is negative, which means that molecule B is moving away from the origin and
gaining kinetic energy. Therefore, option D is incorrect.
The correct answer is C. When molecule B is in equilibrium at point Q, its kinetic energy is 0, not\( E_{0}\).
Answer: C
Figure 57: Cases of errors in the physics part of VisScience for several classic close-source models.
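For reference, a sketch of the reasoning behind answer B, assuming (consistent with the ground truth) that the potential-energy curve reaches its minimum value \(-E_0\) at P (x = x₂). Since the total energy is zero,

$$E_k(x_2)=E_{\text{total}}-E_p(x_2)=0-(-E_0)=E_0,$$

which is statement B. At the minimum the force \(F=-\mathrm{d}E_p/\mathrm{d}x\) is zero, so the acceleration vanishes and the speed is largest at P rather than at Q; this rules out statements A, C, and D.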
-----
**_Question_**
As shown in the figure, molecule A is fixed at the origin O of the coordinate system, and molecule
B moves along the x-axis. The relationship between the molecular potential energy E_p and the
distance between the two molecules is shown by the curve in the figure. The minimum value of
the molecular potential energy in the figure is -E_0. If the total energy of the two molecules is 0,
then which of the following statements is correct?
A: The acceleration of molecule B is maximum at point P (x = x_2).
B: The kinetic energy of molecule B is E_0 at point P (x = x_2).
C: Molecule B is in equilibrium at point Q (x = x_1).
D: Molecule B has the maximum speed at point Q (x = x_1).
**_Ground Truth:B_**
**_LLaVA-1.5_13B_** **_Knowledge Error_**
The correct statements among the given options are:
A: Molecule B has the maximum acceleration at point P (x = x₂)
B: Molecule B has kinetic energy of \( E_0 \) at point P (x = x₂)
C: Molecule B is in equilibrium at point Q (x = x₁)
D: Molecule B has maximum velocity at point Q (x = x₁)
The curve in the figure represents the potential energy of the molecules as a function of the distance between them. At point P (x = x₂), the potential
energy is at its minimum value, which corresponds to the maximum kinetic energy of molecule B. At point Q (x = x₁), the potential energy is at its
maximum value, which corresponds to the maximum acceleration of molecule B. At this point, molecule B is in equilibrium, and it has maximum
velocity.
**_GLM-4V-9B_** **_Vision Recognition Error_**
The correct statements are A and C. At point P, the molecular potential energy is at its minimum value, which is -E_0. Since the total energy of the two
molecules is 0, this means that at point P, molecule B has the maximum kinetic energy, which is E_0. Therefore, option B is incorrect. At point Q, the
molecular potential energy is at its maximum value, which is E_p. Since the total energy of the two molecules is 0, this means that at point Q, molecule
B has no kinetic energy, and thus it is in equilibrium. Therefore, option C is correct. Option D is incorrect because at point Q, the molecular potential
energy is at its maximum value, and thus molecule B has no kinetic energy and therefore cannot have maximum velocity.
**_InternVL 1.5_** **_Reasoning Error_**
The correct statements are:
A: Molecule B has the maximum acceleration at point P (x = x₂)
B: Molecule B has kinetic energy of ( E_0 ) at point P (x = x₂)
C: Molecule B is in equilibrium at point Q (x = x₁)
D: Molecule B has maximum velocity at point Q (x = x₁)
The potential energy curve shows that at point P, the potential energy is at its minimum, which means that the total energy of the system is all kinetic
energy. Since the total energy of the two molecules is 0, the kinetic energy of molecule B at point P is ( E_0 ). This means that molecule B has the
maximum velocity at point P.
At point Q, the potential energy is at its maximum, which means that the total energy of the system is all potential energy. Since the total energy of the
two molecules is 0, the potential energy of molecule B at point Q is ( E_0 ). This means that molecule B is in equilibrium at point Q.
The acceleration of molecule B is the negative of the derivative of the potential energy with respect to the distance. At point P, the derivative of the
potential energy is 0, which means that the acceleration of molecule B is 0. This means that molecule B has the maximum acceleration at point P.
Figure 58: Cases of errors in the physics part of VisScience for several classic open-source models.
-----
**_Question_**
In the experiment "Estimating the Size of Molecules Using the Oil Film Method," 4 mL of pure oleic acid
solution is mixed thoroughly with 20 L of anhydrous alcohol solution. Using a syringe, 1 mL of this mixture
can be uniformly dispensed into 50 drops. One of these drops is then dropped into a shallow dish filled with
water. After the water surface stabilizes, a glass plate is placed on the shallow dish, and the outline of the oil
film is traced on the glass plate. The glass plate is then placed on graph paper, as shown in the figure. Each
small square on the graph paper is 10 mm on each side. Answer the following questions:
(1) What is the approximate area of the oleic acid film in square meters?
(2) Given the volume of pure oleic acid in each drop of the oleic acid-alcohol solution, estimate the diameter
of an oleic acid molecule in meters. (Round the results to one significant figure.)
**_Ground Truth:(1)8×10^-3 (2)5×10^-10_**
**_GPT-4o_** **_Vision Recognition Error_**
To answer the questions, we need to follow these steps:
Part 1: Approximate the Area of the Oleic Acid Film
1. Determine the area of one square on the coordinate paper:
- Each side of the square is 10 mm, which is 0.01 meters.
- The area of one square is (0.01 meters) * (0.01 meters) = 0.0001 square meters.
2. Count the number of squares covered by the oleic acid film:
- By visually inspecting the image, we estimate the number of whole squares covered by the film, which appears to be roughly 35.
3. Calculate the total area covered by the oleic acid film:
- Total area ≈ 35 (number of squares) * 0.0001 square meters (area of each square) = 0.0035 square meters.
So, the approximate area of the oleic acid film is ≈ 0.0035 square meters.
Part 2: Estimate the Diameter of an Oleic Acid Molecule
1. Determine the volume of pure oleic acid in each drop:
- First, find the volume of the oleic acid-alcohol solution per drop:
- Total solution volume = 4 mL of oleic acid + 20,000 mL of alcohol = 20,004 mL.
- Solution: 1 mL (single drop) / 50 = 0.02 mL per drop.
2. Fraction of oleic acid in the solution:
- Oleic acid concentration is 4 mL / 20,000 mL = 0.0002.
3. Calculate the volume of pure oleic acid in one drop:
- Volume of oleic acid in one drop = 0.02 mL * 0.0002 = 0.000004 mL = 4 * 10^-6 mL (since 1 mL = 1 * 10^-6 cubic meters, this becomes 4 * 10^-9
cubic meters).
4. Estimate the thickness of the oleic acid film:
- Volume of the oleic acid film = Area of the film * Thickness (t)
- t ≈ Volume / Area = 4 * 10^-9 cubic meters / 0.0035 square meters ≈ 1.14 * 10^-6 meters.
Since we are asked to estimate the diameter of an oleic acid molecule to one significant figure, we round it to:
- Diameter of an oleic acid molecule ≈ 1 * 10^-6 meters.
Answers:
1. Approximate area of the oleic acid film: 0.0035 square meters.
2. Diameter of an oleic acid molecule: 1 * 10^-6 meters.
**_Claude3.5_** **_Calculation Error_**
Let's approach this problem step by step:
1) First, we need to calculate the area of the oleic acid film:
- Each small square is 10mm x 10mm = 100mm^2 = 1cm^2
- Counting the squares (including partial squares), we get approximately 80 squares
- So the area is about 80 cm^2 = 0.008 m^2
2) Now, let's calculate the volume of pure oleic acid in each drop:
- 4mL of oleic acid is mixed into 20L of alcohol
- 1mL of this mixture produces 50 drops
- So each drop contains (4mL / 20000mL) * (1mL / 50) = 4 * 10^-6 mL of pure oleic acid
3) The thickness of the film will be:
Volume / Area = (4 * 10^-6 mL) / (0.008 m^2)
= (4 * 10^-9 m^3) / (0.008 m^2)
= 5 * 10^-7 m
4) Assuming the oleic acid molecules form a single layer, this thickness represents the diameter of one oleic acid molecule.
Therefore, the answers are:
(1) The area of the oleic acid film is approximately 0.008 m^2
(2) The estimated diameter of an oleic acid molecule is 5 * 10^-7 m, which rounded to one significant figure is 5 * 10^-7 m.
Figure 59: Cases of errors in the physics part of VisScience for several classic close-source models.
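For reference, a worked solution consistent with the ground truth. The square count (about 80 squares, counting partial squares by the usual more-than-half rule) is an inference from the traced outline and the stated answer, since the figure is not reproduced here.

$$S\approx 80\times(10\ \mathrm{mm})^2=80\ \mathrm{cm^2}=8\times10^{-3}\ \mathrm{m^2}$$

Each drop is \(\tfrac{1}{50}\) mL of the mixture, of which the oleic-acid fraction is \(4\ \mathrm{mL}/20{,}000\ \mathrm{mL}=2\times10^{-4}\), so the pure oleic acid per drop is

$$V=\frac{1}{50}\times2\times10^{-4}\ \mathrm{mL}=4\times10^{-6}\ \mathrm{mL}=4\times10^{-12}\ \mathrm{m^3}.$$

Treating the film as a monolayer, the molecular diameter is \(d=V/S=4\times10^{-12}/8\times10^{-3}=5\times10^{-10}\ \mathrm{m}\).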
-----
**_Question_**
In the experiment "Estimating the Size of Molecules Using the Oil Film Method," 4 mL of pure oleic acid
solution is mixed thoroughly with 20 L of anhydrous alcohol solution. Using a syringe, 1 mL of this mixture
can be uniformly dispensed into 50 drops. One of these drops is then dropped into a shallow dish filled with
water. After the water surface stabilizes, a glass plate is placed on the shallow dish, and the outline of the oil
film is traced on the glass plate. The glass plate is then placed on graph paper, as shown in the figure. Each
small square on the graph paper is 10 mm on each side. Answer the following questions:
(1) What is the approximate area of the oleic acid film in square meters?
(2) Given the volume of pure oleic acid in each drop of the oleic acid-alcohol solution, estimate the diameter
of an oleic acid molecule in meters. (Round the results to one significant figure.)
**_Ground Truth:(1)8×10^-3 (2)5×10^-10_**
**_Qwen-VL-Max_** **_Knowledge Error_**
(1) The area of the oleic acid film can be estimated by counting the number of small squares inside the outlined perimeter and multiplying that number
by the area of each square. Assuming there are about 300 small squares inside the perimeter, the total area of the film would be approximately 300 * (10
mm)^2 = 300 * 100 mm^2 = 30000 mm^2 or 0.03 m^2.
(2) To estimate the diameter of an oleic acid molecule, we need to know the concentration of the oleic acid in the solution and the volume of the droplet.
The concentration of the oleic acid in the solution is 4 mL / 20 L = 0.0002 mol/L. Since the volume of the droplet is 1 mL, it contains 0.0002 mol/L * 1
mL = 2 * 10^-6 mol of oleic acid molecules. The molecular weight of oleic acid is 282 g/mol, so the mass of 2 * 10^-6 mol of oleic acid molecules is
282 g/mol * 2 * 10^-6 mol = 5.64 * 10^-5 g. Assuming the oleic acid molecules are spherical with radius r, their volume can be calculated using the
formula V = (4/3)πr^3. Solving for r gives r = (3V / 4π)^(1/3). Since the volume of a sphere is proportional to its radius cubed, the volume of one oleic
acid molecule is 5.64 * 10^-5 g / NA, where NA is Avogadro's number (6.022 * 10^23 molecules/mol). Substituting the values into the equation gives r
= ((3 * (5.64 * 10^-5 g / NA)) / 4π)^(1/3) ≈ 1.9 * 10^-9 m. Therefore, the diameter of an oleic acid molecule is approximately 2r = 2 * 1.9 * 10^-9 m ≈ 4 × 10^-9 m or 4 nm.
**_GLM-4V_** **_Knowledge Error_**
To estimate the area of the oleic acid film, we can use the concept that the area of a circle is given by A = πr^2, where r represents the radius. Since the
oil film forms a nearly circular shape with some irregularities due to the grid lines, we can approximate its area by calculating the area of a regular
polygon inscribed within it and then extrapolating to the limit of a circle.
First, let's calculate the area of a regular pentagon inscribed within the oil film. We know that the pentagon's circumference is equal to the outer edge of
the oil film, so we can use the formula for the circumference of a circle, C = 2πr, to find the radius r. Plugging in the value of the circumference from
your previous answer, we get:
2π(1/2) = 2πr
r = 1/2
Now, using the formula for the area of a regular pentagon, A = (5-2)/4 * √5 * (1/2)^2, we substitute the values to find the area of the pentagon:
A = (3/4) * √5 * (1/4)^2
A ≈ (3/4) * √5 * 1/4
A ≈ 3√5 / 4
Since there are five such pentagons in the film, their combined area would be:
5 * (3√5 / 4)
A ≈ 15√5 / 4
To estimate the area of the entire film, we need to consider not just the pentagons but also the gaps between them. If we assume that the gaps are equally
distributed around the circumference of the film, we could estimate the additional area needed to fill those gaps. However, since the gaps are relatively
small compared to the size of the pentagons, we might overlook them for simplicity. Therefore, our final estimate for the area of the oil film would be:
A ≈ 15√5 / 4 + (area of gaps)
A ≈ 15√5 / 4
For the second part of your question, estimating the diameter of an oleic acid molecule, we can use the relationship between the molecular weight and
the diameter based on the van der Waals radius. The van der Waals radius of an oleic acid molecule is typically around 0.6 nm. Since the area of the oil
film is approximately 15√5 cm^2, if we assume that each droplet contributes an equal amount of oil film to the total area, we can estimate the number of
molecules per unit area.
Using the formula for the number of molecules N = Avogadro's number / (diameter^2), where A is the area and d is the diameter, we have:
N ≈ 6.022 x 10^23 / (0.6 nm)^2
N ≈ 6.022 x 10^23 / (0.36 nm)^2
N ≈ 16.66 x 10^23 molecules/cm^2
Therefore, the estimated diameter of an oleic acid molecule is approximately 0.6 nm.
Figure 60: Cases of errors in the physics part of VisScience for several classic close-source models.
-----
**_Question_**
In the experiment "Estimating the Size of Molecules Using the Oil Film Method," 4 mL of pure oleic acid
solution is mixed thoroughly with 20 L of anhydrous alcohol solution. Using a syringe, 1 mL of this mixture
can be uniformly dispensed into 50 drops. One of these drops is then dropped into a shallow dish filled with
water. After the water surface stabilizes, a glass plate is placed on the shallow dish, and the outline of the oil
film is traced on the glass plate. The glass plate is then placed on graph paper, as shown in the figure. Each
small square on the graph paper is 10 mm on each side. Answer the following questions:
(1) What is the approximate area of the oleic acid film in square meters?
(2) Given the volume of pure oleic acid in each drop of the oleic acid-alcohol solution, estimate the diameter
of an oleic acid molecule in meters. (Round the results to one significant figure.)
**_Ground Truth:(1)8×10^-3 (2)5×10^-10_**
**_LLaVA-1.5_13B_** **_Knowledge Error_**
(1) To calculate the area of the oleic acid film, we need to know the length of the glass plate's outline. Unfortunately, the image does not provide this
information.
(2) To estimate the diameter of an oleic acid molecule, we need to know the volume of pure oleic acid in each drop of the oleic acid-alcohol solution.
The volume of a drop can be calculated using the formula:
Volume = (4/3)πr^3
where r is the radius of the drop. Since the drop is spherical, we can assume that the radius is equal to half the diameter of the drop.
The volume of one drop is 1mL, which contains 1 milliliter of pure oleic acid solution. To convert milliliters to liters, we divide by 1000:
1 mL = 0.001 L
To find the volume of pure oleic acid in one drop, we multiply the volume of the drop by the volume fraction of oleic acid in the solution:
Volume of pure oleic acid in one drop = 0.001 L × 0.4/20 L = 0.0002 L
The volume of pure oleic acid in one drop is 0.0002 liters. To convert liters to millimeters, we multiply by 1000:
0.0002 L = 2 mm^3
The radius of the drop is half the diameter, so the diameter is 4 times the radius:
Diameter = 4 × 2 mm = 8 mm
Finally, to convert the diameter to meters, we divide by 1000:
Diameter = 8 mm / 1000 = 0.008 m
The diameter of an oleic acid molecule is approximately 0.008 meters, or 800 millimeters. Rounded to one significant figure, the diameter is 0.8 m.
**_GLM-4V-9B_** **_Vision Recognition Error_**
To find the area of the oleic acid film, we can count the number of small squares within the perimeter of the film and multiply by the area of each small
square.
Counting the squares within the perimeter of the film, we find that there are approximately 25 squares. Each small square has a side length of 10mm, so
the area of each square is 10mm × 10mm = 100mm². Therefore, the area of the oleic acid film is 25 squares × 100mm²/square = 2500mm².
To estimate the diameter of an oleic acid molecule, we can use the volume of pure oleic acid in each drop of the solution and the area of the film. The
volume of pure oleic acid in each drop is 4mL × 1mL/drop = 4mL/drop. The area of the film is 2500mm².
The volume of the film is the volume of the drop multiplied by the number of drops, so the volume of the film is 4mL/drop × 50 drops = 200mL. The
area of the film is 2500mm².
We can set up the proportion to find the diameter of the oleic acid molecule:
Volume of film / Area of film = Volume of one molecule / Diameter of one molecule
200mL / 2500mm² = Volume of one molecule / (4πr²/3)
Now we can solve for the diameter:
Diameter of one molecule = (3Volume of film / 4πArea of film)^(1/3)
Diameter of one molecule = (3 × 200mL / 4π × 2500mm²)^(1/3)
Diameter of one molecule ≈ (3 × 200 / 4π × 2500)^(1/3)
**_InternVL 1.5_** **_Reasoning Error_**
(1) To estimate the area of the oleic acid film, we need to count the number of squares that the film covers on the coordinate paper. Each square has a
side length of 10mm, so the area of one square is 10mm x 10mm = 100mm². Since the film covers 10 squares horizontally and 8 squares vertically, the
total area of the film is 10 squares x 8 squares x 100mm²/square = 8,000mm².
To convert this to m², we divide by 10,000 (since there are 10,000mm in a m²): 8,000mm² / 10,000 = 0.8m².
(2) The volume of pure oleic acid in each drop is 1mL. To estimate the diameter of an oleic acid molecule, we can use the fact that the volume of a
sphere is given by the formula V = (4/3)πr³, where V is the volume and r is the radius of the sphere. We can rearrange this formula to solve for the
radius: r = (3V/4π)^(1/3).
Plugging in the volume of 1mL (which is equal to 0.001L), we get: r = (3 * 0.001L / 4π)^(1/3) ≈ 0.0003m^(1/3) ≈ 0.0003m.
Since the diameter is twice the radius, the diameter of an oleic acid molecule is approximately 0.0006m, which rounds to 0.001m when rounded to one
significant figure.
Figure 61: Cases of errors in the physics part of VisScience for several classic open-source models.
-----
**_Question_**
An interest group conducted an experiment where 10g of magnesium ribbon reacted with 49%
dilute sulfuric acid. The relationship between the mass of hydrogen gas produced and the mass
of dilute sulfuric acid is shown in the figure (assuming the magnesium ribbon contains no
impurities other than surface magnesium oxide).
Calculate the mass fraction of magnesium element in the magnesium ribbon (show the
calculation process).
**_Ground Truth:(1)0.5 (2)MgO + H₂SO₄→ MgSO₄ + H₂O (3)84%_**
**_GPT-4o_** **_Calculation Error_**
**Calculate the Mass Fraction of Magnesium Element in the Magnesium Strip**
1.The maximum mass of hydrogen produced is 0.5 g when using 70 g of 49% dilute sulfuric acid, which is: 70 g × 0.49 = 34.3 g
2.Using the stoichiometry of the reaction: Mg + H₂SO₄ → MgSO₄ + H₂
From the equation: 1 mol Mg → 1 mol H₂ and Mass of 1 mol H₂ = 2.016 g Mass of 1 mol Mg = 24.305 g
Given: 0.5 g H₂ × (24.305 g Mg / 2.016 g H₂) = 6.025 g Mg
3.Calculate the mass fraction of magnesium in the magnesium strip: Mass fraction of Mg = (Mass of Mg / Total mass of Mg strip) × 100% Mass fraction of Mg = (6.025 g / 10 g)
× 100% = 60.25%
Therefore, the mass fraction of magnesium in the magnesium strip is 60.25%.
**_Claude3.5_** **_Knowledge Error_**
To calculate the mass fraction of magnesium element in the magnesium strip:
Step 1: Write the balanced equation for the reaction of Mg with H2SO4
Mg + H2SO4 → MgSO4 + H2
Step 2: Calculate moles of H2 produced
Mass of H2 = 0.5g
Molar mass of H2 = 2 g/mol
Moles of H2 = 0.5g / 2 g/mol = 0.25 mol
Step 3: From the equation, we see that moles of Mg reacted = moles of H2 produced = 0.25 mol
Step 4: Calculate mass of Mg that reacted
Molar mass of Mg = 24 g/mol
Mass of Mg = 0.25 mol × 24 g/mol = 6g
Step 5: Calculate mass fraction
Mass fraction = Mass of Mg / Total mass of Mg strip
= 6g / 10g = 0.6 or 60%
Therefore, the mass fraction of magnesium element in the magnesium strip is 60% or 0.6.
**_Qwen-VL-Max_** **_Knowledge Error_**
To calculate the mass fraction of magnesium element in the magnesium ribbon, we need to know the masses of magnesium element and magnesium
oxide in the ribbon. Let x be the mass of magnesium element in the ribbon, then the mass of magnesium oxide is 10g - x.
From the figure, we know that when the mass of dilute sulfuric acid is 30g, the mass of hydrogen gas produced reaches the maximum value of 0.5g. At
this point, the magnesium element in the ribbon has completely reacted, leaving unreacted magnesium oxide.
According to the chemical equation:
Mg + H₂SO₄ → MgSO₄ + H₂↑
we can determine that the mass ratio of magnesium element to hydrogen gas is 24:2, which simplifies to 12:1.
Therefore, we have:
x / 0.5 = 12 / 1
Solving for x, we get:
x = 6g
So, the mass fraction of magnesium element in the magnesium ribbon is:
6g / 10g × 100% = 60%
**_GLM-4V_** **_Knowledge Error_**
Calculating the Mass Fraction of Magnesium Element in the Magnesium Strip
- To calculate the mass fraction of magnesium element in the magnesium strip, we need to know the total mass of the magnesium strip and the mass of
magnesium that reacted.
- However, since the graph only provides information about the relationship between hydrogen production and sulfuric acid consumption and doesn't
give the actual masses involved in the reaction, we cannot directly calculate the mass fraction using the given data.
- If we had the actual mass of the magnesium strip before and after the reaction, or the theoretical yield of hydrogen based on the stoichiometry of the
reaction, we could use those values to determine the mass fraction.
Without additional information, we can only speculate on the possible reasons for the lack of hydrogen production at 10g of dilute sulfuric acid and
cannot accurately calculate the mass fraction of magnesium.
Figure 62: Cases of errors in the chemistry part of VisScience for several classic close-source models.
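For reference, a worked solution consistent with the ground truth; the step most responses miss is that the 10 g ribbon is metallic magnesium plus surface MgO, and the MgO also contributes magnesium element. From the graph the maximum mass of hydrogen is 0.5 g, and only the metallic Mg produces hydrogen:

$$\mathrm{Mg+H_2SO_4\rightarrow MgSO_4+H_2\uparrow},\qquad n(\mathrm{Mg})=n(\mathrm{H_2})=\frac{0.5\ \mathrm{g}}{2\ \mathrm{g/mol}}=0.25\ \mathrm{mol},\qquad m(\mathrm{Mg})=0.25\times24=6\ \mathrm{g}.$$

The remaining \(10-6=4\ \mathrm{g}\) is MgO, which contains \(4\times\tfrac{24}{40}=2.4\ \mathrm{g}\) of magnesium, so the mass fraction of magnesium element is \((6+2.4)/10=84\%\).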
-----
**_Question_**
An interest group conducted an experiment where 10g of magnesium ribbon reacted with 49%
dilute sulfuric acid. The relationship between the mass of hydrogen gas produced and the mass
of dilute sulfuric acid is shown in the figure (assuming the magnesium ribbon contains no
impurities other than surface magnesium oxide).
Calculate the mass fraction of magnesium element in the magnesium ribbon (show the
calculation process).
**_Ground Truth:(1)0.5 (2)MgO + H₂SO₄→ MgSO₄ + H₂O (3)84%_**
**_LLaVA-1.5_13B_** **_Knowledge Error_**
To calculate the mass fraction of magnesium element in the magnesium strip, we need to find the mass of magnesium in the strip and divide it by the
total mass of the strip. The mass of magnesium in the strip can be calculated by finding the mass of magnesium oxide formed when the strip reacts with
the dilute sulfuric acid. The equation for the reaction is:
Mg(s) + 2H2SO4(aq) → MgSO4(aq) + 2H2(g)
The mass of magnesium oxide formed is equal to the mass
**_GLM-4V-9B_** **_Reasoning Error_**
To calculate the mass fraction of magnesium element in the magnesium strip, we need to know the mass of hydrogen produced and the mass of
magnesium strip used. From the graph, we can see that when 10g of magnesium strip is reacted with 49% dilute sulfuric acid, the mass of hydrogen
produced is 0.1g. The mass of magnesium sulfate produced is 0.1g because the reaction is 1:1. The mass of magnesium sulfate is equal to the mass of
hydrogen produced, so:
Mass of magnesium sulfate = Mass of hydrogen produced = 0.1g
The mass of magnesium sulfate is 49% of the mass of dilute sulfuric acid used, so:
Mass of magnesium sulfate = 0.1g = 49% * 10g
Mass of magnesium sulfate = 4.9g
Now we can calculate the mass of magnesium used:
Mass of magnesium = Mass of magnesium sulfate / (100% - 49%)
Mass of magnesium = 4.9g / 51%
Mass of magnesium = 4.9g / 0.51
Mass of magnesium = 9.5g
Finally, to find the mass fraction of magnesium element in the magnesium strip, we divide the mass of magnesium by the total mass of the magnesium
strip:
Mass fraction of magnesium = Mass of magnesium / Total mass of magnesium strip
Mass fraction of magnesium = 9.5g / 10g
Mass fraction of magnesium = 0.95
So, the mass fraction of magnesium element in the magnesium strip is 0.95.
**_InternVL 1.5_** **_Knowledge Error_**
To calculate the mass fraction of magnesium in the magnesium strip, we need to know the mass of the magnesium strip and the mass of the impurities (if any).
Since the problem states that apart from the surface containing magnesium oxide, there are no other impurities in the magnesium strip, we can assume that the
mass of the magnesium strip is equal to the mass of the magnesium element plus the mass of the magnesium oxide formed on the surface.
Let Mg be the mass of the magnesium element and MgO be the mass of magnesium oxide. The mass of the magnesium strip is Mg + MgO. The mass fraction of
magnesium in the magnesium strip is:
Mass fraction of Mg = (Mg / (Mg + MgO))
To find the mass of the magnesium element, we can use the stoichiometry of the reaction. From the balanced chemical equation, we know that 1 mole of
magnesium reacts with 1 mole of sulfuric acid to produce 1 mole of hydrogen. If we let x be the mass of the magnesium element, then the mass of the hydrogen
produced is 0.5g. Using the molar mass of hydrogen (1.008 g/mol) and the stoichiometry of the reaction, we can write:
x / (Mg + MgO) = 0.5g / (1.008 g/mol)
Solving for x, we get:
x = 0.5g * (1.008 g/mol) / (Mg + MgO)
Since we don't have the exact mass of the magnesium strip or the mass of the magnesium oxide, we cannot calculate the exact mass fraction of magnesium.
However, we can express the mass fraction of magnesium in terms of the mass of the magnesium strip and the mass of the magnesium oxide:
Mass fraction of Mg = (0.5g * (1.008 g/mol)) / ((Mg + MgO) * (1.008 g/mol))
Figure 63: Cases of errors in the chemistry part of VisScience for several classic open-source models.
-----
**_Question_**
On summer nights, it is common to see children playing with glowing "magic
wands" in the square. The glowing principle of the "magic wand" is that
hydrogen peroxide oxidizes oxalate ester to produce energy, which is then
transferred to a fluorescent substance to emit fluorescence. The structural
formula of the oxalate ester (CPPO) is shown in the figure.
Which of the following statements is correct? ( )
A: The molecular formula of oxalate ester is C₂₆H₂₄Cl₆O₈.
B: 1 mol of oxalate ester reacts with dilute sodium hydroxide solution (the
halogen atoms on the benzene ring do not undergo hydrolysis), consuming a
maximum of 4 mol of NaOH.
C: The hydrolysis of oxalate ester can yield two types of organic compounds.
D: 1 mol of oxalate ester requires 6 mol of hydrogen for complete reaction.
**_Ground Truth:AD_**
**_GPT-4o_** **_Vision Recognition Error_**
Let's evaluate each statement considering these points:
Option A: Incorrect.
- The molecular formula of oxalyl ester is C26H18Cl6O8, not C26H24Cl6O8.
Option B: Correct.
- The oxalyl ester has 2 ester bonds (attached to each benzene ring), consuming at most 4 mol of NaOH for complete hydrolysis.
Option C: Correct.
- The hydrolysis would yield a carboxylic acid and an alcohol from the ester side.
Option D: Incorrect.
- The reaction consumes 6 mol of H2 to reduce the 6 chlorine atoms.
Thus, the correct statements are:
- B: 1 mol of oxalyl ester reacts with dilute sodium hydroxide solution (halogen atoms on the benzene ring do not hydrolyze), consuming at most 4 mol
of NaOH.
- C: The hydrolysis of oxalyl ester can yield two kinds of organic substances.
But, given only one choice is asked:
The correct statement is:
- B
**_Claude3.5_** **_Knowledge Error_**
Let's address each part of the question based on the information provided and the image of the apparatus:
(1) The role of concentrated sulfuric acid is a catalyst and dehydrating agent. If isotope 18O tracing method is used, the chemical equation indicating the
position of 18O would be:
CH3C18O18ONa + C2H5OH → CH3C18O18OC2H5 + NaOH
The 18O atoms from the acetate group remain in the ethyl acetate product.
(2) The role of spherical drying tube C is to dry the ethyl acetate vapor as it passes through, removing any water vapor.
If a few drops of phenolphthalein are added to D before the reaction, the solution appears red because of the presence of carbonate ions. The ionic
equation for this is:
CO32- + H2O ⇌ HCO3- + OH-
The OH- ions cause the phenolphthalein to turn red.
The phenomenon in D after the reaction is that the red color fades or disappears. This is because the ethyl acetate produced reacts with the sodium
carbonate solution, neutralizing it:
2CH3COOC2H5 + Na2CO3 + H2O → 2CH3COONa + 2C2H5OH + H2CO3
**_Qwen-VL-Max_** **_Knowledge Error_**
The given information and options are as follows:
- The image shows the structural formula of CPPO, which is an oxalyl ester.
- The molecular formula of CPPO cannot be determined from the given image alone.
- Option A states that the molecular formula of oxalyl ester is C26H24Cl6O8, but this cannot be confirmed without further information.
- Option B states that 1 mol of oxalyl ester reacts with dilute sodium hydroxide solution, consuming at most 4 mol of NaOH. However, the exact number
of moles of NaOH required for complete reaction cannot be determined without knowing the structure of the compound.
- Option C states that the hydrolysis of oxalyl ester can yield two kinds of organic substances. This is possible if the ester functional group is hydrolyzed
to form carboxylic acid and alcohol groups.
- Option D states that 1 mol of oxalyl ester fully reacts with hydrogen gas, requiring 6 mol of hydrogen gas. However, the exact number of moles of
hydrogen gas required for complete reaction cannot be determined without knowing the structure of the compound.
Therefore, the correct answer is option C: The hydrolysis of oxalyl ester can yield two kinds of organic substances.
**_GLM-4V_** **_Knowledge Error_**
To answer your question, let's go through each option one by one based on the given information and our knowledge of organic chemistry.
A: The molecular formula of oxalyl ester is C26H24Cl6O8.
- According to the structural formula provided, we know that there are 2 chlorine atoms (Cl) and 8 oxygen atoms (O) in a molecule of oxalyl ester.
However, the number of carbon atoms (C) is not specified as 26. Therefore, the given molecular formula is incorrect.
B: 1 mol of oxalyl ester reacts with dilute sodium hydroxide solution (halogen atoms on the benzene ring do not hydrolyze), consuming at most 4 mol of
NaOH.
- Oxalyl ester is an acidic ester, so it can be hydrolyzed with a base like sodium hydroxide (NaOH). The reaction typically consumes sodium hydroxide
until all the ester groups have been converted into carboxylic acids. Since the figure does not provide specific information about the number of ester
groups or the stoichiometry of the reaction, we cannot determine the exact amount of sodium hydroxide consumed from the given data alone. However,
since the molecular formula provided for oxalyl ester is incorrect, we cannot accurately calculate the required amount of sodium hydroxide.
C: The hydrolysis of oxalyl ester can yield two kinds of organic substances.
- This statement is true. When oxalyl ester is hydrolyzed with sodium hydroxide, it breaks down into two organic substances: benzoic acid
(C6H5COOH) and sodium oxalate (Na2C2O4).
D: 1 mol of oxalyl ester fully reacts with hydrogen gas, requiring 6 mol of hydrogen gas.
- For the complete combustion of an organic substance like oxalyl ester, the theoretical yield of hydrogen gas would depend on the complete breakdown
of all carbon bonds in the compound. Since the figure does not provide the full structure of oxalyl ester, including how many carbon atoms are attached
to the benzene rings, we cannot calculate the exact number of moles of hydrogen gas needed for complete combustion. Typically, for a complete
combustion reaction, the number of moles of hydrogen gas required is equal to the sum of the coefficients in the balanced chemical equation for the
combustion reaction.
Based on the structural formula provided, we can see that oxalyl ester contains 2 chlorine atoms, 8 oxygen atoms, and some number of carbon atoms (C)
linked to benzene rings. If we knew the exact number of carbon atoms, we could potentially calculate the theoretical yield of hydrogen gas upon
complete combustion. However, without this information, we cannot confirm the accuracy of the statement that 1 mol of oxalyl ester requires 6 mol of
hydrogen gas for complete combustion.
In conclusion, the correct statement among the options is C: The hydrolysis of oxalyl ester can yield two kinds of organic substances.
Figure 64: Cases of errors in the chemistry part of VisScience for several classic close-source models.
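For reference, a brief rationale consistent with the ground truth AD, under the standard reading of the CPPO structure (a central oxalate diester flanked by two trichlorophenyl rings, each bearing an additional pentyl-ester side chain); these structural details are assumptions consistent with the formula in option A rather than a reading of the figure itself. Counting atoms in such a structure gives C₂₆H₂₄Cl₆O₈ (statement A). For statement D, only the two benzene rings add hydrogen, and each ring consumes 3 mol of H₂:

$$n(\mathrm{H_2})=2\times3=6\ \mathrm{mol\ per\ mol\ of\ ester}.$$

Statement B understates the NaOH demand because hydrolysis of each aryl ester linkage also liberates a phenolic -OH that consumes an extra NaOH, and statement C undercounts the organic hydrolysis products (the oxalic acid, the phenolic acid fragment, and the alcohol).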
-----
**_Question_**
On summer nights, it is common to see children playing with glowing "magic
wands" in the square. The glowing principle of the "magic wand" is that
hydrogen peroxide oxidizes oxalate ester to produce energy, which is then
transferred to a fluorescent substance to emit fluorescence. The structural
formula of the oxalate ester (CPPO) is shown in the figure.
Which of the following statements is correct? ( )
A: The molecular formula of oxalate ester is C₂₆H₂₄Cl₆O₈.
B: 1 mol of oxalate ester reacts with dilute sodium hydroxide solution (the
halogen atoms on the benzene ring do not undergo hydrolysis), consuming a
maximum of 4 mol of NaOH.
C: The hydrolysis of oxalate ester can yield two types of organic compounds.
D: 1 mol of oxalate ester requires 6 mol of hydrogen for complete reaction.
**_Ground Truth:AD_**
**_LLaVA-1.5_13B_** **_Knowledge Error_**
B: 1 mol of oxalyl ester reacts with dilute sodium hydroxide solution (halogen atoms on the benzene ring do not hydrolyze), consuming at most 4 mol of
NaOH.
**_GLM-4V-9B_** **_Knowledge Error_**
The correct statement regarding the oxalyl ester (CPPO) is option B. Let's analyze each option step by step:
A: The molecular formula of oxalyl ester is C26H24Cl6O8. This option is incorrect because the molecular formula of oxalyl ester is C4H2O4Cl2, not
C26H24Cl6O8.
B: 1 mol of oxalyl ester reacts with dilute sodium hydroxide solution (halogen atoms on the benzene ring do not hydrolyze), consuming at most 4 mol of
NaOH. This option is correct because oxalyl ester is an ester and will react with sodium hydroxide to form an alcohol and sodium oxalate. Since each
oxalyl ester molecule has two ester groups, it will consume 2 mol of NaOH for each ester group, resulting in a maximum of 4 mol of NaOH consumed
per mol of oxalyl ester.
C: The hydrolysis of oxalyl ester can yield two kinds of organic substances. This option is incorrect because the hydrolysis of oxalyl ester will yield only
one organic substance, which is the alcohol formed from the ester group.
D: 1 mol of oxalyl ester fully reacts with hydrogen gas, requiring 6 mol of hydrogen gas. This option is incorrect because the reaction of oxalyl ester
with hydrogen gas is a hydrogenation reaction, which will yield an alcohol and hydrogen chloride. The theoretical requirement for hydrogen gas would
be 2 mol for each double bond in the aldehyde group, resulting in a total of 4 mol of hydrogen gas required for 1 mol of oxalyl ester.
Therefore, the correct statement is option B.
**_InternVL 1.5_** **_Reasoning Error_**
A: The molecular formula of oxalyl ester is C26H24Cl6O8
B: 1 mol of oxalyl ester reacts with dilute sodium hydroxide solution (halogen atoms on the benzene ring do not hydrolyze), consuming at most 4 mol of
NaOH
C: The hydrolysis of oxalyl ester can yield two kinds of organic substances
D: 1 mol of oxalyl ester fully reacts with hydrogen gas, requiring 6 mol of hydrogen gas
The correct answer is B. The hydrolysis of oxalyl ester (CPPO) can yield two kinds of organic substances. The molecular formula of oxalyl ester is
C26H24Cl6O8, and 1 mol of oxalyl ester reacts with dilute sodium hydroxide solution (halogen atoms on the benzene ring do not hydrolyze),
consuming at most 4 mol of NaOH.
Figure 65: Cases of errors in the chemistry part of VisScience for several classic open-source models.
-----
**_Question_**
A certain extracurricular group designed a laboratory apparatus for the preparation of ethyl acetate, as shown in
the figure. Concentrated sulfuric acid is placed in A, ethanol and anhydrous sodium acetate are placed in B, and
saturated sodium carbonate solution is placed in D.
Given:
1. Anhydrous calcium chloride can form an insoluble compound with ethanol, CaCl₂·6C₂H₅OH.
2. Boiling points of related organic compounds:
- Reagent: Diethyl ether, Ethanol, Acetic acid, Ethyl acetate
- Boiling point/°C: 34.7, 78.5, 118, 77.1
Please answer the following:
(1) The role of concentrated sulfuric acid is ________; if the isotope ^18O tracer method is used to determine the
provider of the oxygen atom in the water molecule of the reaction product, write the chemical equation that
indicates the position of ^18O: __________.
(2) The function of the spherical drying tube C is ______________________. If a few drops of phenolphthalein
are added to D before the reaction, the solution turns red. The reason for this phenomenon is (expressed by the
ionic equation) ________; the phenomenon in D after the reaction is __________________.
**_Ground Truth: (1) Catalyst, dehydrating agent; CH₃COOH + C₂H₅¹⁸OH → CH₃CO¹⁸OC₂H₅ + H₂O_**
(2) To prevent backflow, condensation; CO₃²⁻ + H₂O ⇌ HCO₃⁻ + OH⁻; The solution separates into layers,
with the upper layer being a colorless oily liquid, and the lower solution becoming lighter in color.
**_GPT-4o_** **_Knowledge Error_**
(1) The role of concentrated sulfuric acid is as a catalyst and dehydrating agent. The chemical equation indicating the position of ^18O when using
isotope ^18O tracing is:
CH₃COONa + C₂H₅¹⁸OH → CH₃COOC₂H₅ + Na¹⁸OH
(2) The role of the spherical drying tube C is to remove moisture from the gas. If a few drops of phenolphthalein are added to D before the reaction, the
solution appears red because:
CO₃²⁻ + H₂O → HCO₃⁻ + OH⁻
The phenomenon in D after the reaction is that the red color dissipates due to the neutralization reaction resulting in the formation of CO₂:
H⁺ (from H₂SO₄) + CO₃²⁻ → H₂O + CO₂
**_Claude3.5_** **_Knowledge Error_**
Let's address each part of the question based on the information provided and the image of the apparatus:
(1) The role of concentrated sulfuric acid is a catalyst and dehydrating agent. If isotope 18O tracing method is used, the chemical equation indicating the
position of 18O would be:
CH3C18O18ONa + C2H5OH → CH3C18O18OC2H5 + NaOH
The 18O atoms from the acetate group remain in the ethyl acetate product.
(2) The role of spherical drying tube C is to dry the ethyl acetate vapor as it passes through, removing any water vapor.
If a few drops of phenolphthalein are added to D before the reaction, the solution appears red because of the presence of carbonate ions. The ionic
equation for this is:
CO32- + H2O ⇌ HCO3- + OH-
The OH- ions cause the phenolphthalein to turn red.
The phenomenon in D after the reaction is that the red color fades or disappears. This is because the ethyl acetate produced reacts with the sodium
carbonate solution, neutralizing it:
2CH3COOC2H5 + Na2CO3 + H2O → 2CH3COONa + 2C2H5OH + H2CO3
**_Qwen-VL-Max_** **_Knowledge Error_**
(1) The role of concentrated sulfuric acid is to act as a catalyst for the esterification reaction between ethanol and acetic acid. Using isotopic tracing with
18O, the chemical equation showing the position of 18O would be:
CH3CO18OH + C2H5OH → CH3COOC2H5 + H18O
(2) The role of spherical drying tube C is to prevent the entry of moisture into the system, which could interfere with the reaction or cause unwanted side
reactions.
When phenolphthalein is added to D before the reaction, it turns red due to the presence of excess sodium carbonate, which is basic. The ionic equation
for this reaction is:
CO2(aq) + H2O(l) + CO3^2-(aq) → 2HCO3^-(aq)
After the reaction, the solution in D will turn colorless because the ethyl acetate formed will react with the sodium carbonate, consuming the excess base
and neutralizing the solution.
**_GLM-4V_** **_Knowledge Error_**
Solution:
(1) Since the boiling point of ethanol is 78.5°C and that of ethyl acetate is 77.1°C, we can use the difference in boiling points to separate them. Adding
concentrated sulfuric acid can increase the boiling point of the ethanol-water mixture, which can then be distilled to separate ethanol and water.
Therefore, the role of concentrated sulfuric acid is to increase the boiling point of ethanol-water mixtures. The chemical equation for the reaction
between ethanol and sulfuric acid is C2H5OH + H2SO4 → C2H5OSO4↓ + H2O. When 18O tracing method is used, the oxygen atom in the water
comes from the 18O-labeled ethanol. Therefore, the chemical equation is C2H5OH18 + H2SO4 → C2H5OSO4↓ + H2O18.
(2) The saturated sodium carbonate solution can absorb water, so when the hot ethanol-acetic acid mixture is poured into the flask, it will not overflow.
Therefore, the role of spherical drying tube C is to absorb excess water.
The phenolphthalein solution turns pink when there are enough hydrogen ions in solution. In the presence of sodium carbonate, the hydrogen ions from
the reaction are consumed, so no color change occurs. After the reaction, the pH of the solution increases, causing the phenolphthalein solution to turn
pink.
Figure 66: Cases of errors in the chemistry part of VisScience for several classic closed-source models.
-----
**_Question_**
A certain extracurricular group designed a laboratory apparatus for the preparation of ethyl acetate, as shown in
the figure. Concentrated sulfuric acid is placed in A, ethanol and anhydrous sodium acetate are placed in B, and
saturated sodium carbonate solution is placed in D.
Given:
1. Anhydrous calcium chloride can form an insoluble compound with ethanol, CaCl₂·6C₂H₅OH.
2. Boiling points of related organic compounds:
- Reagent: Diethyl ether, Ethanol, Acetic acid, Ethyl acetate
- Boiling point/°C: 34.7, 78.5, 118, 77.1
Please answer the following:
(1) The role of concentrated sulfuric acid is ________; if the isotope ^18O tracer method is used to determine the
provider of the oxygen atom in the water molecule of the reaction product, write the chemical equation that
indicates the position of ^18O: __________.
(2) The function of the spherical drying tube C is ______________________. If a few drops of phenolphthalein
are added to D before the reaction, the solution turns red. The reason for this phenomenon is (expressed by the
ionic equation) ________; the phenomenon in D after the reaction is __________________.
**_Ground Truth: (1) Catalyst, dehydrating agent; CH₃COOH + C₂H₅¹⁸OH → CH₃CO¹⁸OC₂H₅ + H₂O_**
(2) To prevent backflow, condensation; CO₃²⁻ + H₂O ⇌ HCO₃⁻ + OH⁻; The solution separates into layers,
with the upper layer being a colorless oily liquid, and the lower solution becoming lighter in color.
**_LLaVA-1.5_13B_** **_Knowledge Error_**
(1) The role of concentrated sulfuric acid is to react with ethanol to produce ethyl acetate and water. The chemical equation for this reaction is:
C2H5OH + H2SO4 → C2H5OAc + H2O
If isotope 18O tracing method is used to determine the provider of oxygen atoms in the water molecules produced, the chemical equation would be:
C2H5OH + H2SO4 → C2H5OAc + 18O-H2O
(2) The role of spherical drying tube C is to dry the ethyl acetate separated from D. If a few drops of phenolphthalein are added to D before the reaction,
the solution appears red due to the color change of phenolphthalein in the presence of acidic or basic solutions. The chemical equation for this reaction
is:
C2H5OAc + NaOH → C2H5OAc-Na+ + H2O
After the reaction, the solution in D will be clear and colorless, indicating the removal of acidic components.
**_GLM-4V-9B_** **_Knowledge Error_**
(1) The role of concentrated sulfuric acid is to act as a catalyst for the esterification reaction. If isotope 18O tracing method is used to determine the
provider of oxygen atoms in the water molecules produced, the chemical equation indicating the position of 18O is:
H₂O(18O) + CH₃CH₂OH → CH₃COO⁻(18O) + CH₃CH₂OH(18O)
(2) The role of spherical drying tube C is to remove any water vapor that might condense and react with the ethanol or ethyl acetate, which could lead to
the formation of ethers. If a few drops of phenolphthalein are added to D before the reaction, the solution appears red; the reason for this phenomenon
(expressed in ionic equation) is that the saturated sodium carbonate solution is basic and can react with phenolphthalein to form a red complex. The
phenomenon in D after the reaction is that the solution turns colorless because the phenolphthalein indicator is neutralized by the produced ethyl acetate,
which is a weaker acid than sodium carbonate.
**_InternVL 1.5_** **_Reasoning Error_**
(1) The role of concentrated sulfuric acid is to act as a dehydrating agent, removing water from the reaction mixture. If isotope 18O tracing method is
used to determine the provider of oxygen atoms in the water molecules produced, the chemical equation indicating the position of 18O would be:
C2H5OH + CH3COOH → C2H5OCH3 + H2O
In this reaction, the oxygen atoms in the water molecule (H2O) would come from the oxygen atoms in the ethanol (C2H5OH) and the acetic acid
(CH3COOH).
Figure 67: Cases of errors in the chemistry part of VisScience for several classic open-source models.
-----
**_Question_**
A certain experimental group conducted the following analysis on an unknown solution containing Al³⁺:
(1) A small amount of sodium hydroxide was added, with no obvious change observed;
(2) More sodium hydroxide solution was added, resulting in a white precipitate;
(3) An excess amount of sodium hydroxide was added, and the white precipitate significantly decreased.
The experimental group performed a quantitative analysis and obtained the relationship between the amount of
precipitate and the volume of sodium hydroxide added, as shown in the figure.
Which of the following statements is incorrect?
A: The unknown solution contains at least three types of cations.
B: The molarity of the added sodium hydroxide solution is 5 mol·L⁻¹.
C: If the other ion is a divalent cation, then a = 10.
D: If the final precipitate is filtered, washed, and calcined, its mass must be 6 g.
**_Ground Truth:D_**
**_GPT-4o_** **_Reasoning Error_**
Here are the detailed steps and reasoning regarding each statement:
1. Unknown Solution Analysis:
2. Quantitative Relationship from the Graph:
- Maximum precipitate mol: 0.20 mol
- Corresponding volume of NaOH: 100 mL
3. Evaluation of Statements:
(A) The unknown solution contains at least 3 cations.
conclusively determined. Therefore, this statement can be incorrect.
(B) The concentration of the added NaOH solution is 5 mol·L⁻¹.
To find the incorrect statements, we need to delve into the chemical principles demonstrated and analyze the data provided in the graph systematically.
⁺ ions which precipitate as Al(OH)₃ when NaOH is added.
⁻ ions are not sufficient to exceed the solubility product (Ksp) of Al(OH)
₃ appears, indicating the concentration of OH⁻
Excess NaOH: The white precipitate dissolves due to the formation of soluble complex ion [Al(OH)₄]⁻.
The graph shows the relationship between the precipitate amount (in moles) and the NaOH added volume (in mL):
This is not necessarily true based on the given information. The data provided only indicates the presence of Al³
Given that 0.20 mol of Al(OH)₃ precipitate appears when 100 mL of NaOH is added, and knowing the stoichiometry of the reaction: Al³⁺ + 3OH⁻ → Al(OH)₃
0.2 mol Al³⁺ corresponds with 3 * 0.2 mol = 0.6 mol OH⁻. For 100 mL NaOH to provide 0.6 mol OH⁻, the concentration of NaOH must indeed be:
Concentration = 0.6 mol / 0.1 L = 6 mol·L⁻¹
This conflicts with 5 mol·L⁻¹. Thus, this statement is indeed incorrect.
(C) If another ion is a divalent cation, then a = 10.
For divalent cation, such as M²⁺, precipitation occurs as M(OH)₂. Considering the graph where maximum precipitate occurs around 0.15 mol at some volume 'a', assuming it’s
from M(OH)₂: M²⁺ + 2OH⁻ → M(OH)₂
At point 'a', V_a * C_NaOH ≥ (0.15 * 2 = 0.3 mol OH) / 5 mol·L⁻¹ = 0.06 L NaOH = 60 mL
Thus a ≈ 60 mL, not 10 mL, making this statement incorrect.
(D) If the final precipitate is filtered, washed, and ignited, its mass is certainly 6 g.
Upon ignition, Al(OH)₃ decomposes into Al₂O₃: 2Al(OH)₃ → Al₂O₃ + 3H₂O
The molar mass of Al₂O₃ is 102 g/mol. With 0.20 mol of Al(OH)₃ yielding 0.10 mol Al₂O₃: 0.10 mol * 102 g/mol = 10.2 g
Thus, this treatment yields about 10.2 g of Al₂O₃ and not 6 g, making this statement incorrect.
Conclusion: The following statements are all incorrect: A, B, C, and D.
E: None of the statements are correct
**_Claude3.5_**
**_Knowledge Error_**
Let's analyze each statement based on the information provided and the graph:
A. The unknown solution contains at least 3 cations:
This statement is likely correct. The behavior described (initial precipitation followed by dissolution in excess base) is characteristic of amphoteric hydroxides like Al(OH)3. The
graph shows a single precipitate forming and then partially dissolving, which is consistent with the presence of Al3+. There's no clear evidence of other cations.
B. The concentration of the added NaOH solution is 5 mol·L-1:
We can't determine this from the given information. The graph shows volume of NaOH added, not its concentration.
C. If another ion is a divalent cation, then a = 10:
We don't have enough information to confirm or deny this statement. The value of 'a' isn't directly related to the presence of a divalent cation based on the given data.
D. If the final precipitate is filtered, washed, and ignited, its mass is certainly 6 g:
This statement is incorrect. The graph shows the amount of precipitate in moles, not grams. At the end point, there's 0.15 mol of precipitate. Without knowing the exact
composition of the precipitate after ignition, we can't determine its mass.
Given the question asks which statements are incorrect, the answer should include B and D. Statement A appears to be correct based on the information given, and C cannot be
confirmed or denied with the available data.
Figure 68: Cases of errors in the chemistry part of VisScience for several classic closed-source models.
-----
**_Question_**
A certain experimental group conducted the following analysis on an unknown solution containing Al³⁺:
(1) A small amount of sodium hydroxide was added, with no obvious change observed;
(2) More sodium hydroxide solution was added, resulting in a white precipitate;
(3) An excess amount of sodium hydroxide was added, and the white precipitate significantly decreased.
The experimental group performed a quantitative analysis and obtained the relationship between the amount of
precipitate and the volume of sodium hydroxide added, as shown in the figure.
Which of the following statements is incorrect?
A: The unknown solution contains at least three types of cations.
B: The molarity of the added sodium hydroxide solution is 5 mol·L⁻¹.
C: If the other ion is a divalent cation, then a = 10.
D: If the final precipitate is filtered, washed, and calcined, its mass must be 6 g.
**_Ground Truth:D_**
**_Qwen-VL-Max_** **_Knowledge Error_**
The image shows a graph that plots the amount of precipitation against the volume of sodium hydroxide (NaOH) added to an unknown solution. The x-axis represents the volume
of NaOH added in mL, while the y-axis indicates the amount of precipitation in moles.
From the graph, we can see that when a small amount of NaOH is added, there is no visible change in the solution. As more NaOH is added, a white precipitate appears. When an
excess of NaOH is added, the white precipitate decreases significantly.
Based on this information, let's analyze each statement:
A: The unknown solution contains at least 3 cations.
This statement is correct because the addition of a small amount of NaOH does not cause any reaction, indicating that some ions do not react with NaOH. Then, a white
precipitate forms upon further addition of NaOH, suggesting the presence of at least one cation that reacts with NaOH. Finally, the decrease in the precipitate after adding an
excess of NaOH implies the existence of another cation that also reacts with NaOH but has a different behavior than the first one. Therefore, it is reasonable to conclude that the
solution contains at least three cations.
B: The concentration of the added NaOH solution is 5 mol·L^-1.
This statement is incorrect. Although the graph provides data points for the amount of precipitation and the volume of NaOH added, it does not provide enough information to
determine the concentration of the NaOH solution. To calculate the concentration, we would need additional details such as the initial volume or mass of the NaOH solution used.
C: If another ion is a divalent cation, then a = 10.
This statement is correct. According to the graph, the volume of NaOH required to form the maximum amount of precipitation is around 100 mL. Since the graph shows that the
amount of precipitation increases linearly with the volume of NaOH added until reaching a maximum value, we can assume that the reaction follows a simple stoichiometry. If the
other ion is a divalent cation, it will require twice as much NaOH to form the same amount of precipitation compared to a monovalent cation. Thus, if the volume of NaOH
needed to reach the maximum precipitation was 100 mL for a monovalent cation, it would be 200 mL for a divalent cation. However, since the graph shows that the maximum
precipitation occurs at around 100 mL, we can infer that the other ion must be a monovalent cation. In this case, a = 10, which corresponds to the point where the slope changes
from positive to negative.
D: If the final precipitate is filtered, washed, and ignited, its mass is certainly 6 g.
This statement is incorrect. While the graph shows that the maximum amount of precipitation formed is approximately 0.15 mol, we cannot directly convert this into grams
without knowing the molar mass of the precipitated compound. Furthermore, filtering, washing, and igniting the precipitate may result in losses due to incomplete recovery or
decomposition during heating. Therefore, we cannot confidently predict the exact mass of the final product based solely on the given information.
In conclusion, the correct answer is B: The concentration of the added NaOH solution is 5 mol·L^-1. This statement is incorrect because there is insufficient information provided
to determine the concentration of the NaOH solution.
**_GLM-4V_**
**_Reasoning Error_**
Solution: According to the graph, when 10 mL of NaOH is added, the precipitation does not dissolve, indicating that the concentration of aluminum ions is less than 0.20 mol/
L. When 11 mL of NaOH is added, the precipitation begins to dissolve, which means that the concentration of aluminum ions is greater than 0.20 mol/L but less than 0.25 mol/
L. Since the initial volume of the solution is 100 mL and the final volume after adding NaOH is 110 mL, the total volume of the solution is 210 mL. Therefore, the
concentration of sodium hydroxide is 0.20 mol/L. Thus, option B is incorrect.
When 11 mL of NaOH is added, the precipitation begins to dissolve, which means that the concentration of aluminum ions is greater than 0.20 mol/L but less than 0.25 mol/L.
Therefore, the concentration of sodium hydroxide is 0.20 mol/L. Thus, option B is incorrect.
The mass of the precipitate before heating is 8 g, so the mass of the precipitate after heating is 6 g. Therefore, option D is incorrect.
Therefore, the answer is C.
Answer: C
Figure 69: Cases of errors in the chemistry part of VisScience for several classic closed-source models.
-----
**_Question_**
A certain experimental group conducted the following analysis on an unknown solution containing Al³⁺:
(1) A small amount of sodium hydroxide was added, with no obvious change observed;
(2) More sodium hydroxide solution was added, resulting in a white precipitate;
(3) An excess amount of sodium hydroxide was added, and the white precipitate significantly decreased.
The experimental group performed a quantitative analysis and obtained the relationship between the amount of
precipitate and the volume of sodium hydroxide added, as shown in the figure.
Which of the following statements is incorrect?
A: The unknown solution contains at least three types of cations.
B: The molarity of the added sodium hydroxide solution is 5 mol·L⁻¹.
C: If the other ion is a divalent cation, then a = 10.
D: If the final precipitate is filtered, washed, and calcined, its mass must be 6 g.
**_Ground Truth:D_**
**_LLaVA-1.5_13B_** **_Knowledge Error_**
The incorrect statement is B: The concentration of the added NaOH solution is 5 mol·L⁻¹.
**_GLM-4V_** **_Knowledge Error_**
The graph shows that the precipitate decreases after the addition of an excess of sodium hydroxide, which indicates that the precipitate is not due to the presence of a single cation.
Therefore, option A is incorrect as it states that the unknown solution contains at least 3 cations.
The graph does not provide any information about the concentration of the NaOH solution. It only shows the volume of NaOH added and the corresponding change in the
precipitate. Therefore, option B is incorrect as it states the concentration of the NaOH solution is 5 mol·L^-1.
Option C is incorrect because the graph does not provide enough information to determine the value of a if another ion is a divalent cation.
Option D is correct because if the precipitate is filtered, washed, and ignited, its mass will be the mass of the hydrated oxide, which is the product of the original cation and the
hydroxide ion. Since the graph shows that the precipitate decreases after the addition of an excess of sodium hydroxide, it is reasonable to assume that the precipitate is hydrated
oxide. Therefore, the mass of the precipitate will be the product of the concentration of the original cation and the hydroxide ion, which is 6 g.
Therefore, the answer is D
**_InternVL 1.5_**
**_Knowledge Error_**
First, we need to understand the information represented in the chart. The chart shows the relationship between the amount of precipitate in an unknown solution and the volume of
NaOH added. According to the chart, the amount of precipitate reaches its maximum when the volume of NaOH added is between 0.15 mL and 0.20 mL.
Now, let's analyze each option:
A: The unknown solution contains at least three types of cations. This statement is correct because the increase in the amount of precipitate with the addition of NaOH indicates the
presence of at least three cations, as Al³⁺ requires three NaOH molecules to completely precipitate.
B: The concentration of the added NaOH solution is 5 mol·L⁻¹. This statement is incorrect because the chart does not provide any information about the concentration of the NaOH
solution.
C: If the other ion is a divalent cation, then a = 10. This statement is incorrect because the chart shows that the amount of precipitate reaches its maximum when the volume of NaOH
added is 0.15 mL, indicating the presence of a divalent cation, but the value of a is not 10.
D: If the final precipitate is filtered, washed, and calcined, its mass must be 6 g. This statement is incorrect because the chart does not provide any information about the mass of the
precipitate.
Therefore, option B is incorrect.
Figure 70: Cases of errors in the chemistry part of VisScience for several classic open-source models.
-----
# WISDOM: PROGRESSIVE CURRICULUM SYNTHESIS MAKES LLMS BETTER MATHEMATICAL REASONER
**Anonymous authors**
Paper under double-blind review
ABSTRACT
Large Language Models (LLMs) have demonstrated remarkable capabilities
across a wide range of problem-solving tasks. Despite their success, LLMs
still face significant challenges in complex reasoning, particularly with advanced
mathematical problems. These problems require not only a deep understanding
of task descriptions but also sophisticated logical and mathematical reasoning to
determine the correct solution path, which is often lacking in the existing synthetic data. To address this gap, we introduce WISDOM, which draws inspiration
from the human learning process and employs curriculum learning to gradually
synthesize high-quality CoT data from easy to hard. Our goal is to guide LLM
training and improve reasoning capabilities by progressively exposing models to
increasingly challenging problems. Based on the synthesized data, we further finetune and develop the WISDOM series models, achieving significant improvements
across multiple mathematical reasoning benchmarks. Notably, WISDOM-7B (DSMath) achieves a score of 62.4% on MATH, matching GPT-4’s performance with
2/30 correct answers on AIME2024. Furthermore, WISDOM-70B (Llama3) outperforms GPT-4 on AIME2024 with 3/30 correct answers, demonstrating its potential as a better mathematical reasoner. More data and models will be available
[at https://anonymous.4open.science/r/Wisdom-math-377B.](https://anonymous.4open.science/r/Wisdom-math-377B)
Figure 1: An overview of the construction of WISDOM through progressive curriculum synthesis.
1 INTRODUCTION
While large language models (LLMs) have pushed the limits of various domains (Yang et al., 2024; AI@Meta, 2024;
Zhu et al., 2024), they are still struggling to handle complex reasoning tasks compared to human intelligence (Luo
et al., 2023; Yu et al., 2024; Tang et al., 2024), particularly
in mathematical reasoning. Ongoing research indicates that
high-quality training data can notably enhance the capabilities (Chen et al., 2024; Xia et al., 2024; Liu et al., 2024a) of
LLMs via instructional tuning (Chung et al., 2022). However,
open-source datasets only contain a relatively low proportion
of high-quality data. Interestingly, research shows that a carefully curated small subset of data samples can yield better results than using the entire dataset (Li et al., 2024d).

Figure 2: Comparison of LLMs fine-tuned with datasets generated by WISDOM and other synthesis methods.

Nevertheless, as the scale of the pretraining increases, the availability of high-quality internet data for further instruction fine-tuning is diminishing. A practical approach to address this is through synthetic data generation. Recent studies (Liu et al., 2024b; Wang
et al., 2023b; Xu et al., 2024) have shown that synthesizing high-quality data from off-the-shelf raw datasets is feasible and effective for instruction tuning. Nevertheless, prior work on mathematical data synthesis primarily focuses on expanding the diversity and scale of instruction data, with limited effort devoted to examining the difficulty of the instruction task. In fact, arbitrary expansion of the data scale actually decreases the task difficulty (Tong et al., 2024), leading to a suboptimal gain in the reasoning capacity of LLMs. Although the widely adopted Rejection Sampling can generate
data without reducing the difficulty level of the instructions, the reliance on ground truth limits its
broader applicability. Consequently, there remains significant potential to enhance both the diversity and complexity of synthesized mathematical tasks, thereby boosting the reasoning capabilities
of LLMs in more complex and challenging mathematical scenarios.
Inspired by the human learning process, in this work, we employ curriculum learning to guide the
model in generating data through the Easy to Hard Cyclic Iterative Process. As shown in Figure 1,
WISDOM comprises three main stages for problem solving, namely Weak Teacher Guiding, Critical Expert Teaching, and Experts Consistency Voting, together with a hard instruction evolving module. In the Weak Teacher Guiding stage, we leverage a weak but cheap model such as DeepSeek-Coder V2 to solve the given problems and verify internal consistency via Chain of Thought (CoT) and Program of Thought (PoT). In the Critical Expert Teaching stage, we use a stronger but more expensive model such as GPT-4o to critique the previously inconsistent CoT and PoT and then provide its refined CoT and PoT response. After that, most easy and medium-difficulty problems have been filtered out. The remaining hard questions are used for instruction evolving to update the seed pool and for experts majority voting. After several rounds, we obtain the synthetic dataset WISDOM,
which is used for supervised fine-tuning (SFT) of other base models.
Specifically, we use MATH (Hendrycks et al., 2021) and GSM8K (Cobbe et al., 2021) as seed
data and build WISDOM, a synthetic dataset of 1.48 million examples, through a cost-effective progressive curriculum learning method from easy to hard, which encompasses problems and solutions at various levels of difficulty. Experimental results demonstrate that even small language models such as Llama3-8B and DeepSeekMath-7B (DSMath-7B) can achieve competitive performance, even exceeding Qwen2-72B and Llama3-70B. As illustrated in Figure 2, Llama3-8B with WISDOM achieves 59.7% on MATH, which is better than Llama3-70B-instruct and the existing SOTA DART-Math. Additionally, the performance on out-of-domain tasks is noteworthy. Utilizing Llama3-8B, we successfully solved 17 out of 40 challenging problems, demonstrating competitiveness with Qwen2-72B-instruct. Moreover, the synthetic dataset generated through our methodology indicates further
enhancements for large-scale models, such as Llama3-70B and Qwen2-72B. The contributions are
summarized as follows:
- We propose a novel framework for synthesizing mathematical reasoning data, which evolves the difficulty and quality of questions in a progressive manner from easy to hard. In terms of response generation, our approach is more cost-effective and achieves state-of-the-art (SOTA) results on in-domain and out-of-domain tasks with models of the same size.
- Compared with Rejection Sampling and Problem Rewriting, our method leverages internal consistency for validation and does not rely on ground truth, which is more effective for generating high-quality responses to newly synthesized questions.
- We will open-source all our models and data to drive further advancements in the open-source
community for challenging mathematical reasoning tasks.
2 WISDOM: PROGRESSIVE CURRICULUM SYNTHESIS
2.1 OVERVIEW
As shown in Figure 3, we describe the workflow of our WISDOM method, which employs a curriculum learning strategy. This approach begins with the MATH and GSM8K datasets as seed data and
synthesizes problems and responses of progressively increasing difficulty. The difficulty progression
follows an Easy-to-Hard Cyclic Iterative Process, a method rooted in curriculum learning principles that
systematically escalates problem complexity across three distinct learning stages within each round.
Figure 3: The overall workflow of WISDOM, which leverages Progressive Curriculum Synthesis to generate questions and responses with Deepseek Coder V2 and GPT-4o, including weak teacher guiding, critical expert teaching, experts consistency voting, and hard instruction evolving.
After five rounds of data synthesis, we obtain WISDOM (W = S1 ∪ S2 ∪ S3), a high-quality, diverse dataset specifically curated for complex tasks and formatted in the style of Chain-of-Thought (CoT) datasets. S1 corresponds to the weak teacher guiding stage, S2 represents the critical expert teaching stage, and S3 reflects the experts' consistency voting stage. The data generation algorithm is described in detail in Algorithm 1.
2.2 DIFFICULTY-AWARE PROGRESSIVE CURRICULUM SYNTHESIS
Previous data synthesis efforts (Yu et al., 2024; Tang et al., 2024; Luo et al., 2023; Li et al., 2024a;
Chan et al., 2024; Lu et al., 2024b) focused on enhancing diversity in mathematical reasoning but
often overlooked question difficulty. Recent studies (Chen et al., 2024; Xia et al., 2024; Liu et al.,
2024a) show that using a subset of high-difficulty data for instruction tuning enhances performance
compared to using the entire dataset. Inspired by human learning processes, curriculum learning
increases question difficulty from easy to hard, aligning with learners’ capabilities. We divide this
evolution into three stages: weak teacher guidance, critical expert teaching, and expert consistency
_voting_, progressively enhancing question difficulty through multiple rounds of refinement.
Specifically, we start the data synthesis process with a set of seed data, namely the training sets of GSM8K and MATH. First, we employ DeepSeek-Coder V2 (DeepSeek-AI et al., 2024), the weak teacher, to generate the answers {Ai : (ci, pi) : i = 1, ..., N1}, where each answer includes both a Chain of Thought (CoT) ci and a Program of Thought (PoT) pi. Since a weak teacher cannot solve all the given questions, we filter the answers based on CoT and PoT consistency (internal consistency). Problems that the weak teacher solves consistently are treated as simple questions and retained in this first stage, while the unsolved questions, together with their inconsistent CoT and PoT answers, advance to the next stage for critical expert teaching. In the Critical Expert Teaching stage, following the principles of curriculum learning, the expert critically reviews the problems that the weak model struggled to solve, providing a critique rj and a proposed solution {Aj : (rj, cj, pj) : j = 1, ..., N2}. If the expert's solution demonstrates internal consistency, the problem and generated response are retained in this stage; otherwise, the problem advances to the next stage. Here we leverage a more advanced model, such as GPT-4o, to ensure solution quality better than that of the weak teacher. In the Experts Consistency Voting stage, multiple experts are engaged to provide solutions {Ak : (ck^(j), pk^(j)); k = 1, ..., N3; j = 1, ..., E}, and a majority voting approach is used to address the remaining, much more challenging questions. Meanwhile, we generate the meta-information related to these questions with the weak teacher (DeepSeek-Coder V2); this meta-information, together with the new questions synthesized by the expert (GPT-4o) in the hard instruction evolving module, serves as the updated seed data for the next round.
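To make the internal-consistency check concrete, the sketch below illustrates how a CoT answer and a PoT program could be cross-checked. It is a minimal illustration rather than the released pipeline; the `\boxed{}`-style answer extraction and the helper names are assumptions made for this example.

```python
import re
import subprocess


def extract_boxed_answer(cot_text: str) -> str | None:
    """Pull the final answer out of a CoT response that ends with \\boxed{...}."""
    match = re.search(r"\\boxed\{([^{}]+)\}", cot_text)
    return match.group(1).strip() if match else None


def run_pot_program(pot_code: str, timeout: float = 10.0) -> str | None:
    """Execute a Program-of-Thought snippet in a subprocess and keep its last printed line."""
    try:
        proc = subprocess.run(
            ["python", "-c", pot_code], capture_output=True, text=True, timeout=timeout
        )
        return proc.stdout.strip().splitlines()[-1] if proc.stdout.strip() else None
    except (subprocess.TimeoutExpired, IndexError):
        return None


def internally_consistent(cot_text: str, pot_code: str) -> bool:
    """A question counts as 'solved' only when the CoT and PoT answers agree."""
    cot_ans, pot_ans = extract_boxed_answer(cot_text), run_pot_program(pot_code)
    if cot_ans is None or pot_ans is None:
        return False
    try:  # compare numerically when both answers parse as numbers
        return abs(float(cot_ans) - float(pot_ans)) < 1e-6
    except ValueError:
        return cot_ans == pot_ans
```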
**Algorithm 1 WISDOM Dataset Construction**
**Input: Problems Q, Meta info M, Datasets R1 = ∅, R2 = ∅, R3 = ∅**
**Output: WISDOM Dataset W = R1 ∪ R2 ∪ R3**
1: for i = 1 to Rounds do
2:     Initialize S1, S2, S3 ← ∅
3:     for qi ∈ Q do
4:         Extract mi; M ← M ∪ {mi}
5:         Generate ri = CoT(qi) + PoT(qi) via DeepSeek-Coder V2
6:         if CoT(qi) = PoT(qi) then
7:             S1 ← S1 ∪ {(qi, mi, CoT(qi))}
8:         else
9:             S2 ← S2 ∪ {(qi, ri)}
10:        end if
11:    end for
12:    for (qj, rj) ∈ S2 do
13:        Generate rj′ using GPT-4o
14:        if CoT(rj′) = PoT(rj′) then
15:            S2 ← S2 ∪ {(qj, mj, CoT(rj′))}
16:        else
17:            S3 ← S3 ∪ {qj}
18:        end if
19:    end for
20:    for qk ∈ S3 do
21:        Generate {rk1, ..., rkn}; vote for rk∗
22:        if rk∗ is consistent then
23:            S3 ← S3 ∪ {(qk, mk, CoT(rk∗))}
24:        end if
25:    end for
26:    Cluster embeddings of S3 meta info; sample m1, m2, m3
27:    Generate qnew via GPT-4o with m1, m2, m3 (40% mask); Q ← {qnew}
28:    R1 ← R1 ∪ S1, R2 ← R2 ∪ S2, R3 ← R3 ∪ S3
29: end for
30: return W = R1 ∪ R2 ∪ R3
Throughout the Cyclic Iterative process, question difficulty is progressively increased, not only
through the curriculum learning stages but also via a cyclic iterative process in each round, continually enhancing the difficulty of subsequent questions.
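The control flow of Algorithm 1 can also be summarized in a short Python sketch. The `weak_teacher`, `expert`, `vote`, `evolve_questions`, and `consistent` callables are placeholders standing in for DeepSeek-Coder V2, GPT-4o, the final-stage vote, the hard-instruction-evolving prompt, and an internal-consistency check such as the one sketched above; this illustrates the loop structure only, not the authors' implementation.

```python
from dataclasses import dataclass, field


@dataclass
class WisdomData:
    s1: list = field(default_factory=list)  # weak teacher guiding
    s2: list = field(default_factory=list)  # critical expert teaching
    s3: list = field(default_factory=list)  # experts consistency voting


def build_wisdom(questions, weak_teacher, expert, vote, evolve_questions,
                 consistent, rounds=5, n_votes=5):
    """Run the easy-to-hard rounds of Algorithm 1 with pluggable model callables."""
    data = WisdomData()
    for _ in range(rounds):
        needs_expert, hardest = [], []
        # Stage 1: weak teacher guiding, kept only when CoT and PoT agree.
        for q in questions:
            cot, pot = weak_teacher(q)
            if consistent(cot, pot):
                data.s1.append((q, cot))
            else:
                needs_expert.append((q, cot, pot))
        # Stage 2: critical expert teaching on the problems the weak teacher missed.
        for q, cot, pot in needs_expert:
            critique, cot2, pot2 = expert(q, cot, pot)
            if consistent(cot2, pot2):
                data.s2.append((q, cot2))
            else:
                hardest.append(q)
        # Stage 3: experts consistency voting on the hardest remaining questions.
        for q in hardest:
            samples = [expert(q, None, None) for _ in range(n_votes)]
            winner = vote([cot for _, cot, _ in samples])
            if winner is not None:
                data.s3.append((q, winner))
        # Hard instruction evolving: the hardest questions seed the next round.
        questions = evolve_questions(hardest)
    return data
```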
2.3 COSTS AND RESPONSE EFFICIENCY BALANCING
In synthesizing mathematical reasoning data, majority voting is an effective yet resource-intensive
method to improve accuracy. Given that the data synthesis process may generate many trivial and
vanilla problems (Tong et al., 2024), we apply majority voting specifically to evaluate and opt for
the most difficult questions. Following curriculum learning principles, we first employ a weak but
cost-effective teacher to solve a large number of easy problems. Subsequently, a strong but more
resource-intensive expert is used for medium-difficulty questions, thereby optimizing resource utilization. Rejection Sampling is an effective way to ensure the quality of generated responses; however, it relies heavily on the ground truth of the seed data, making it less applicable to newly synthesized problems. In this work, we instead enforce unsupervised internal consistency to improve the quality of responses generated with CoT and PoT. With the weak teacher and internal consistency, we can filter out many simple, easily solvable problems early on, saving costs and improving efficiency. Although we also apply majority voting in the final stage of curriculum learning to further increase response accuracy, by this point the dataset has been significantly reduced in size and the difficulty of the remaining problems has been greatly enhanced.
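A minimal sketch of the final-stage consistency vote is shown below; the `min_votes` threshold is an assumed hyperparameter for illustration, not a value reported in the paper.

```python
from collections import Counter


def consistency_vote(candidate_answers, min_votes=3):
    """Keep a hard question only when several expert samples agree on the same final answer.

    `candidate_answers` is a list of extracted final answers (strings); the question is
    dropped when no answer reaches `min_votes`, so ambiguous items never enter the dataset.
    """
    answers = [a for a in candidate_answers if a is not None]
    if not answers:
        return None
    best, count = Counter(answers).most_common(1)[0]
    return best if count >= min_votes else None
```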
2.4 QUESTION DIVERSITY ENHANCED VIA META DATA GENERATION
Since data diversity plays a crucial role in instruction tuning, following existing work (Tang et al.,
2024; Li et al., 2024a; Chan et al., 2024), we utilize the meta-information of questions to enhance
the diversity of synthesized data. In mathematics, meta-information often refers to skills or topics extracted from questions, and new data can be synthesized based on this extra information.
While existing methods (Tang et al., 2024; Huang et al., 2024) rely solely on skill or topic for
question synthesis, we argue that focusing only on these aspects can be overly simplistic and limit
the diversity of the synthesized questions. Instead, we extract a richer set of meta-information,
including Skill, Topic, Knowledge Base, and Similar Problem, and combine multiple pieces of meta-information from various questions to maximize the diversity of the synthesized questions.
Specifically, we first extract meta-information from the questions (e.g., q3) and obtain embeddings
from the knowledge base in the meta-information using OpenAI’s text-embedding-ada-002. We then
apply k-means++ (Arthur & Vassilvitskii, 2007) clustering to group these embeddings. Furthermore,
we randomly combine all of the aforementioned meta-information across different clusters and mask
the Knowledge Base and Similar Problem with a 40% probability to prompt GPT-4o-0513 to generate new questions. This strategy not only prevents overfitting on the synthesized data but also
significantly enhances the diversity of the generated questions.
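The clustering and masking step could look roughly like the sketch below. The meta-information field names and the `embed` wrapper around the embedding API are assumptions, and scikit-learn's KMeans (whose k-means++ initialization is the default) stands in for whatever implementation the authors used.

```python
import random
from sklearn.cluster import KMeans  # k-means++ is the default initialization


def cluster_meta_info(meta_items, embed, n_clusters=100, seed=0):
    """Group questions' knowledge-base texts into clusters of related meta-information.

    `meta_items` is a list of dicts with assumed keys 'skill', 'topic', 'knowledge_base',
    and 'similar_problem'; `embed` maps a list of strings to a list of vectors.
    """
    vectors = embed([m["knowledge_base"] for m in meta_items])
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(vectors)
    clusters = {}
    for item, label in zip(meta_items, labels):
        clusters.setdefault(label, []).append(item)
    return clusters


def compose_synthesis_prompt(clusters, mask_prob=0.4, rng=random):
    """Sample meta-information from three different clusters, masking optional fields ~40% of the time."""
    picked = [rng.choice(clusters[c]) for c in rng.sample(sorted(clusters), 3)]
    lines = []
    for m in picked:
        lines.append(f"Skill: {m['skill']}; Topic: {m['topic']}")
        if rng.random() >= mask_prob:
            lines.append(f"Knowledge base: {m['knowledge_base']}")
        if rng.random() >= mask_prob:
            lines.append(f"Similar problem: {m['similar_problem']}")
    lines.append("Write one new, harder math question that combines the ideas above.")
    return "\n".join(lines)
```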
**Example 2.1: Knowledge base contained Q-A Pair**
**Question: A biologist is tracking the growth of a bacterial culture that doubles in size every 3**
hours. If the initial size of the culture is 500 bacteria, how many bacteria will be present after
15 hours?
**Response:**
<knowledge_base_start>
Key Definitions and Explanations: ... - Exponential Growth: ...
</knowledge_base_end>
<solution_start>
Step-by-Step Solution: 1. Understand the Problem: ... The final answer is: \boxed{16000}
</solution_end>
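As a quick sanity check of the arithmetic in the example above (a reader-side verification, not part of the paper's Example 2.1): the culture doubles every 3 hours, i.e., five times over 15 hours.

```python
initial = 500                      # starting number of bacteria
doublings = 15 // 3                # the culture doubles once every 3 hours
print(initial * 2 ** doublings)    # 500 * 32 = 16000, matching the boxed answer
```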
2.5 KNOWLEDGE-BASED CURRICULUM LEARNING
To further enhance the quality of synthesized data and improve the model’s reasoning abilities, the
current mainstream approach is to adopt the Chain of Thought (CoT) method, which transforms
simple seed data responses into structured, step-by-step reasoning processes and teaches the model
to think before solving. Inspired by the principles of curriculum learning, it is advantageous to recall key knowledge points relevant to the problem before answering a question. Thus, we integrate
the knowledge base from the meta-information, which includes Key Definitions and Explanations,
Relevant Formulas and Equations, Common Pitfalls and Misconceptions, and Additional Theoretical Insights, into the preamble of the CoT response. This step helps the model reinforce critical
knowledge points before proceeding to solve a given question. A simple illustrative example can be
found in Example 2.1, and more detailed examples can be found in Appendix D.
3 EXPERIMENTS
3.1 EVALUATION DATASETS
To evaluate curriculum learning on recent benchmarks, we select a diverse and challenging set of in-domain and out-of-domain benchmarks. In-domain datasets include GSM8K
(Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) and out-of-domain datasets contain College
MATH (Tang et al. (2024)), OlympiadBench-Math (He et al. (2024)), TabMWP (Lu et al. (2024a)),
TheoremQA (Chen et al. (2023)), AMC2023, and AIME2024. These selected datasets comprehensively assess the model's reasoning ability to solve mathematically challenging problems across
various dimensions. More details of datasets can be found in Appendix A.3.
3.2 BASELINES
We select several representative state-of-the-art closed-source models and open-source models
for comparison, including GPT-4o-0513, GPT-4-1106-preview, Claude-3-Opus, Llama3-series (AI@Meta (2024)), DeepSeek-Math-Instruct (Shao et al. (2024)) (denoted DSMath), DeepSeek-Coder-V2 (Zhu et al. (2024)), Qwen-2-72B-instruct (Yang et al. (2024)), MetaMath (Yu et al. (2023)), MathScale (Tang et al., 2024), MAmmoTH2 (Yue et al. (2024b)), KPMath-Plus (Huang et al. (2024)), DART-Math (Tong et al. (2024)), NuminaMath (Li et al. (2024c)), and Mathstral (Team, 2024). A comparison of WISDOM and other methods can be found in Table 1.
Table 1: Comparison of WISDOM and other mathematical synthetic methods.
| Method | Cost Efficiency | GT-Free | w/o External Data | Instruction Diversity Evolution | Instruction Difficulty Evolution |
|---|---|---|---|---|---|
| MetaMath | ✗ | ✗ | ✓ | ✓ | ✗ |
| MathScale | ✗ | ✓ | ✓ | ✓ | ✗ |
| MAmmoTH2 | ✗ | ✓ | ✗ | ✓ | ✗ |
| KPMath-Plus | ✗ | ✓ | ✗ | ✓ | ✗ |
| DartMath | ✓ | ✗ | ✓ | ✗ | ✗ |
| NuminaMath | ✗ | ✗ | ✗ | ✓ | ✗ |
| WISDOM (ours) | ✓ | ✓ | ✓ | ✓ | ✓ |
Table 2: Main results on in-domain benchmarks, GSM8K and MATH, and out-of-domain benchmarks, including College MATH, Olympiad, TabMWP, TheoremQA, AMC2023, and AIME2024.
We select the current well-performing LLMs to evaluate their test accuracy on these benchmarks.
Since KPMath-Plus is not open-sourced, the results are quoted from the corresponding paper.
| Method | Base | GSM8K | MATH | College† | Olympiad | TabMWP | TheoremQA | AMC2023 | AIME2024 |
|---|---|---|---|---|---|---|---|---|---|
| GPT-4o-0513 | – | 95.8 | 76.6 | – | – | – | – | – | 2/30 |
| GPT-4-1106-preview | – | 91.4 | 64.3 | – | – | – | – | – | 1/30 |
| Claude-3-Opus | – | 95.0 | 60.1 | – | – | – | – | – | 2/30 |
| DeepSeek Coder V2 | – | 94.9 | 75.7 | – | – | – | – | – | 4/30 |
| Mathstral | Mistral-7B | **83.3** | 54.3 | 36.7 | **22.4** | **82.8** | 26.3 | 12/40 | **1/30** |
| KPMath-Plus | Mistral-7B | 82.1 | 46.8 | – | – | 66.4 | – | – | – |
| DART-Math | Mistral-7B | 81.3 | 45.0 | 28.3 | 14.5 | 65.8 | 20.5 | 7/40 | 0/30 |
| MAmmoTH2 | Mistral-7B | 67.4 | 34.2 | 31.0 | 9.8 | 26.8 | 26.7 | 6/40 | 1/30 |
| MathScale | Mistral-7B | 58.5 | 33.2 | 22.0 | 7.8 | 73.3 | 18.1 | 6/40 | 1/30 |
| WISDOM | Mistral-7B | 80.0 | **56.4** | **41.6** | 21.9 | 72.3 | **27.6** | **15/40** | **1/30** |
| Llama3-instruct | Llama3-8B | 78.2 | 27.2 | 22.8 | 5.6 | 75.3 | 18.9 | 5/40 | 0/30 |
| MetaMath | Llama3-8B | 80.5 | 32.6 | 19.3 | 6.7 | 54.1 | 13.3 | 6/40 | 0/30 |
| DART-Math | Llama3-8B | 81.8 | 46.9 | 28.4 | 15.9 | 66.3 | 20.5 | 8/40 | **1/30** |
| MAmmoTH2 | Llama3-8B | 69.6 | 33.4 | 32.3 | 8.1 | 43.8 | **29.7** | 7/40 | 0/30 |
| MathScale | Llama3-8B | 70.8 | 34.6 | 22.5 | 9.0 | 74.3 | 18.9 | 2/40 | 1/30 |
| WISDOM | Llama3-8B | **83.2** | **59.7** | **42.2** | **25.6** | **83.0** | 28.6 | **17/40** | **1/30** |
| DSMath-instruct | DSMath-7B | 82.0 | 46.3 | 38.1 | 13.6 | 76.7 | 31.9 | 7/40 | 1/30 |
| MetaMath | DSMath-7B | 76.5 | 37.2 | 27.3 | 10.7 | 67.1 | 13.9 | 10/40 | 0/30 |
| KPMath-Plus | DSMath-7B | 83.9 | 48.8 | – | – | 78.7 | – | – | – |
| DART-Math | DSMath-7B | **87.5** | 53.9 | 40.7 | 20.0 | 82.9 | 31.5 | 8/40 | 0/30 |
| NuminaMath | DSMath-7B | 77.1 | 53.7 | 32.4 | 24.0 | 77.7 | 29.4 | **12/40** | 1/30 |
| MathScale | DSMath-7B | 62.7 | 33.4 | 23.0 | 8.1 | 71.3 | 24.5 | 4/40 | 0/30 |
| WISDOM | DSMath-7B | 83.3 | **62.4** | **45.0** | **28.9** | **85.7** | **34.9** | 11/40 | **2/30** |
| Llama3-instruct | Llama3-70B | 93.1 | 50.4 | 40.3 | 17.6 | 89.9 | 34.1 | 8/40 | 2/30 |
| Qwen2-instruct | Qwen2-72B | 93.6 | 69.3 | 46.8 | 35.3 | 92.4 | 42.0 | 17/40 | **4/30** |
| DART-Math | Llama3-70B | 89.8 | 55.7 | 37.9 | 21.0 | 80.9 | 28.2 | 13/40 | 1/30 |
| KPMath-Plus | Qwen1.5-72B | 87.0 | 58.3 | – | – | 76.7 | – | – | – |
| MetaMath | Llama3-70B | 88.0 | 44.9 | 31.9 | 11.6 | – | 21.9 | – | – |
| NuminaMath | Qwen2-72B | 91.5 | 66.9 | 42.1 | 33.6 | 86.7 | 29.0 | 13/40 | **4/30** |
| WISDOM | Llama3-70B | 94.1 | 68.2 | 43.4 | 34.4 | 91.8 | 41.4 | 22/40 | 3/30 |
| WISDOM | Qwen2-72B | **94.2** | **76.1** | **47.6** | **39.1** | **94.5** | **45.4** | **23/40** | 2/30 |

_† Short for College MATH._
3.3 MAIN RESULTS
As shown in Table 2, we present the performance of WISDOM on in-domain and out-of-domain
datasets, which demonstrates that it achieves strong results across all datasets, particularly excelling on
challenging ones, such as AIME2024, AMC2023. Notably, WISDOM has set a new SOTA performance on the MATH dataset. Based on the same small model DSMath-7B, our method reaches
a significant milestone on the in-domain MATH dataset, surpassing the 60% threshold for the
first time with a score of 62.4%, compared to the previous SOTA DART-Math achieving 53.9%.
In addition, even for a weaker mathematical foundation model, we can achieve remarkable improvements. Based on the same base model, Mistral-7B, Mathstral achieves 54.3% while WISDOM reaches 56.4%, a 2.1-point improvement. Meanwhile, based on Llama3-8B, the previous SOTA DART-Math achieves 46.9% while WISDOM reaches 59.7%, a 12.8-point improvement.
As shown in Table 2, on the different out-of-domain datasets, our method exhibits stronger generalization capabilities, enhancing the ability of smaller models to tackle challenging mathematical problems. Specifically, based on Llama3-8B, DSMath-7B, Qwen2-72B, and Llama3-70B, our
method WISDOM achieves new SOTA on College MATH, Olympiad, TabMWP. For more challenging tasks, such as TheoremQA, AMC2023 and AIME 2024, our WISDOM model demonstrates
outstanding performance, even when built on smaller model foundations. For instance, based on
Llama3-8B, we successfully solve 17 out of 40 questions, whereas the current SOTA method on the same base model solves only 8, a relative improvement of 112.5% (17 vs. 8), matching
the performance of Qwen2-Instruct. Remarkably, on the AIME 2024 dataset, our method performs
on par with GPT-4o, suggesting the potential of smaller models in solving complex mathematical
problems with easy-to-hard curriculum learning. Even when using only synthetic data, the model
shows excellent performance on both in-domain and out-of-domain datasets, notably surpassing
GPT-4o on the AIME 2024 dataset. Furthermore, we conduct experiments on two larger models
(Llama3-70B and Qwen2-72B) to explore the upper limits of model performance. The results indicate that our approach remains effective at this scale. For example, by implementing easy-to-hard
curriculum learning on Llama3-70B, we improve the performance on AMC2023 from 8/40 to
22/40, surpassing the performance of Qwen2-Instruct-72B.
Overall, the main results above clearly demonstrate that our proposed easy-to-hard curriculum learning based synthesis method (WISDOM) is highly effective in improving performance across both
small and large models.
3.4 THE IMPACT OF SCALING STAGE
As illustrated in Figure 4, we conduct several experiments on MATH to investigate the impact of scaling stages in our data synthesis method. Initially, the DeepSeek-Math-7B-base and LLaMA3-8B-base achieve accuracies of only 36.2% and 20.5%, respectively. However, fine-tuning with seed data improved their performance to 52.22% and 36.68%, resulting in gains of 16.02 and 16.18 percentage points. With the data synthesized from Weak Teacher Guiding, the models' capabilities improve further to 58.56% (+6.34%) and 54.04% (+17.36%), respectively. After additional fine-tuning in Critical Expert Teaching, which primarily focused on medium-difficulty questions, performance increases to 60.52% (+1.96%) and 57.44% (+3.40%). Ultimately, in Experts Consistency Voting, the model achieves accuracy of 62.44% (+1.92%) and 59.72% (+2.28%). These experimental results clearly demonstrate the effectiveness of our data synthesis method in enhancing complex mathematical reasoning abilities at each phase for small-scale base models. More results on out-of-domain datasets can be found in Appendix C.

Figure 4: The accuracy of MATH in relation to the scaling effects across different stages.
3.5 THE IMPACT OF DIFFERENT STAGES
To validate the impact of each stage of the progressive curriculum synthesis process on mathematical reasoning, we randomly sample 200k data points from each stage and perform supervised fine-tuning (SFT) in conjunction with the initial seed data (the GSM8K and MATH training sets). As shown in Table 3, after fine-tuning, the Experts Consistency Voting stage leads to a significant accuracy improvement on all challenging benchmarks compared to the other stages.
Table 3: Ablation results on Llama3-8B fine-tuned across different stages of WISDOM.
| Seed | S1 | S2 | S3 | GSM8K | MATH | TheoremQA | CollegeMATH | Olympiad | AIME 2024 | AMC 2023 | TabMWP |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ✓ | ✓ | | | 80.4 | 42.8 | 25.1 | 25.6 | 15.3 | 1/30 | 4/40 | 76.2 |
| ✓ | | ✓ | | 78.2 | 42.7 | 23.4 | 26.2 | 16.5 | 1/30 | 7/40 | 76.4 |
| ✓ | | | ✓ | 80.3 | 47.1 | 25.1 | 30.8 | 18.1 | 1/30 | 5/40 | 78.3 |
Specifically, it achieves 47.14%, 25.13%, 30.76%, 18.10%, and 78.3% on MATH, TheoremQA, CollegeMATH, Olympiad, and TabMWP, respectively. However, the Critical Expert Teaching stage does not show substantial
improvement over the Weak Teacher Guiding. Actually, only a small portion of answers were modified during the Critical Expert Teaching stage. For most answers, CoT responses from the Weak
Teacher Guiding were retained, some were converted to PoT answers, and a few were completely
revised. This finding also explains the observed improvements on challenging datasets. Overall, as
the stages progress, the difficulty of the curriculum learning process steadily increases, contributing
to performance enhancement.
3.6 INFLUENCE OF KNOWLEDGE BASE ON ANSWERS
Table 4: Comparison of performance on different benchmarks with and without the inclusion of the
knowledge base. To validate the generalizability of the approach, experiments were conducted on
both Llama3 and DeepSeek-Math.
| Model | GSM8K | MATH | CollegeMATH | Olympiad | TabMWP | TheoremQA | AIME 2024 | AMC 2023 |
|---|---|---|---|---|---|---|---|---|
| WISDOM (Llama3-8B-base) | 83.1 | **59.7** | **42.2** | **25.6** | **83.0** | 28.6 | 1/30 | **17/40** |
| w/o knowledge base | **83.9** | 59.6 | 41.7 | 25.3 | 82.1 | **32.5** | 1/30 | 9/40 |
| WISDOM (DSMath-7B-base) | **83.3** | **62.4** | **45.0** | **28.9** | **85.7** | **34.9** | **2/30** | 11/40 |
| w/o knowledge base | 79.5 | 58.6 | 42.4 | 23.7 | 85.3 | 31.8 | 1/30 | **12/40** |
Table 5: Comparison of performance on MATH with and without the knowledge base, using sampled data from the Weak Teacher Guiding stage.

| data | knowledge base | Model | MATH |
|---|---|---|---|
| Seed+S1 (100k) | ✗ | Llama3-8B | 39.0 |
| Seed+S1 (100k) | ✓ | Llama3-8B | 43.1 (+4.1%) |
| Seed+S1 (200k) | ✗ | Llama3-8B | 43.2 |
| Seed+S1 (200k) | ✓ | Llama3-8B | 45.7 (+2.5%) |
| Seed+S1 (400k) | ✗ | Llama3-8B | 46.8 |
| Seed+S1 (400k) | ✓ | Llama3-8B | 49.0 (+2.2%) |
Table 6: Accuracy with and without consistency.

| | Wrong Number | Right Number | Acc. (%) |
|---|---|---|---|
| w/o consistency | 92,696 | 14,304 | 13.4 |
| w/ consistency | 79,966 | 27,034 | 25.3 (+11.9) |
To investigate the impact of knowledge base on learning within the curriculum learning process, we
conduct experiments to explore its effects at different scales. As shown in Table 5, we randomly
select data from the Weak Teacher Guiding stage and observe that as the data scale increases, the
results with knowledge base consistently outperform those without it, with a minimum improvement
of 2% on the MATH. However, we also notice that the rate of improvement decreases as the data
volume increases, prompting us to conduct a full-scale experiment. As shown in Table 4, we conduct
experiments on Llama3 and DeepSeek-Math, removing the knowledge base from the process. We
find that the difference in performance between the presence and absence of the knowledge base
is less pronounced when dealing with smaller datasets. However, as the data scale increases, the
contribution of the knowledge base to performance becomes more evident, though the differences
are not dramatic on most metrics. This may be because the model can infer some of the knowledge
from the answers themselves, similar to how one might deduce knowledge points from the context
of an answer. We also observe that DeepSeek-Math is more efficient at learning from knowledge
points compared to Llama 3. We hypothesize that this is because the meta-information is generated
by DeepSeek-Coder V2, which may include pre-training data relevant to DeepSeek-Math, thereby
better activating the knowledge learned during its pre-training. The results indicate that scaling the
knowledge base to a million-level scale still yields improvements, underscoring the importance of
knowledge point learning in the curriculum learning process.
Figure 5: The top 5 topics and their corresponding frequencies after clustering in the three stages: (a) Weak Teacher Guiding, (b) Critical Expert Teaching, (c) Experts Consistency Voting.
3.7 EFFECT OF ANSWER CONSISTENCY
We first sample 100k synthetic data points, each containing two different types of responses to the
same set of questions: one consistent and one inconsistent. To investigate their accuracy, we use
GPT-4o to generate reference answers for the 100k synthetic data points and compare the generated
responses with the reference answers to measure consistency. Given that GPT-4o is currently among
the most advanced models in terms of mathematical capabilities, we can reasonably assume that
consistency rates closely reflect accuracy. As shown in Table 6, consistent responses improve accuracy by 11.9%. To further explore the impact of response consistency on model training, we conduct
an ablation study by replacing all consistent data in the Weak Teacher Guiding stage with data lacking consistency. As illustrated in Table 8 in Appendix E, the absence of consistency results in a significant decline in performance across all datasets. Response consistency not only
enhances accuracy and helps increase the difficulty of synthesized instructions but also contributes
to improving the model’s mathematical reasoning abilities during training.
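For concreteness, the sketch below shows one way such a CoT/PoT consistency check can be implemented: the \boxed{} answer is extracted from the CoT response and compared against the `result` variable produced by the PoT code, following the conventions used in our prompts (Appendix B). The helper names, the exec-based execution, and the numeric tolerance are illustrative assumptions rather than our exact implementation.

```python
import re


def extract_boxed_answer(cot_response: str) -> str:
    """Return the content of the last \\boxed{...} in a CoT response (simple, non-nested braces)."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", cot_response)
    return matches[-1].strip() if matches else ""


def run_pot_code(pot_code: str) -> str:
    """Execute PoT code that stores its final answer in a variable named `result`."""
    namespace: dict = {}
    exec(pot_code, namespace)  # NOTE: run inside a sandbox in practice
    return str(namespace.get("result", ""))


def answers_consistent(cot_response: str, pot_code: str, tol: float = 1e-6) -> bool:
    """Treat a CoT/PoT pair as consistent when both final answers agree."""
    cot_ans, pot_ans = extract_boxed_answer(cot_response), run_pot_code(pot_code)
    try:
        return abs(float(cot_ans) - float(pot_ans)) <= tol  # numeric comparison when possible
    except ValueError:
        return cot_ans == pot_ans  # fall back to exact string match for symbolic answers
```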
3.8 ANALYSIS
3.8.1 DATASETS EMBEDDING TOPICS
We conduct an in-depth analysis of the topics included in each stage of the problems, providing
direction for future synthesis of mathematical reasoning datasets. Specifically, we sample 100k data
points from each of the three stages, convert the knowledge base into embeddings using OpenAI’s
text-embedding-ada-002, and apply K-means++ (Arthur & Vassilvitskii, 2007) for clustering, resulting in 200 categories. For each category, we extract the central terms and identify the top five
topics along with their frequencies. As shown in Figure 5, in the initial stage the data predominantly
features simple and clear topics, such as arithmetic and kinematics. As the difficulty increases, more
complex subjects emerge, including linear algebra, optimization, and complex numbers. In the final
stage, financial-related problems appear, possibly due to the model’s difficulty in handling decimal
precision in financial interest calculations, where the reasoning path is correct, but the answers are
wrong. Therefore, future synthesis of mathematical reasoning problems should aim for more refined
generation and optimization, particularly targeting similar topics while balancing computational precision and cognitive complexity.
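The clustering pipeline can be reproduced with standard tooling; the sketch below assumes the knowledge-base embeddings have already been computed and uses scikit-learn's k-means++ initialization, with simple whitespace tokenization of the cluster members standing in for our term-extraction step.

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def cluster_topics(embeddings: np.ndarray, knowledge_texts: list, n_clusters: int = 200, top_k: int = 5):
    """Cluster precomputed knowledge-base embeddings with k-means++ and list frequent terms per cluster."""
    kmeans = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=0)
    labels = kmeans.fit_predict(embeddings)
    topics = {}
    for cluster_id in range(n_clusters):
        member_texts = [knowledge_texts[i] for i in np.where(labels == cluster_id)[0]]
        term_counts = Counter(word.lower() for text in member_texts for word in text.split())
        topics[cluster_id] = term_counts.most_common(top_k)  # top terms approximate the cluster topic
    return topics
```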
3.8.2 COST SAVING
Curriculum learning not only facilitates a gradual learning process but also enables the strategic
allocation of more resources to difficult problems. This approach can significantly reduce costs
compared to the traditional majority voting method. We analyze the number of tokens used and the
overall expenditure, as illustrated in Figure 6. By calculating the average token consumption for
inputs and outputs within the sampled dataset, and applying an exchange rate of 1:7 between USD
and CNY, we determine the overall cost of the dataset based on the API pricing for DeepSeek and
GPT-4o-0513. Our analysis shows that our approach is 2.82 times more cost-effective compared to
majority voting, leading to a total savings of over 20,000 US dollars in overall expenditure. Our
method resulted in a substantial reduction in costs while maintaining strong outcomes, demonstrating the scalability and cost-effectiveness of our approach.
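The cost analysis itself reduces to simple bookkeeping over token counts and per-token prices; the sketch below illustrates the calculation, with all prices, call counts, and the sampling multiplicity of the majority-voting baseline shown as placeholders rather than the actual DeepSeek or GPT-4o-0513 rates.

```python
def stage_cost_usd(num_calls: int, avg_input_tokens: float, avg_output_tokens: float,
                   usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Estimate API spend for one synthesis stage from average token usage and per-1M-token prices."""
    input_cost = num_calls * avg_input_tokens / 1e6 * usd_per_m_input
    output_cost = num_calls * avg_output_tokens / 1e6 * usd_per_m_output
    return input_cost + output_cost


# Hypothetical comparison: prices and token counts below are placeholders, not the real API rates;
# CNY-denominated prices would first be converted to USD at the 1:7 rate mentioned above.
curriculum_cost = stage_cost_usd(640_000, 1_000, 700, usd_per_m_input=0.5, usd_per_m_output=1.5)
voting_cost = stage_cost_usd(640_000 * 4, 1_000, 700, usd_per_m_input=0.5, usd_per_m_output=1.5)
```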
Figure 6: The left figure illustrates the average number of input and output tokens at different stages,
while the right figure presents the monetary cost associated with input and output tokens, as well as
the total expenditure.
4 RELATED WORK
**Instruction Tuning. Instruction fine-tuning has been widely validated as an effective method for**
enhancing model capabilities. Previous work (Kang et al. (2024); Chen et al. (2024); Xia et al.
(2024); Li et al. (2024d)) has primarily focused on improving model performance through the careful
selection of high-quality data. However, recent literature (Tang et al. (2024); Chung et al. (2022);
Yue et al. (2024b); Tang et al. (2024)) suggests that increasing the volume of data can also contribute
to performance gains. Unlike efforts (Zhang et al. (2024); Muennighoff et al. (2023)) that aim to
broadly enhance general model capabilities, our instruction fine-tuning is specifically designed to
improve complex mathematical reasoning skills.
**Mathematical Reasoning. To enhance answer accuracy on challenging benchmarks, recent re-**
search (Gou et al. (2024); Zhou et al. (2024); Wang et al. (2024b)) has increasingly focused on
leveraging external tools to improve large language models’ (LLMs) ability to solve mathematical
problems and achieve higher scores. Most approaches utilize Program of Thought (PoT) methods,
employing code interpreters to compute the final result (numerous methods cited here). Additionally, some methods (Wang et al. (2023a); Aggarwal et al. (2023); Wang et al. (2024a); Shao et al.
(2024)) adopt self-consistency techniques, ensembling multiple outputs to achieve better results. In
contrast, we concentrate on improving the model’s intrinsic reasoning capabilities, relying solely on
Chain of Thought (CoT) approaches to develop the model’s inherent mathematical reasoning skills.
**Distillation. Training smaller student models using synthetic data generated by more powerful**
teacher models has been widely validated as an effective approach (Xu et al. (2024); Li et al. (2024e);
Wang et al. (2024c); Li et al. (2024b)) . In the domain of mathematical reasoning, this effectiveness has also been demonstrated (Yu et al. (2024); Yue et al. (2024a); Tang et al. (2024); Li et al.
(2024a); Azerbayev et al. (2024)). However, these methods still exhibit a significant gap in performance when tackling challenging mathematical problems compared to closed-source models. Our
work addresses this gap by employing curriculum learning, which not only synthesizes diverse and
challenging responses but also gradually generates problems with greater diversity and complexity,
thereby significantly narrowing the performance gap with closed-source models.
5 CONCLUSION
In this work, we propose WISDOM as a data-centric framework to enhance the mathematical reasoning abilities of LLMs. The key insight behind WISDOM is the use of progressive curriculum
synthesis, which evolves both questions and answers from easy to hard without relying on ground
truths. Compared to traditional majority voting, WISDOM is 2.82x more cost-effective. To validate
the effectiveness of the synthesized datasets, we fine-tune a series of open-sourced LLMs ranging
from 7B to 72B parameters. Experimental results show that the fine-tuned LLMs achieve significant
improvements over the base models, highlighting the generalization capabilities of WISDOM. Our
empirical findings also offer valuable insights into tackling challenging problems, paving the way
for future complex reasoning across various fields.
REFERENCES
Pranjal Aggarwal, Aman Madaan, Yiming Yang, and Mausam. Let's sample step by step: Adaptive-consistency for efficient reasoning and coding with LLMs. In Houda Bouamor, Juan Pino, and
Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Lan_guage Processing, EMNLP 2023, Singapore, December 6-10, 2023, pp. 12375–12396. Associ-_
ation for Computational Linguistics, 2023. doi: 10.18653/V1/2023.EMNLP-MAIN.761. URL
[https://doi.org/10.18653/v1/2023.emnlp-main.761.](https://doi.org/10.18653/v1/2023.emnlp-main.761)
AI@Meta. Llama 3 model card. 2024. [URL https://github.com/meta-llama/](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)
[llama3/blob/main/MODEL_CARD.md.](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)
David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In Nikhil
Bansal, Kirk Pruhs, and Clifford Stein (eds.), Proceedings of the Eighteenth Annual ACM-SIAM
_Symposium on Discrete Algorithms, SODA 2007, New Orleans, Louisiana, USA, January 7-9,_
_2007, pp. 1027–1035. SIAM, 2007._ [URL http://dl.acm.org/citation.cfm?id=](http://dl.acm.org/citation.cfm?id=1283383.1283494)
[1283383.1283494.](http://dl.acm.org/citation.cfm?id=1283383.1283494)
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen Marcus McAleer,
Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language
model for mathematics. In The Twelfth International Conference on Learning Representa_tions, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024._ [URL https:](https://openreview.net/forum?id=4WnqRR915j)
[//openreview.net/forum?id=4WnqRR915j.](https://openreview.net/forum?id=4WnqRR915j)
Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, and Dong Yu. Scaling synthetic data creation with
1,000,000,000 personas. CoRR, abs/2406.20094, 2024. doi: 10.48550/ARXIV.2406.20094. URL
[https://doi.org/10.48550/arXiv.2406.20094.](https://doi.org/10.48550/arXiv.2406.20094)
Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang,
Vijay Srinivasan, Tianyi Zhou, Heng Huang, and Hongxia Jin. Alpagasus: Training a better alpaca with fewer data. In The Twelfth International Conference on Learning Represen_[tations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https:](https://openreview.net/forum?id=FdVXgSJhvz)_
[//openreview.net/forum?id=FdVXgSJhvz.](https://openreview.net/forum?id=FdVXgSJhvz)
Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, and
Tony Xia. Theoremqa: A theorem-driven question answering dataset. In Proceedings of the 2023
_Conference on Empirical Methods in Natural Language Processing, pp. 7889–7901, 2023._
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai,
Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams
Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff
Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instructionfinetuned language models. CoRR, abs/2210.11416, 2022. doi: 10.48550/ARXIV.2210.11416.
[URL https://doi.org/10.48550/arXiv.2210.11416.](https://doi.org/10.48550/arXiv.2210.11416)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
DeepSeek-AI, Qihao Zhu, Daya Guo, Zhihong Shao, Dejian Yang, Peiyi Wang, Runxin Xu, Y. Wu,
Yukun Li, Huazuo Gao, Shirong Ma, Wangding Zeng, Xiao Bi, Zihui Gu, Hanwei Xu, Damai
Dai, Kai Dong, Liyue Zhang, Yishi Piao, Zhibin Gou, Zhenda Xie, Zhewen Hao, Bingxuan Wang,
Junxiao Song, Deli Chen, Xin Xie, Kang Guan, Yuxiang You, Aixin Liu, Qiushi Du, Wenjun Gao,
Xuan Lu, Qinyu Chen, Yaohui Wang, Chengqi Deng, Jiashi Li, Chenggang Zhao, Chong Ruan,
Fuli Luo, and Wenfeng Liang. Deepseek-coder-v2: Breaking the barrier of closed-source models
in code intelligence. CoRR, abs/2406.11931, 2024. doi: 10.48550/ARXIV.2406.11931. URL
[https://doi.org/10.48550/arXiv.2406.11931.](https://doi.org/10.48550/arXiv.2406.11931)
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Minlie Huang, Nan Duan, and
Weizhu Chen. Tora: A tool-integrated reasoning agent for mathematical problem solving. In
_The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria,_
_[May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=Ep0TtjVoap)_
[Ep0TtjVoap.](https://openreview.net/forum?id=Ep0TtjVoap)
Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu,
Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for
promoting agi with olympiad-level bilingual multimodal scientific problems. _arXiv preprint_
_arXiv:2402.14008, 2024._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021._
Yiming Huang, Xiao Liu, Yeyun Gong, Zhibin Gou, Yelong Shen, Nan Duan, and Weizhu
Chen. Key-point-driven data synthesis with its enhancement on mathematical reasoning. CoRR,
[abs/2403.02333, 2024. doi: 10.48550/ARXIV.2403.02333. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2403.02333)
[48550/arXiv.2403.02333.](https://doi.org/10.48550/arXiv.2403.02333)
Dhiraj D. Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee,
Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen,
Jiyan Yang, Jongsoo Park, Alexander Heinecke, Evangelos Georganas, Sudarshan Srinivasan,
Abhisek Kundu, Misha Smelyanskiy, Bharat Kaul, and Pradeep Dubey. A study of BFLOAT16
[for deep learning training. CoRR, abs/1905.12322, 2019. URL http://arxiv.org/abs/](http://arxiv.org/abs/1905.12322)
[1905.12322.](http://arxiv.org/abs/1905.12322)
Feiyang Kang, Hoang Anh Just, Yifan Sun, Himanshu Jahagirdar, Yuanzhi Zhang, Rongxing Du,
Anit Kumar Sahu, and Ruoxi Jia. Get more for less: Principled data selection for warming up fine-tuning in llms. In The Twelfth International Conference on Learning Represen_[tations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https:](https://openreview.net/forum?id=QmYNBVukex)_
[//openreview.net/forum?id=QmYNBVukex.](https://openreview.net/forum?id=QmYNBVukex)
Mario Michael Krell, Matej Kosec, Sergio P Perez, and Andrew Fitzgibbon. Efficient sequence
packing without cross-contamination: Accelerating large language models without impacting performance. arXiv preprint arXiv:2107.02027, 2021.
Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, and
Houwen Peng. Common 7b language models already possess strong math capabilities. CoRR,
[abs/2403.04706, 2024a. doi: 10.48550/ARXIV.2403.04706. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2403.04706)
[48550/arXiv.2403.04706.](https://doi.org/10.48550/arXiv.2403.04706)
Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang,
Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng,
Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Benyou Wang, Wai Lam, and Furu Wei.
Synthetic data (almost) from scratch: Generalized instruction tuning for language models. CoRR,
[abs/2402.13064, 2024b. doi: 10.48550/ARXIV.2402.13064. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2402.13064)
[48550/arXiv.2402.13064.](https://doi.org/10.48550/arXiv.2402.13064)
Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang,
Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann
Fleureau, Guillaume Lample, and Stanislas Polu. Numinamath. GitHub repository, 2024c.
Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Jiuxiang Gu, and Tianyi Zhou. Selective reflectiontuning: Student-selected data recycling for LLM instruction-tuning. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics, ACL
_2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pp. 16189–16211. Associa-_
tion for Computational Linguistics, 2024d. doi: 10.18653/V1/2024.FINDINGS-ACL.958. URL
[https://doi.org/10.18653/v1/2024.findings-acl.958.](https://doi.org/10.18653/v1/2024.findings-acl.958)
Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston,
and Mike Lewis. Self-alignment with instruction backtranslation. In The Twelfth International
_Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenRe-_
[view.net, 2024e. URL https://openreview.net/forum?id=1oijHJBRsT.](https://openreview.net/forum?id=1oijHJBRsT)
Yujun Lin, Song Han, Huizi Mao, Yu Wang, and Bill Dally. Deep gradient compression: Reducing
the communication bandwidth for distributed training. In 6th International Conference on Learn_ing Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference_
_[Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=SkhQHMW0W)_
[SkhQHMW0W.](https://openreview.net/forum?id=SkhQHMW0W)
Liangxin Liu, Xuebo Liu, Derek F. Wong, Dongfang Li, Ziyi Wang, Baotian Hu, and Min
Zhang. Selectit: Selective instruction tuning for large language models via uncertainty-aware
self-reflection. _CoRR, abs/2402.16705, 2024a._ doi: 10.48550/ARXIV.2402.16705. URL
[https://doi.org/10.48550/arXiv.2402.16705.](https://doi.org/10.48550/arXiv.2402.16705)
Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, and Junxian He. What makes good data for
alignment? A comprehensive study of automatic data selection in instruction tuning. In The
_Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria,_
_[May 7-11, 2024. OpenReview.net, 2024b. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=BTKAeLqLMw)_
[BTKAeLqLMw.](https://openreview.net/forum?id=BTKAeLqLMw)
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike
Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining
[approach. CoRR, abs/1907.11692, 2019. URL http://arxiv.org/abs/1907.11692.](http://arxiv.org/abs/1907.11692)
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In 7th International
_Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019._
[OpenReview.net, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.](https://openreview.net/forum?id=Bkg6RiCqY7)
Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu,
and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. Advances in Neural Information Processing Systems, 36, 2024a.
Zimu Lu, Aojun Zhou, Houxing Ren, Ke Wang, Weikang Shi, Junting Pan, Mingjie Zhan, and
Hongsheng Li. Mathgenie: Generating synthetic data with question back-translation for enhancing mathematical reasoning of llms. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.),
_Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Vol-_
_ume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pp. 2732–2747. As-_
sociation for Computational Linguistics, 2024b. doi: 10.18653/V1/2024.ACL-LONG.151. URL
[https://doi.org/10.18653/v1/2024.acl-long.151.](https://doi.org/10.18653/v1/2024.acl-long.151)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning
for large language models via reinforced evol-instruct. CoRR, abs/2308.09583, 2023. doi: 10.
[48550/ARXIV.2308.09583. URL https://doi.org/10.48550/arXiv.2308.09583.](https://doi.org/10.48550/arXiv.2308.09583)
Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Nouamane Tazi, Aleksandra
Piktus, Sampo Pyysalo, Thomas Wolf, and Colin A. Raffel. Scaling data-constrained language
models. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey
Levine (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on
_Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December_
_[10 - 16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/](http://papers.nips.cc/paper_files/paper/2023/hash/9d89448b63ce1e2e8dc7af72c984c196-Abstract-Conference.html)_
[hash/9d89448b63ce1e2e8dc7af72c984c196-Abstract-Conference.html.](http://papers.nips.cc/paper_files/paper/2023/hash/9d89448b63ce1e2e8dc7af72c984c196-Abstract-Conference.html)
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations
toward training trillion parameter models. In SC20: International Conference for High Perfor_mance Computing, Networking, Storage and Analysis, pp. 1–16. IEEE, 2020._
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li,
Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open
language models. _CoRR, abs/2402.03300, 2024._ doi: 10.48550/ARXIV.2402.03300. URL
[https://doi.org/10.48550/arXiv.2402.03300.](https://doi.org/10.48550/arXiv.2402.03300)
Zhengyang Tang, Xingxing Zhang, Benyou Wang, and Furu Wei. Mathscale: Scaling instruction
tuning for mathematical reasoning. In Forty-first International Conference on Machine Learn_ing, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024._ [URL https:](https://openreview.net/forum?id=Kjww7ZN47M)
[//openreview.net/forum?id=Kjww7ZN47M.](https://openreview.net/forum?id=Kjww7ZN47M)
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023.
Mistral AI Team. Mathstral model card. 2024. [URL https://mistral.ai/news/](https://mistral.ai/news/mathstral/)
[mathstral/.](https://mistral.ai/news/mathstral/)
Yuxuan Tong, Xiwen Zhang, Rui Wang, Ruidong Wu, and Junxian He. Dart-math: Difficulty-aware
rejection tuning for mathematical problem-solving. CoRR, abs/2407.13690, 2024. doi: 10.48550/
[ARXIV.2407.13690. URL https://doi.org/10.48550/arXiv.2407.13690.](https://doi.org/10.48550/arXiv.2407.13690)
Ante Wang, Linfeng Song, Ye Tian, Baolin Peng, Lifeng Jin, Haitao Mi, Jinsong Su, and Dong Yu.
Self-consistency boosts calibration for math reasoning. CoRR, abs/2403.09849, 2024a. doi: 10.
[48550/ARXIV.2403.09849. URL https://doi.org/10.48550/arXiv.2403.09849.](https://doi.org/10.48550/arXiv.2403.09849)
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi
Song, Mingjie Zhan, and Hongsheng Li. Mathcoder: Seamless code integration in llms for
enhanced mathematical reasoning. In The Twelfth International Conference on Learning Rep_resentations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024b._ URL
[https://openreview.net/forum?id=z8TW0ttBPp.](https://openreview.net/forum?id=z8TW0ttBPp)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Ki_[gali, Rwanda, May 1-5, 2023. OpenReview.net, 2023a. URL https://openreview.net/](https://openreview.net/forum?id=1PL1NIMMrw)_
[forum?id=1PL1NIMMrw.](https://openreview.net/forum?id=1PL1NIMMrw)
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In
Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual
_Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023,_
_Toronto, Canada, July 9-14, 2023, pp. 13484–13508. Association for Computational Linguistics,_
[2023b. doi: 10.18653/V1/2023.ACL-LONG.754. URL https://doi.org/10.18653/](https://doi.org/10.18653/v1/2023.acl-long.754)
[v1/2023.acl-long.754.](https://doi.org/10.18653/v1/2023.acl-long.754)
Zifeng Wang, Chun-Liang Li, Vincent Perot, Long T. Le, Jin Miao, Zizhao Zhang, Chen-Yu Lee,
and Tomas Pfister. Codeclm: Aligning language models with tailored synthetic data. CoRR,
[abs/2404.05875, 2024c. doi: 10.48550/ARXIV.2404.05875. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2404.05875)
[48550/arXiv.2404.05875.](https://doi.org/10.48550/arXiv.2404.05875)
Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. LESS:
selecting influential data for targeted instruction tuning. In Forty-first International Conference
_on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024._
[URL https://openreview.net/forum?id=PG5fV50maR.](https://openreview.net/forum?id=PG5fV50maR)
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. Wizardlm: Empowering large pre-trained language models to follow complex instructions. In The Twelfth International Conference on Learning Representa_tions, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024._ [URL https:](https://openreview.net/forum?id=CfXh93NDgH)
[//openreview.net/forum?id=CfXh93NDgH.](https://openreview.net/forum?id=CfXh93NDgH)
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li,
Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. _arXiv preprint_
_arXiv:2407.10671, 2024._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok,
Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical
questions for large language models. In The Twelfth International Conference on Learning
_Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL_
[https://openreview.net/forum?id=N8N0hgNDRt.](https://openreview.net/forum?id=N8N0hgNDRt)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu
Chen. Mammoth: Building math generalist models through hybrid instruction tuning. In The
_Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria,_
_[May 7-11, 2024. OpenReview.net, 2024a. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=yLClGs770I)_
[yLClGs770I.](https://openreview.net/forum?id=yLClGs770I)
Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. Mammoth2: Scaling instructions from
[the web. CoRR, abs/2405.03548, 2024b. doi: 10.48550/ARXIV.2405.03548. URL https:](https://doi.org/10.48550/arXiv.2405.03548)
[//doi.org/10.48550/arXiv.2405.03548.](https://doi.org/10.48550/arXiv.2405.03548)
Biao Zhang, Zhongtao Liu, Colin Cherry, and Orhan Firat. When scaling meets LLM finetuning:
The effect of data, model and finetuning method. In The Twelfth International Conference on
_Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024._
[URL https://openreview.net/forum?id=5HCnKDeTws.](https://openreview.net/forum?id=5HCnKDeTws)
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and
Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Pro_ceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume_
_3: System Demonstrations), Bangkok, Thailand, 2024. Association for Computational Linguis-_
[tics. URL http://arxiv.org/abs/2403.13372.](http://arxiv.org/abs/2403.13372)
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia,
Linqi Song, Mingjie Zhan, and Hongsheng Li. Solving challenging math word problems using
GPT-4 code interpreter with code-based self-verification. In The Twelfth International Conference
_on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net,_
[2024. URL https://openreview.net/forum?id=c8McWs4Av0.](https://openreview.net/forum?id=c8McWs4Av0)
Qihao Zhu, Daya Guo, Zhihong Shao, Dejian Yang, Peiyi Wang, Runxin Xu, Y Wu, Yukun Li,
Huazuo Gao, Shirong Ma, et al. Deepseek-coder-v2: Breaking the barrier of closed-source models
in code intelligence. arXiv preprint arXiv:2406.11931, 2024.
A EXPERIMENTAL SETUP
A.1 TRAINING SETUP
We employ LLaMA-Factory (Zheng et al., 2024) to fine-tune the entire suite of models and use sequence packing (Krell et al., 2021) to accelerate training. To accommodate the large model sizes during training, we leverage DeepSpeed ZeRO Stage 3 (Rajbhandari et al., 2020). For data preprocessing, we apply the Alpaca prompt format (Taori et al., 2023).
Table 7: The composition and sources of data for
the proposed WISDOM dataset.
| Dataset | Pairs | Dataset Source |
|---|---|---|
| Weak Teacher Guiding | 640K | DeepSeek-Coder V2 |
| Critical Expert Teaching | 527K | GPT-4o-0513 |
| Experts Consistency Voting | 300K | GPT-4o-0513 |
Training is conducted on 88 NVIDIA A800 GPUs with a per-device batch size of 1, gradient accumulation (Lin et al., 2018) over 2 steps, a sequence length of 8192, and bf16 precision (Kalamkar et al., 2019). We optimize the models with the AdamW optimizer (Loshchilov & Hutter, 2019) using a cosine learning-rate schedule with a warmup ratio of 0.03 and train each model for 3 epochs. Learning rates are adjusted slightly per model: Mistral-7B at 1e-5, DeepSeekMath-7B at 5e-5, Llama3-8B at 4e-5, and both Llama3-70B and Qwen2-72B at 2e-5.
All final results for the models were obtained using the full dataset. The specific composition of the
dataset can be found in Table 7.
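For reference, the hyperparameters above correspond roughly to the following Hugging Face `TrainingArguments`; this is an illustrative sketch (training is actually launched through LLaMA-Factory with DeepSpeed), and the output and DeepSpeed paths are placeholders.

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters (Llama3-8B setting); the 8192-token
# sequence length is enforced at tokenization/packing time rather than here.
training_args = TrainingArguments(
    output_dir="./wisdom-llama3-8b-sft",      # placeholder path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    learning_rate=4e-5,                       # 1e-5 / 5e-5 / 2e-5 for the other base models
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    bf16=True,
    deepspeed="ds_zero3_config.json",         # ZeRO Stage 3 configuration (placeholder)
    logging_steps=10,
    save_strategy="epoch",
)
```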
A.2 DATA CONTAMINATION
To mitigate the risk of data contamination, we apply a 10-gram hash deduplication method (Liu et al., 2019) to the questions in both our in-domain and out-of-domain benchmarks, flagging a pair as a duplicate when the ratio of the longest common sequence exceeds 0.6 (Yang et al., 2024); any detected duplicates are removed. After deduplication, the remaining samples are: 640,987 of 641,514 in Weak Teacher Guiding, 527,537 of 527,658 in Critical Expert Teaching, and 298,118 of 298,190 in Experts Consistency Voting. All experiments are conducted on these deduplicated datasets to prevent potential data contamination.
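A minimal sketch of this decontamination filter is given below, where word-level 10-gram hashing flags candidate overlaps and difflib's matching-sequence ratio stands in for the longest-common-sequence criterion; the function names and the fallback tokenization are assumptions, so this approximates rather than reproduces our exact implementation.

```python
from difflib import SequenceMatcher


def ngram_hashes(text: str, n: int = 10) -> set:
    """Hash every word-level 10-gram of a question for fast candidate matching."""
    tokens = text.lower().split()
    return {hash(tuple(tokens[i:i + n])) for i in range(max(len(tokens) - n + 1, 1))}


def is_contaminated(question: str, benchmark_questions: list, threshold: float = 0.6) -> bool:
    """Flag a synthetic question that shares a 10-gram with a benchmark question and whose
    matching-sequence ratio exceeds the 0.6 threshold."""
    q_hashes = ngram_hashes(question)
    for bench_q in benchmark_questions:
        if q_hashes & ngram_hashes(bench_q):  # cheap candidate filter
            if SequenceMatcher(None, question.lower(), bench_q.lower()).ratio() > threshold:
                return True
    return False


# Keep only synthetic questions that do not overlap with any evaluation benchmark.
# clean_data = [q for q in synthetic_questions if not is_contaminated(q, benchmark_questions)]
```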
A.3 DETAIL DATASETS
The following provides a detailed description of the composition of the evaluation set.
• GSM8K: The test set consists of 1,319 high-quality grade-school mathematics problems, primarily used to evaluate fundamental logical reasoning and applied mathematical abilities.
• MATH: 5,000 curated high-school competition-level test problems, spanning diverse areas such as Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus.
• College MATH: A total of 2,818 college-level mathematics problems extracted from nine textbooks covering seven domains. Mathematical reasoning ability can be assessed from multiple skill perspectives, such as analytical thinking, logical reasoning, and quantitative analysis.
• OlympiadBench-Math: The text-only English subset of OlympiadBench, consisting of 675 Olympiad-level mathematical problems, designed to evaluate complex and advanced mathematical reasoning abilities.
• TabMWP: A large-scale dataset of math word problems in tabular contexts. The test set includes 1,000 questions with tabular contexts, allowing for a comprehensive evaluation of mathematical reasoning over tables.
• AMC-AIME: These competitions are designed to select students who will represent the United States at the International Mathematical Olympiad (IMO). The evaluation sets include 30 competition-level problems from AIME 2024 and 40 from AMC 2023, covering a broad spectrum of problem-solving skills
such as arithmetic, algebra, combinatorics, geometry, number theory, and probability.
B PROMPTS
B.1 META-INFORMATION GENERATION PROMPT
**Example B.1: Meta-information Generation Prompt**
As a math mentor, you are dedicated to helping your students understand and master key mathematical concepts and problem-solving techniques. Your goal is to provide clear, concise guidance and support. When guiding, you must not give out the answer to the original problem.
## Skill Label
Consider the following mathematical question. Label this question with a specific mathematical
skill required to solve it. The skill name should:
- Be in lowercase letters only.
- Be very descriptive.
- Use multiple words joined by an underscore if necessary.
- Enclose the content within <skill> and </skill> tags.
## Topic Label
Consider the following mathematical question. Label this question with the specific mathematical topic it belongs to. The topic name should:
- Be in lowercase letters only.
- Be specific and descriptive.
- Use underscores to join multiple words if necessary.
- Enclose the content within <topic> and </topic> tags.
## Knowledge Base
Provide comprehensive information necessary for understanding the mathematical concepts related to the given problem, without including step-by-step procedures or any information that could directly solve the problem. Include the following:
- Key definitions and explanations.
- General relevant formulas and equations (without applying them to the specific problem).
- Common pitfalls and misconceptions.
- Additional theoretical insights related to the topic.
- Do not include any visual or diagram-related knowledge.
- Enclose the content within <knowledge base> and </knowledge base> tags.
## Similar Problem Types
Provide up to two examples and solutions of similar problem types to help students recognize
patterns and apply similar problem-solving methods. For each example:
- State the problem.
- Provide a detailed solution.
- Highlight the similarities to the original question.
- Explain how the solution method can be applied to the original question.
- Do not include any visual or diagram-related knowledge.
- Enclose the content within <similar problems> and </similar problems>
tags.
## Question
_{question}_
B.2 QUESTION GENERATION PROMPT
**Example B.2: Question Generation Prompt**
**Role: You are a creative math professor.**
**Objective: Help senior students learn the following math key skills and topics, create high**
quality math word problems to help students learn math.
**Task:**
1. Using the listed key skills and topics as guidelines, construct multiple, original math
problems. Each problem should be written on a new line.
2. Ensure each problem has a fixed and unique answer.
3. Increase the difficulty of the problems by incorporating foundational knowledge and
common pitfalls.
4. Problems can be generated using a single topic and skill or by combining multiple
topics and skills for higher quality questions.
5. Reference the given problems and maximize the combination of topics and skills to
rewrite and deepen the difficulty, ensuring no conflicts between topics and skills.
6. Based on the example problem, imagine what specific field of study would delve into
such data and generate the problem as if it were created by someone in that field.
7. Ensure that the generated questions are solvable and not inherently unsolvable.
8. Each generated question must be a single question without any sub-questions.
9. Ensure that the generated questions are as quantitative as possible, focusing on problems that can have numerical solutions.
**Instructions:**
- Write each new math problem on a new line.
- Use <question> to indicate the beginning of the question.
- Use </question> to indicate the end of the question.
**Topics and skills:**
- Skills: {skills}
- Topics: {topics}
**Knowledge base:**
_{knowledge base}_
**Easy example problems:**
_{problems}_
**Expanded Requirements:**
1. Generate up to eight questions per response, with each question on a new line.
2. Each generated question must have a fixed and unique answer.
3. Increase the difficulty of the questions using foundational knowledge and common pitfalls.
4. Use a single topic and skill or combine multiple topics and skills to create higher quality
questions.
5. Reference the given questions and maximize the combination of topics and skills to
rewrite and deepen the difficulty, ensuring no conflicts between topics and skills.
6. Each generated question must be a single question without any sub-questions.
7. Ensure that the generated questions are as quantitative as possible, focusing on problems that can have numerical solutions.
B.3 CRITICAL EXPERT TEACHING PROMPT
**Example B.3: Critical Expert Teaching Prompt**
**Task: Solve a Complex Math Problem with Step-by-Step Explanation and Python Code**
**Instructions:**
1. Read the math problem carefully.
2. Compare the reference solution and code output, identify discrepancies, and analyze the reasons for these discrepancies in the section between <reference solution analysis> and </reference solution analysis>.
3. Think through the correct solution step-by-step.
4. At the end of your step-by-step solution, provide the final answer in the format:
boxed{final result}.
5. After presenting the final answer, write a Python code that demonstrates the solution
process.
6. Ensure the final output of the code is stored in a variable named result.
7. Provide your final solution in the section between <solution> and </solution>.
BACKGROUND
I am working on a complex math problem and need your assistance. I have an incorrect reference
solution and a result obtained from running a code. Please help me understand the discrepancies
and find the correct solution.
REFERENCE SOLUTION
_{reference solution}_
CODE OUTPUT
_{code output}_
PROBLEM
_{question}_
ANALYSIS
<reference solution analysis>
Please analyze the discrepancies between the reference solution and the code output here.
</reference solution analysis>
SOLUTION
<solution>
Please provide a step-by-step solution here, including the final answer in the format:
boxed{final result}.
After presenting the final answer, write a Python code that demonstrates the solution process.
Make sure the final answer in the format boxed{final result} is presented before the
Python code.
Ensure the final output of the code is stored in a variable named result.
</solution>
B.4 WEAK TEACHER GUIDING PROMPT
**Example B.4: Weak Teacher Guiding Prompt**
You are a math professor and good at writing python code.
**Task: Solve a Complex Math Problem with Step-by-Step Explanation and Python Code**
**Instructions:**
1. Read the math problem carefully.
2. Think through the solution step-by-step.
3. At the end of your step-by-step solution, provide the final answer in the format:
boxed{final result}.
4. After presenting the final answer, write a Python code that demonstrates the solution
process.
5. The generated code should not include any visualization libraries such as matplotlib.
**Problem: {question}**
B.5 EXPERTS CONSISTENCY VOTING PROMPT
**Example B.5: Experts Consistency Voting Prompt**
**Problem:**
_{question}_
Answer the problem step by step, ensuring that each step is clearly explained and logically
connected to the next. While the solution process should be detailed and thorough, ensure that
the final answer is formatted as \boxed{final_result}.
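The boxed answers elicited by this prompt are then aggregated across experts; the sketch below illustrates a simple agreement-based vote over already-extracted answers and is a hypothetical simplification of the consistency rules used in the Experts Consistency Voting stage (the normalization and the `min_agree` threshold are assumptions).

```python
from collections import Counter
from typing import List, Optional


def consistency_vote(expert_answers: List[str], min_agree: int = 2) -> Optional[str]:
    """Return the answer shared by at least `min_agree` experts, otherwise discard the question."""
    normalized = [ans.strip().replace(" ", "") for ans in expert_answers if ans]
    if not normalized:
        return None
    answer, count = Counter(normalized).most_common(1)[0]
    return answer if count >= min_agree else None
```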
C DETAILED SCALING ACCURACY
Figure 7: The accuracy results from left to right are for GSM8K, College Math, and Olympiad,
respectively.
Figure 8: The accuracy results from left to right are for TabMWP and TheoremQA, respectively.
D DETAILED SAMPLES
**Example D.1: Weak Teacher Guiding Sample**
**Question: In a certain chemical reaction, the ratio of reactant A to reactant B must be 3:7. If**
you have 21 grams of reactant B, how many grams of reactant A do you need?
**Response:**
<knowledge_base_start>
### Key Definitions and Explanations
- **Ratio**: A ratio compares two quantities, often expressed as 3:7, meaning that for every 3
parts of one substance, there are 7 parts of another.
- **Proportion**: A proportion is an equation that shows two ratios are equal. Proportions help
solve for unknown values when part of the ratio is given.
### General Relevant Formulas and Equations
- **Proportion Formula**: If two ratios are equal, they can be set up as a proportion: $\frac{a}{b} = \frac{c}{d}$, where $a$ and $b$ are the known quantities, and $c$ or $d$ is the unknown to be solved.
### Common Pitfalls and Misconceptions
- **Reversing the Ratio**: Be sure to follow the correct order in ratios. For example, the ratio
A:B is not the same as B:A.
- **Incorrect Proportional Set-Up**: Misinterpreting how to set up the proportion can lead to
incorrect solutions. It’s important to align the ratio and the actual quantities properly.
### Additional Theoretical Insights
- **Application of Ratios in Chemistry**: Ratios and proportions are critical in chemistry when
mixing reactants in fixed proportions to ensure reactions occur as intended.
- **Scaling Proportions**: In practical scenarios, such as scaling up chemical reactions, understanding proportions is key to calculating the correct amounts of reactants.
</knowledge_base_end>
<solution_start>
### Step-by-Step Solution:
1. **Understand the ratio**: The ratio of reactant A to reactant B is given as 3:7. This means
for every 3 parts of reactant A, there are 7 parts of reactant B.
2. **Set up the proportion**: Let $x$ be the amount of reactant A needed. Using the ratio, we set up the proportion:
$$\frac{x}{21} = \frac{3}{7}$$
where 21 is the given amount of reactant B.
3. **Solve for x**: Cross-multiply to solve for x:
7x = 3 × 21
4. **Calculate the right-hand side**: Perform the multiplication:
3 × 21 = 63
5. **Solve for $x$**: Divide both sides by 7:
$$7x = 63 \quad\Longrightarrow\quad x = \frac{63}{7} = 9$$
### Final Answer:
9
Thus, you need 9 grams of reactant A.
</solution_end>
**Example D.2: Critical Expert Teaching Sample**
**Question: Find the values of $x$ and $y$ that satisfy the system of equations $x^2 + y^2 = 10$ and $x^2 - y^2 = 6$.**
**Response:**
<knowledge_base_start>
### Key Definitions and Explanations
- **System of Equations**: A set of equations with the same variables, solved simultaneously.
- **Substitution and Elimination Methods**: Substitution solves one equation for one variable
and substitutes it into the other equation. Elimination adds or subtracts equations to eliminate
one variable.
- **Algebraic Operations**: Basic operations (addition, subtraction, multiplication, division)
used to manipulate and solve equations.
### General Relevant Formulas and Equations
- For two equations like $x^2 + y^2 = 10$ and $x^2 - y^2 = 6$, adding or subtracting them simplifies the system and isolates a variable.
### Common Pitfalls and Misconceptions
- **Misinterpreting Operations**: Errors in tracking signs during addition/subtraction.
- **Square Root Ambiguities**: Both positive and negative roots must be considered.
### Additional Theoretical Insights
- **Graphical Interpretation**: Each equation represents a curve, and solutions are where these
curves intersect.
</knowledge_base_end>
<solution_start>
Let's solve the given system of equations step-by-step:

1. **Understand the Problem**: We are given a system of two equations:
$$x^2 + y^2 = 10, \qquad x^2 - y^2 = 6$$

2. **Add the Two Equations**: Adding the two equations, we get:
$$(x^2 + y^2) + (x^2 - y^2) = 10 + 6$$
Simplifying this, we obtain:
$$2x^2 = 16$$
Dividing both sides by 2, we get:
$$x^2 = 8$$
Taking the square root of both sides, we find:
$$x = \pm\sqrt{8} = \pm 2\sqrt{2}$$

3. **Substitute $x^2$ Back into One of the Original Equations**: Substitute $x^2 = 8$ into the first equation $x^2 + y^2 = 10$:
$$8 + y^2 = 10$$
Subtracting 8 from both sides, we get:
$$y^2 = 2$$
Taking the square root of both sides, we find:
$$y = \pm\sqrt{2}$$

### Final Answer:
$$(x, y) = \left(2\sqrt{2},\, \sqrt{2}\right),\ \left(2\sqrt{2},\, -\sqrt{2}\right),\ \left(-2\sqrt{2},\, \sqrt{2}\right),\ \left(-2\sqrt{2},\, -\sqrt{2}\right)$$
**Example D.3: Experts Consistency Voting Sample**
**Question: Given vectors a = ⟨1, 2, −1⟩** and b = ⟨3, −1, 2⟩, find a unit vector c such that
**a · c = b · c = 0. Express your answer in terms of i, j, and k unit vectors.**
**Response:**
<knowledge_base_start>
### Key Definitions and Explanations
- **Dot Product**: The dot product of two vectors $\mathbf{u}$ and $\mathbf{v}$, denoted $\mathbf{u} \cdot \mathbf{v}$, is $u_1v_1 + u_2v_2 + u_3v_3$. For vectors to be orthogonal, their dot product must be zero.
- **Unit Vector**: A vector with magnitude 1, indicating direction without scale.
- **Orthogonal Vectors**: Vectors are orthogonal if their dot product equals zero.
### General Relevant Formulas and Equations
- **Dot Product Formula**: $\mathbf{u} \cdot \mathbf{v} = u_1v_1 + u_2v_2 + u_3v_3$
- **Magnitude of a Vector**: $|\mathbf{u}| = \sqrt{u_1^2 + u_2^2 + u_3^2}$
- **Unit Vector Formula**: $\mathbf{u}_{\text{unit}} = \dfrac{\mathbf{u}}{|\mathbf{u}|}$ for the unit vector in the direction of $\mathbf{u}$
### Common Pitfalls and Misconceptions
- **Orthogonality Condition**: Ensure that the orthogonal vector’s dot product with given vectors is zero.
- **Normalization**: Always normalize the vector to confirm it is a unit vector.
- **Sign Precision**: Pay attention to signs when solving for direction.
### Additional Theoretical Insights
- **Cross Product**: In 3D space, the cross product can find a vector orthogonal to two others.
</knowledge_base_end>
<solution_start>
To find a unit vector c such that a·c = 0 and b·c = 0, we need to find a vector that is orthogonal
to both a and b. This can be achieved by taking the cross product of a and b. Given:
**a = ⟨1, 2, −1⟩,** **b = ⟨3, −1, 2⟩**
First, compute the cross product a × b:
$$\mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 1 & 2 & -1 \\ 3 & -1 & 2 \end{vmatrix}$$
Expanding the determinant:
$$\mathbf{a} \times \mathbf{b} = \mathbf{i}\,(2 \cdot 2 - (-1)\cdot(-1)) - \mathbf{j}\,(1 \cdot 2 - (-1)\cdot 3) + \mathbf{k}\,(1 \cdot (-1) - 2 \cdot 3)$$
$$= \mathbf{i}(4 - 1) - \mathbf{j}(2 + 3) + \mathbf{k}(-1 - 6) = 3\mathbf{i} - 5\mathbf{j} - 7\mathbf{k}$$
So, the vector a × b is ⟨3, −5, −7⟩. Next, we need to convert this vector into a unit vector. To
do this, we find the magnitude of a × b:
$$\|\mathbf{a} \times \mathbf{b}\| = \sqrt{3^2 + (-5)^2 + (-7)^2} = \sqrt{9 + 25 + 49} = \sqrt{83}$$
The unit vector c is then:
$$\mathbf{c} = \frac{1}{\sqrt{83}}\,\langle 3, -5, -7 \rangle$$
Expressing c in terms of i, j, and k, the final answer is:
$$\mathbf{c} = \frac{3}{\sqrt{83}}\,\mathbf{i} - \frac{5}{\sqrt{83}}\,\mathbf{j} - \frac{7}{\sqrt{83}}\,\mathbf{k}$$
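A supplementary numeric check (editor-added, not part of the sampled response; NumPy is assumed available) confirms orthogonality and unit length:

```python
# Editor-added sketch: verify that c = (a x b) / |a x b| is a unit vector
# orthogonal to both a and b.
import numpy as np

a = np.array([1.0, 2.0, -1.0])
b = np.array([3.0, -1.0, 2.0])

cross = np.cross(a, b)             # expected [3, -5, -7]
c = cross / np.linalg.norm(cross)  # normalize to unit length

assert np.isclose(a @ c, 0.0)
assert np.isclose(b @ c, 0.0)
assert np.isclose(np.linalg.norm(c), 1.0)
print("a x b =", cross, " c =", c)
```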
-----
**Example D.4: Response of CoT and PoT Sample**
**Question: In a clinical trial, the probability of a patient experiencing side effect A is 0.3, and**
the probability of experiencing side effect B is 0.4. If the probability of experiencing at least one
of these side effects is 0.58, what is the probability of experiencing both side effects?
**Response:**
### Step-by-Step Solution
1. Define the given probabilities:
- Probability of experiencing side effect A, P (A) = 0.3
- Probability of experiencing side effect B, P (B) = 0.4
- Probability of experiencing at least one of these side effects, $P(A \cup B) = 0.58$
2. Use the principle of inclusion-exclusion for probabilities:
The probability of experiencing at least one of the side effects can be expressed as:
$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$
where $P(A \cap B)$ is the probability of experiencing both side effects.
3. Substitute the known values into the inclusion-exclusion formula:
$$0.58 = 0.3 + 0.4 - P(A \cap B)$$
4. Solve for $P(A \cap B)$:
$$0.58 = 0.7 - P(A \cap B)$$
$$P(A \cap B) = 0.7 - 0.58$$
$$P(A \cap B) = 0.12$$
### Final Answer
0.12
'''Python Code
# Given probabilities
P_A = 0.3
P_B = 0.4
P_A_union_B = 0.58

# Calculate the probability of experiencing both side effects
result = P_A + P_B - P_A_union_B

# Print the result
print(f"The probability is: {result}")
'''
This Python code calculates the probability of experiencing both side effects using the inclusion-exclusion principle and prints the result.
CONSISTENCY ACCURACY ABLATION
Table 8: Accuracy with or without consistency.

| data | S1 Consistency | Model | GSM8K | Math | TheoremQA | College Math | Olympiad | AIME 2024 | AMC 2023 | TabMWP |
|---|---|---|---|---|---|---|---|---|---|---|
| Seed+S1 | ✗ | DSMath-7B | 76.1 | 52.6 | 31.0 | 35.7 | 20.9 | 0/30 | 11/40 | 80.1 |
| Seed+S1 | ✓ | DSMath-7B | 84.0 (+7.9) | 58.6 (+6.0) | 34.5 (+3.5) | 36.9 (+1.2) | 25.6 (+4.7) | 2/30 (+6.7) | 13/40 (+5.0) | 85.3 (+5.2) |
| Seed+S1+S2+S3 | ✗ | DSMath-7B | 77.8 | 56.7 | 32.4 | 36.9 | 24.0 | 1/30 | 10/40 | 80.0 |
| Seed+S1+S2+S3 | ✓ | DSMath-7B | 83.3 (+5.5) | 62.4 (+5.7) | 34.9 (+2.5) | 45.0 (+8.1) | 28.9 (+4.9) | 2/30 (+3.3) | 11/40 (+2.5) | 85.7 (+5.7) |
-----
| [] | 2024-10-04T00:00:00 | ICLR 2025 Submission | false | 0 | 0 | null | https://openreview.net/forum?id=hFFAg5Dmw9 | null | null |
What Are Step-Level Reward Models Rewarding? Counterintuitive Findings from MCTS-Boosted Mathematical Reasoning | Step-level reward models (SRMs) can significantly enhance mathematical reasoning performance through process supervision or step-level preference alignment based on reinforcement learning. The performance of SRMs is pivotal, as they serve as critical guidelines, ensuring that each step in the reasoning process is aligned with desired outcomes. Recently, AlphaZero-like methods, where Monte Carlo Tree Search (MCTS) is employed for automatic step-level preference annotation, have proven particularly effective. However, the precise mechanisms behind the success of SRMs remain largely unexplored. To address this gap, this study delves into the counterintuitive aspects of SRMs, particularly focusing on MCTS-based approaches. Our findings reveal that the removal of natural language descriptions of thought processes has minimal impact on the efficacy of SRMs. Furthermore, we demonstrate that SRMs are adept at assessing the complex logical coherence present in mathematical language while having difficulty in natural language. These insights provide a nuanced understanding of the core elements that drive effective step-level reward modeling in mathematical reasoning. By shedding light on these mechanisms, this study offers valuable guidance for developing more efficient and streamlined SRMs, which can be achieved by focusing on the crucial parts of mathematical reasoning. | The findings reveal that the removal of natural language descriptions of thought processes has minimal impact on the efficacy of SRMs, and it is demonstrated that SRMs are adept at assessing the complex logical coherence present in mathematical language while having difficulty in natural language. | ## What are Step-Level Reward Models Rewarding? Counterintuitive Findings from MCTS-boosted Mathematical Reasoning
#### Yiran Ma[1*], Zui Chen[2*], Tianqiao Liu[3], Mi Tian[3], Zhuo Liu[4], Zitao Liu[5†], Weiqi Luo[5],
1Zhejiang University, Hangzhou, China
2ShanghaiTech University, Shanghai, China
3TAL Education Group, Beijing, China
4University of Rochester, New York, USA
5Jinan University, Guangzhou, China
[email protected], [email protected], {liutianqiao1, tianmi}@tal.com, [email protected],
{liuzitao, lwq}@jnu.edu.cn
**Abstract**
Step-level reward models (SRMs) can significantly enhance
mathematical reasoning performance through process supervision or step-level preference alignment based on reinforcement learning. The performance of SRMs is pivotal, as they
serve as critical guidelines, ensuring that each step in the reasoning process is aligned with desired outcomes. Recently,
AlphaZero-like methods, where Monte Carlo Tree Search
(MCTS) is employed for automatic step-level preference annotation, have proven particularly effective. However, the
precise mechanisms behind the success of SRMs remain
largely unexplored. To address this gap, this study delves into
the counterintuitive aspects of SRMs, particularly focusing
on MCTS-based approaches. Our findings reveal that the removal of natural language descriptions of thought processes
has minimal impact on the efficacy of SRMs. Furthermore,
we demonstrate that SRMs are adept at assessing the complex
logical coherence present in mathematical language while
having difficulty in natural language. These insights provide
a nuanced understanding of the core elements that drive effective step-level reward modeling in mathematical reasoning. By shedding light on these mechanisms, this study offers
valuable guidance for developing more efficient and streamlined SRMs, which can be achieved by focusing on the crucial
parts of mathematical reasoning.
#### Introduction
Large Language Models (LLMs) have demonstrated their
remarkable capabilities across a wide range of tasks, such
as information extraction, natural language understanding,
etc (Zhao et al. 2023), totally revolutionizing the deep learning community. Among these capabilities, reasoning stands
out as a critical area of focus, especially mathematical reasoning, which needs to be further improved due to its complex nature. Numerous studies have shown that multi-step
reasoning often facilitated through Chain-of-Thought (CoT)
*These authors contributed equally. Work was done during their
internships at TAL Education Group.
†Zitao Liu is the corresponding author.
Copyright © 2025, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
[Figure 1 example] Math Problem: There are 6 students playing tennis and twice that number playing volleyball. There are 16 boys and 22 girls playing soccer. If each student only participates in one group, how many students are there in total?
Thought: The problem involves calculating the total number of students by adding up the number of students in each group. Firstly, we need to find out how many students are playing volleyball. Since there are 6 students playing tennis, and it's stated that there are twice that number playing volleyball, we multiply 6 by 2.
Math Expression: Students Playing Volleyball = 6 * 2 = 12
Thought: Now you can answer the problem in this step. We also have the number of boys and girls playing soccer, which is 16 and 22 respectively. To find the total number of students, we add up the number of students in all three groups.
Figure 1: Each step in an LLM’s process of solving mathematical problems can be divided into the thought process
and the execution of corresponding calculations. We find
that natural language descriptions of the thought processes
are not essential for step-level reward modeling.
prompting, can significantly enhance model performance on
reasoning tasks (Zhou et al. 2023; Besta et al. 2024; Ding
et al. 2023; Yao et al. 2024; Wang et al. 2022; Wei et al.
2022; Zheng et al. 2024; Li et al. 2024; Zhan et al. 2024).
Recently, guided tree-search methods further improved
reasoning performance by exploring various reasoning paths
through online simulation to identify the optimal solution
paths (Hao et al. 2023, 2024; Feng et al. 2023). Although
a better reasoning path leads to a better performance, the
length of these reasoning chains leads to an exponential increase in the search space, resulting in substantial computational costs. Given the high expense of LLM inference, performing an online tree search for each reasoning problem
introduces repeated and unnecessary overhead.
To address this issue, step-level reward models (SRM)
-----
was proposed to improve search efficiency. Lightman et al.
(2023) introduced the process reward model (PRM), which
employs human-annotated step-level scores for reward modeling, and Ma et al. (2023) further demonstrated the effectiveness of SRMs in math reasoning and coding tasks. Then,
Math-Shepherd (Wang et al. 2024), systematically generates step-level preference data through exhaustive reasoning process traversal to train reward models and reinforce
the model’s capabilities. More recently, inspired by AlphaZero, Monte Carlo Tree Search (MCTS) (Xie et al. 2024;
Chen et al. 2024a,b) was then used for collecting preferences more efficiently because of its capability of balancing
exploration and exploitation. These trained SRMs can effectively enhance reasoning performance by either assisting
step-level preference alignment with proximal policy optimization (PPO) during training stage or serving as step verifiers during inference stage.
Despite the significant achievements in mathematical reasoning performance achieved by the SRMs constructed by
MCTS-based method, the exact workings of these reward
models and what they are truly rewarding remain unclear.
Brain and cognitive scientists have argued that diverse thinking and reasoning processes do not necessarily rely on natural language. (Fedorenko, Piantadosi, and Gibson 2024). A
skilled human mathematician, for instance, can determine
whether a mathematical expression is logically coherent and
numerically correct without the participation of the natural
language. Building on this idea, our research explores a similar hypothesis for LLMs: that natural language descrip**tions of thought processes are not essential for mathe-**
**matical reasoning within these models. We suppose that**
LLMs can be trained to recognize preferences for mathematical language directly during problem-solving, without
relying on natural language descriptions. This implies that
LLMs might be capable of understanding and processing
mathematical reasoning through the intrinsic structure of
mathematical language, potentially leading to more efficient
and focused training methods that bypass the need for natural language explanations. Furthermore, it is believed that
incorrect solutions often arise from wrong mathematical calculations or logical errors (Zhang et al. 2024), with the latter
being more challenging (Chen et al. 2024a). Therefore, we
further investigate the effectiveness of SRMs in evaluating
logical coherence in pure mathematical language, demonstrating that the improvements are not merely the result of
encouraging correct calculations within a single step. Additionally, and somewhat surprisingly, we found that SRMs
struggle to learn how to evaluate logical coherence in natural
language. This will further support that natural language is
not necessary for step-level reward modeling.
To investigate the respective roles of natural language and
mathematical language in step-level reward modeling, we
decompose each step of the reasoning path into two components: natural language descriptions of thought processes
and math expressions (Figure 1). The ablation studies are
conducted by selectively removing different parts from the
inputs of the SRMs. This decomposition mirrors the human
problem-solving process in mathematics, which typically involves an initial phase of thinking through the problem, fol
lowed by the execution of calculations based on that thought
process. The thought processes include the strategy to be
taken in that step, while the calculations are the executions
of the thought processes. In other words, our decomposition aims to separate the natural language (composing the
‘thoughts’) from the mathematical expressions (contained in
the execution of ‘thoughts’). This framework aims to foster
a deeper understanding of the role of natural language for
step-level reward modeling.
To summarize, our experiments support that SRMs appear
to have some intrinsic affinity for mathematical expression,
not natural language. Specifically, we propose the following
key insights.
1. Natural language descriptions of thought processes are
not necessary for successful step-level reward modeling.
2. SRMs not only promote accurate calculations within individual steps but also effectively assess the challenging
logical coherence in mathematical language.
3. Assessing logical coherence in natural language is difficult, and SRMs often struggle with this task.
#### Preliminaries
##### Markov Decision Process
**Definition** A Markov Decision Process (MDP) is a mathematical framework used to model decision-making problems. This framework is fundamental for addressing a wide
range of reinforcement learning (RL) problems where the
outcomes are partially random and partially controllable. An
MDP is defined by a tuple (S, A, P, R, γ), where:
- S is the set of states.
- A is the set of actions.
- P is the transition probability function, P (st+1|st, at),
which defines the probability of transitioning to state
_st+1 given the current state st and action at._
- R is the reward function, R(st, at, st+1), which defines
the reward received after transitioning from state st to
state st+1 by taking action at.
- γ is the discount factor, which determines the importance
of future rewards.
**Bellman Expectation Equation** For the state value function
$V(s)$, the Bellman Expectation Equation is:
$$V^{\pi}(s) = \mathbb{E}_{a \sim \pi(\cdot|s)}\left[\mathbb{E}_{s' \sim P(\cdot|s,a)}\left[R(s, a, s') + V^{\pi}(s')\right]\right]$$
For the state-action value function $Q(s, a)$, the Bellman Expectation Equation is:
$$Q^{\pi}(s, a) = \mathbb{E}_{s' \sim P(\cdot|s,a)}\left[R(s, a, s') + \mathbb{E}_{a' \sim \pi(\cdot|s')}\left[Q^{\pi}(s', a')\right]\right]$$
**Optimal Value Functions** The optimal value functions
are defined as:
$$V^{*}(s) = \max_{\pi} V_{\pi}(s), \qquad Q^{*}(s, a) = \max_{\pi} Q_{\pi}(s, a) \tag{1}$$
Therefore, the relationship between the optimal value functions and the Bellman Optimality Equation is:
$$V^{*}(s) = \max_{a} Q^{*}(s, a) \tag{2}$$
-----
Figure 2: Illustration of the role of SRMs in mathematical reasoning and the SRMs with different input structures we investigate.
#### Setup
##### LLM’s Math Reasoning as MDP: Our Definition
Figure 2 shows the mathematical reasoning process with
each step decomposed into thought and math expressions.
Specifically, our MDP definition is as follows:
MDP = (S, A, P, R)
where:
- **State** The state space $S$ consists of states defined as $s_i = (T_k, E_k)_{k=0}^{i}$, representing a sequence of thoughts $T_k$ and
equations $E_k$ up to step $i$.
- **Action** The action space $A$ consists of actions defined as
$a_i = T_{i+1}$, representing the natural language descriptions of the subsequent thought proposed by the LLM.
- **State Transition** $P(s_{i+1}|s_i, a_i)$ is the state transition
function, defining the probability of transitioning to state
$s_{i+1}$ from state $s_i$ after taking action $a_i$. This function
is implemented by the LLM generating the corresponding math expression $E_{i+1}$ based on the next thought
$a_i = T_{i+1}$ and the current state $s_i = (T_k, E_k)_{k=0}^{i}$.
- **Reward Function** $R(s_i, a_i, s_{i+1})$ is the reward function,
defining the immediate reward received after transitioning to state $s_{i+1} = (T_k, E_k)_{k=0}^{i+1}$ from state $s_i$ by taking
action $a_i$. We define the reward up to state $s_{i+1}$ based on
whether it can lead to the correct final answer:
$$R(s_i, a_i, s_{i+1}) = \begin{cases} 1, & \text{final answer is correct} \\ 0, & \text{final answer is incorrect} \end{cases} \tag{3}$$
Additionally, the policy $\pi(a_i|s_i)$ is implemented by the LLM
generating the thought of the next step $a_i = T_{i+1}$ based
on the current state $s_i = (T_k, E_k)_{k=0}^{i}$. According to Equation (1), the goal of an agent is to maximize $V_\pi(s_i)$ or
$Q_\pi(s_i, a)$ by generating the correct thoughts $T$ in each step.
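To make the formulation concrete, here is a minimal, editor-added sketch (not from the paper) of how a state and the binary outcome reward could be represented; the class and function names are illustrative assumptions.

```python
# Editor-added sketch: a state s_i as the sequence of (thought, expression) steps,
# plus the binary outcome reward of Equation (3).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class State:
    problem: str
    steps: List[Tuple[str, str]] = field(default_factory=list)  # (thought T_k, expression E_k)

    def extend(self, thought: str, expression: str) -> "State":
        """State transition: append the next (thought, expression) pair."""
        return State(self.problem, self.steps + [(thought, expression)])

def reward(final_answer: str, gold_answer: str) -> int:
    # R(s_i, a_i, s_{i+1}) = 1 if the trajectory reaches the correct final answer, else 0.
    return int(final_answer.strip() == gold_answer.strip())
```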
In summary, a language model plays a dual role in the
MDP framework:
1. As an Agent The LLM is responsible for making decisions by selecting appropriate actions (next thoughts
$T_{i+1}$) at each state, following the policy $\pi(a_i|s_i)$.
2. As a World Model The LLM also acts as the world
model $P(s_{i+1}|s_i, a_i)$ by predicting action outcomes
(state transitions) using its internal knowledge and training data. It simulates the environment of mathematical reasoning by executing thought $T_{i+1}$ through corresponding calculations, thus providing the prediction of
new states $s_{i+1}$.
##### MCTS for Step-Level Preference Collection
Understanding the natural correspondence between math
reasoning and MDP, we can readily use MCTS for efficient step-level preference collection. The MCTS starts
from a root node $s_0$, which is a math problem in mathematical reasoning tasks. Then, each new node corresponds
to a state update. Each iteration of MCTS can be divided
into four phases: Selection, Expansion, Rollout, and Back-propagation.
1. Selection. The selection phase in MCTS involves traversing the tree from the root node $s_0$ (the initial math problem) to a leaf node using a selection policy. This policy,
typically the Upper Confidence Bound for Trees (UCT)
formula, balances exploration and exploitation. At node
$s_i$, the next node is chosen by:
$$s^{*}_{i+1} = \arg\max_{s_{i+1}} \left[ \frac{c(s_{i+1})}{N(s_{i+1})} + w_{\text{exp}} \cdot \sqrt{\frac{\log N(s_i)}{N(s_{i+1})}} \right], \tag{4}$$
where $c(s_{i+1})$ is the correct counts, $N(s_i)$ and $N(s_{i+1})$
are visit counts, and $w_{\text{exp}}$ balances exploration and exploitation. This process continues until an unexplored
node is found.
2. Expansion. Upon reaching a leaf node, $n$ new candidate
actions (thoughts) $\{a_i^j \mid j = 1, \ldots, n\}$ are generated by
the agent given the current state $s_i$. Given the candidate
-----
actions (thoughts), the world model will execute them
through mathematical calculations, constructing the new
candidate states $\{s_i^j \mid j = 1, \ldots, n\}$. These candidate
states are added as child nodes to the current node to expand the tree, allowing for a broader exploration of potential problem-solving paths.
3. Rollout. The rollout phase simulates the reasoning process from the newly expanded node to a terminal state or
predefined maximum depth. The score of a node is then
obtained according to Equation (3). This procedure estimates the scores of the new nodes according to the simulation results, informing the back-propagation phase.
4. Back-propagation. Results from the rollout are propagated back up the tree to update values and visit counts of
each node. Starting from the final state, the effectiveness
of the problem-solving process updates the value V (s) of
each state. This procedure improves the selection policy
for future iterations.
After completing MCTS, step-level preference pairs can be
gathered by comparing the values of the nodes in each tree.
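As a concrete illustration of the selection rule in Equation (4), the following is a minimal, editor-added Python sketch (not the authors' implementation); the node attributes (`correct`, `visits`, `children`) are assumed names.

```python
# Editor-added sketch of UCT-style selection (Equation 4).
import math

def uct_score(child, parent_visits: int, w_exp: float = 1.0) -> float:
    # child.correct: rollouts through this node that reached a correct final answer, c(s_{i+1})
    # child.visits:  visit count N(s_{i+1}); parent_visits: N(s_i)
    if child.visits == 0:
        return float("inf")  # always try unexplored children first
    exploit = child.correct / child.visits
    explore = w_exp * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def select_child(node, w_exp: float = 1.0):
    return max(node.children, key=lambda c: uct_score(c, node.visits, w_exp))
```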
##### Step-level Reward Modeling
After collecting all the preference pairs, step-level reward
models can be constructed through contrastive learning.
Based on our MDP definition, an SRM is regarded as the
action-value function Q(s, a) or the value function V (s).
Specifically, we investigate different reward models for ablation studies, where reward models take different inputs
to evaluate the ongoing reasoning process. Accordingly, we
define four reward models (Figure 2-right) for the ablation
study:
- Full-Context Step-level Reward Model (FC-SRM)
This model takes both the thoughts and math expressions
of the current state as input.
$$V_1(s_i) = V_1\big((T_k, E_k)_{k=0}^{i}\big) \tag{5}$$
- Math-Only Step-level Reward Model (MO-SRM)
This model takes only the math expressions of the current state as input, excluding the natural language descriptions of thought processes.
$$V_2(s_i) = V_2\big((E_k)_{k=0}^{i}\big) \tag{6}$$
- Single-Step Math-Only Step-level Reward Model
**(SSMO-SRM) This model takes only the newest math**
expression of the ongoing reasoning process as input, excluding the natural language and all the previous math
expressions.
$$V_3(s_i) = V_3(E_i) \tag{7}$$
- Next-Thought Step-level Reward Model (NT-SRM)
This model takes both the thoughts and math expressions of the current state as input, and evaluates the next
thought. According to our definition, the next thought is
the action taken by the agent. Thus this reward model
is the action-value function under our MDP definition of
mathematical reasoning.
$$Q(s_i, a_i) = Q\big((T_k, E_k)_{k=0}^{i}, T_{i+1}\big) \tag{8}$$
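To illustrate the four input formats defined above, here is a small, editor-added sketch (not the authors' code) that builds each SRM input from a list of (thought, expression) steps; the string formatting and separators are assumptions.

```python
# Editor-added sketch: building the inputs of the four SRM variants from a partial solution.
from typing import List, Tuple

def build_srm_input(problem: str,
                    steps: List[Tuple[str, str]],   # (thought T_k, expression E_k) for k = 0..i
                    next_thought: str = "",
                    variant: str = "FC") -> str:
    exprs = [e for _, e in steps]
    if variant == "FC":      # Full-Context: all thoughts and expressions, Eq. (5)
        body = "\n".join(f"{t}\n{e}" for t, e in steps)
    elif variant == "MO":    # Math-Only: expressions only, Eq. (6)
        body = "\n".join(exprs)
    elif variant == "SSMO":  # Single-Step Math-Only: newest expression only, Eq. (7)
        body = exprs[-1] if exprs else ""
    elif variant == "NT":    # Next-Thought: full context plus the candidate next thought, Eq. (8)
        body = "\n".join(f"{t}\n{e}" for t, e in steps) + "\n" + next_thought
    else:
        raise ValueError(f"unknown variant: {variant}")
    return f"{problem}\n{body}"
```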
##### Beam Search with Step-Level Reward Model
Given the SRMs trained on the preference data, it is commonly used for step-level preference alignment to update the
policy. The purpose of this procedure is to generate the best
action through the updated policy π[′], thereby reducing the
overhead caused by online MCTS. It is also possible to update the world model P with these preference pairs as better
accuracy indicates better mathematical performance.
Algorithm 1: Beam Search Algorithm
**Require: Initial state s_0, beam size B, candidate count c**
1: Initialize beam B ← {s_0}
2: while B is not empty do
3:   Initialize empty list B_next ← ∅
4:   for each state s_i in B do
5:     Generate a set of candidate actions {a_i^1, a_i^2, ..., a_i^c} based on s_i
6:     for each action a_i^j in {a_i^1, a_i^2, ..., a_i^c} do
7:       Compute the next state s_{i+1}^j ← P(s_{i+1} | s_i, a_i^j)
8:       Evaluate the score of s_{i+1}^j
9:       Add s_{i+1}^j to B_next
10:    end for
11:  end for
12:  Sort B_next by score and keep the top B states
13:  Update beam B ← top B states from B_next
14: end while
15: return the best state from the final beam
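A minimal, editor-added Python rendering of Algorithm 1 follows (not the authors' implementation); `propose_actions`, `transition`, `srm_score`, and `is_terminal` stand in for the agent, the world model, the SRM, and a stopping check, and are assumptions.

```python
# Editor-added sketch of Algorithm 1: beam search scored by an SRM.
def beam_search(s0, propose_actions, transition, srm_score, is_terminal,
                beam_size: int = 1, n_candidates: int = 5, max_depth: int = 8):
    beam = [s0]
    completed = []
    for _ in range(max_depth):
        frontier = []
        for s in beam:
            for a in propose_actions(s, n_candidates):   # agent: sample c next thoughts
                s_next = transition(s, a)                 # world model: execute the thought
                if is_terminal(s_next):
                    completed.append((srm_score(s_next), s_next))
                else:
                    frontier.append((srm_score(s_next), s_next))
        if not frontier:
            break
        frontier.sort(key=lambda pair: pair[0], reverse=True)
        beam = [s for _, s in frontier[:beam_size]]       # keep the top-B states
    candidates = completed if completed else [(srm_score(s), s) for s in beam]
    return max(candidates, key=lambda pair: pair[0])[1]
```

Setting `beam_size=1` recovers the greedy search (GS) used in the experiments.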
As this study focuses on the SRMs, our experiments
will not include the preference alignment procedure. Instead, we can use the SRMs as the scoring function during
beam search (BS) (Algorithm 1) for simplification. This simplification excludes potential uncertainties in the alignment
process, providing a more straightforward understanding of
SRMs’ effectiveness. Notably, setting B = 1 makes BS
**effectively become greedy search (GS).**
The greedy search can be regarded as a reasoning process supervised by an SRM (Figure 2-left). Indeed, with an
infinite number of samples, the optimal actions and states
identified through the policy π and the world model P will
converge to the optimal actions and states similar to those
generated by the optimal policy π[∗] in Equation (1), respectively.
$$\lim_{n \to \infty} P\Big(\arg\max_{\{a_t\}_{t=0}^{n}} Q(s, a_t) = \arg\max_{a \in A_{\pi}(s)} Q(s, a)\Big) = 1 \tag{9}$$
where $a_t \sim \pi(a|s)$ and $A_{\pi}(s)$ denotes the space of actions generated by the policy $\pi$ given state $s$. Similarly,
for states, we also have
$$\lim_{n \to \infty} P\Big(\arg\max_{\{s'_t\}_{t=0}^{n}} V(s'_t) = \arg\max_{s' \in S(s,a)} V(s')\Big) = 1 \tag{10}$$
where $s_t \sim \mathbb{E}_{a_{t-1} \sim \pi(a|s_{t-1})} P(s|s_{t-1}, a_{t-1})$.
#### Experiments
##### Implementation Details
**Datasets** To construct step-level preference pairs through
MCTS, we use the math problems and their corresponding
-----
| Agent & World Model: Llama-3-8B-Instruct | Historical Thoughts | Historical Equations | Next Thoughts | Next Equations | GSM8K | MATH |
|---|---|---|---|---|---|---|
| Pass@1 (3-shots) | | | | | 78.47 (+0.00) | 31.16 (+0.00) |
| +GS w/ SRM (DeepSeek-Math-7B-Base) | | | | | | |
| Full-Context SRM | ✓ | ✓ | ✓ | ✓ | 86.20 (+7.73) | 38.58 (+7.42) |
| Math-Only SRM | ✗ | ✓ | ✗ | ✓ | 85.82 (+7.35) | 39.64 (+8.48) |
| Single-Step Math-Only SRM | ✗ | ✗ | ✗ | ✓ | 82.11 (+3.64) | 37.46 (+6.30) |
| Next-Thought SRM | ✓ | ✓ | ✓ | ✗ | 79.38 (+0.91) | 30.98 (-0.18) |
| +GS w/ SRM (Qwen2-7B) | | | | | | |
| Full-Context SRM | ✓ | ✓ | ✓ | ✓ | 82.94 (+4.47) | 35.58 (+4.42) |
| Math-Only SRM | ✗ | ✓ | ✗ | ✓ | 83.78 (+5.31) | 35.10 (+3.94) |
| Single-Step Math-Only SRM | ✗ | ✗ | ✗ | ✓ | 81.65 (+3.18) | 33.08 (+1.92) |
| Next-Thought SRM | ✓ | ✓ | ✓ | ✗ | 81.73 (+3.26) | 31.40 (+0.24) |

Table 1: SRMs act as step-level scoring functions during GS, reported as Accuracy (Gain) %. Sample c = 5 candidates of the subsequent step at each node and
use beam size B = 1 (greedy search). The agent and the environment model is Llama-3-8B-Instruct. The reward models are
trained based on DeepSeek-Math-7B-Base or Qwen2-7B.
final answers from the training data of GSM8K (Cobbe et al.
2021) and MATH (Hendrycks et al. 2021). The accuracies
are evaluated on the test data.
**Models** The reasoning process is conducted by the dialogue between two LLMs. We use the Llama-3-8B-Instruct
(Dubey et al. 2024) as both the agent and world model in
MCTS because of its excellent ability to follow instructions.
**Prompt** One LLM (as agent) is instructed to generate natural language descriptions of thoughts, and the other (as
world model) is instructed to execute the thoughts. For specific prompts, see Appendix.
**Baseline** We use Llama-3-8B-Instruct to construct the
‘Pass@1’ baseline based on our prompt with 3 shots.
**MCTS for Step-Level Preference Collection** The MCTS
requires the agent sampling n = 6 candidate actions at each
expansion phase and iterates 500 times on each problem to
evaluate the quality of each node. Notably, to avoid the influence of the variation of answer format, we use a supervised
fine-tuned (SFT) model based on DeepSeek-Math-7B-Base
to assert the correctness of the solution after each rollout
during the search. This model is also used in our evaluation
pipeline. To strengthen the preferences, only the preference
pairs whose difference of value is greater than 0.7 are assumed valid. For detailed hyperparameters, see Appendix.
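As a rough, editor-added illustration of this filtering step (not the authors' code), sibling nodes can be turned into preference pairs only when their value gap exceeds the threshold:

```python
# Editor-added sketch: keep only step-level preference pairs with a value gap > 0.7.
def collect_preference_pairs(siblings, threshold: float = 0.7):
    """siblings: list of (state_text, value) for children of the same parent node."""
    pairs = []
    for i, (state_a, value_a) in enumerate(siblings):
        for state_b, value_b in siblings[i + 1:]:
            if abs(value_a - value_b) > threshold:
                chosen, rejected = (state_a, state_b) if value_a > value_b else (state_b, state_a)
                pairs.append({"chosen": chosen, "rejected": rejected})
    return pairs
```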
**Reward Training** DeepSeek-Math-7B-Base (Shao et al.
2024) or Qwen2-7B (Yang et al. 2024) is used as the base
model for SRM training. Each SRM is trained on two instances, with each instance equipped with 8 A100 GPUs.
For detailed hyperparameters, see Appendix.
##### Main Results
After collecting all the step-level preference pairs through
MCTS, datasets are constructed for FC-SRM, MO-SRM,
SSMO-SRM, and NT-SRM training by selecting the corresponding components in each piece of data. The training
curves are shown in Figure 3. These SRMs are subsequently
used as scoring functions in greedy search, the accuracy and
absolute gains over baseline are reported in Table 1. The
analyses will be included in the following sections.
##### Do we really need natural language?
Intuitively, one might expect that natural language descriptions provide essential contextual information and aid
SRMs’ cognitive understanding. The SRMs with different
input formats: full-context (FC) and math-only (MO) are
trained to investigate this aspect.
Figure 4: SRMs that take only mathematical expressions as input demonstrate the same ability during the greedy search
as those that take the full context as input. The boxplot is obtained
through 20 runs over the dataset.
**Removing natural language has a minimal effect on step-level reward modeling.** FC-SRMs and MO-SRMs exhibit
very similar performance in both preference prediction accuracy and greedy search, suggesting that successful step-level
reward modeling is not contingent upon natural language descriptions, which is contrary to intuition. Even without the
natural language descriptions of thoughts at each step, the
-----
Figure 3: Effect of natural language descriptions and math expressions on step-level reward modeling. The agent and the
environment model is Llama-3-8B-Instruct. The reward models are trained based on Qwen2-7B or Deepseek-Math-7B-Base.
(Note that the ‘accuracy’ here is the accuracy of preference during reward training.)
MO-SRMs can still be successfully trained (Figure 3). Table 1 and Figure 4 further show the performance of these
SRMs when used as scoring functions during greedy search.
In setups such as MATH with DeepSeek-Math-7B-Base
as the base model of SRM, the MO-SRM (39.64%) can
even outperform the FC-SRM (38.58%). We further conducted t-tests to provide a more detailed statistical comparison between the FC-SRMs and MO-SRMs across different datasets and base models. For the GSM8K dataset,
the t-test results are t = −0.18, p = 0.86 for Qwen2-7B,
and t = −0.14, p = 0.89 for deepseek-math-7b-base. For
the MATH dataset, the results are t = 0.79, p = 0.44 for
Qwen2-7B, and t = 0.77, p = 0.45 for deepseek-math-7b-base. In all cases, the p-values are greater than 0.05, indicating that the differences in performance between the FC-SRM and MO-SRM are not statistically significant. These
results support the conclusion that omitting natural language
from the inputs of SRMs has negligible effects on the effectiveness of SRMs.
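A brief, editor-added sketch of how such a comparison could be computed with SciPy follows; the per-run accuracy values below are placeholders for illustration only, not the paper's data.

```python
# Editor-added sketch: two-sample t-test over per-run greedy-search accuracies
# of FC-SRM vs. MO-SRM (cf. the 20-run boxplots in Figure 4).
from scipy import stats

fc_srm_runs = [86.2, 85.9, 86.5, 85.7, 86.1]  # placeholder accuracies
mo_srm_runs = [85.8, 86.0, 85.6, 86.3, 85.9]  # placeholder accuracies

t_stat, p_value = stats.ttest_ind(fc_srm_runs, mo_srm_runs)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # p > 0.05 => difference not significant
```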
##### Can SRMs evaluate logical coherence in math language?
The success of MCTS-based methods is attributed to the
ability to avoid logical and numerical errors. It is commonly
believed that logical errors are more difficult to evaluate,
while MCTS-based methods are believed to be a competitive solution to this challenge by collecting such preferences. In
this section, we investigate the role of natural language and
mathematical language in assessing the logical coherence included in pure mathematical language by comparing SSMO-SRM, MO-SRM, and NT-SRM.
Specifically, if the contextual information in the input of
an SRM is useful, its performance should surpass that of
SSMO-SRM, which takes only the current step as input.
This ability is referred to as the model’s capacity to assess logical coherence, meaning it can determine whether a
subsequent step logically follows from the information and
conclusions derived in the previous context. The results are
shown in Table 1.
**LLMs can be trained to evaluate logical coherence in pure mathematical language.** For DeepSeek-Math-7B-Base, MO-SRM achieves an accuracy gain of +7.35% on
GSM8K and +8.48% on MATH, which is higher than the
gains of +3.64% and +6.30% observed for SSMO-SRM. Similarly, for Qwen2-7B, MO-SRM achieves an accuracy gain
of +5.31% on GSM8K and +3.94% on MATH, higher than
that of SSMO-SRM +3.18% and +1.92%. This substantial
difference indicates that MO-SRM, which considers the full
sequence of mathematical expressions, is effective at capturing logical coherence, rather than only focusing on the
-----
current step. This finding indicates that logical coherence in
mathematical language can be assessed by LLMs as SRMs.
**The SRMs have difficulties being trained to evaluate**
**the logical coherence in the form of natural language.**
Based on our MDP definition, even after the mathematical
expressions are stripped away from the current reasoning
step, the natural language descriptions still include the details of the actions to be executed. In other words, the SRMs
should be able to learn from these constructed preferences to
identify which actions are useful for problem-solving. However, as shown in Figure 3, the dashed curves illustrate the
challenges in training NT-SRMs, which were designed to
evaluate the quality of the next thoughts. The training processes across various datasets and base models consistently
demonstrate the difficulty in identifying preferences based
solely on the descriptions of thoughts during reward training. The results presented in Table 1 further highlight the
poor performance of NT-SRMs when used as scoring functions. These findings suggest that the implicit logic conveyed
through natural language is difficult for LLMs to capture and
evaluate effectively.
##### Additional Analysis
**Supervising a larger model** Despite being trained on
preference data generated by a smaller model, the MO-SRM
was able to effectively guide the reasoning process of a
larger model and achieve substantial improvements (+2.58%
on GSM8K and +5.64% on MATH) (Table 2). This further
illustrates the ability of the SRMs to focus exclusively on
mathematical language.

| Agent & World Model: Llama-3-70B-Instruct | GSM8K | MATH |
|---|---|---|
| Pass@1 (3-shots) | 90.37 (+0.00) | 48.48 (+0.00) |
| +GS w/ MO-SRM[1] | 92.95 (+2.58) | 54.12 (+5.64) |

Table 2: Supervise a larger model (Llama-3-70B-Instruct). Values are Accuracy (Gain) %.

**Effect of base models for MO-SRM** The choice of SRM
base models impacts performance (Figure 5), while this
effect doesn't appear to be entirely related to the base
model's mathematical abilities. Despite its excellent mathematical capabilities, Llama-3-8B surprisingly underperforms
Llemma-7B (Azerbayev et al. 2023), Qwen2-7B, and DeepSeek-Math-7B-Base, suggesting
that factors beyond just original mathematical ability are at
play. This might be due to the challenges in self-assessment
or other reasons to be explored.

Figure 5: The performance of SRM is affected by the ability
of the base model.

**Effect of B and c on beam search** Increasing the beam
size B and the candidate count c will slightly
improve accuracy, but this improvement will eventually
plateau, as shown in Table 3.

| Agent & World Model: Llama-3-8B-Instruct, +BS w/ MO-SRM[1] | GSM8K | MATH |
|---|---|---|
| B = 1, c = 5 | 85.82 | 39.64 |
| B = 1, c = 10 | 85.90 | 40.06 |
| B = 3, c = 10 | 88.17 | 40.24 |

Table 3: Effect of B and c on beam search.

#### Conclusion
Our investigation into the role of natural language and mathematical expressions in step-level reward modeling reveals
that natural language descriptions are not essential for the
success of these models. Through extensive experiments, we
demonstrated that reward models operating solely on mathematical expressions perform comparably to those that incorporate both natural language and math. Furthermore, the
difficulty in training models to evaluate the coherence of natural language thought processes underscores the challenges
LLMs face in capturing implicit logical structures through
language alone. We also found that the coherence of logical structure inherent in mathematical expressions can be
assessed by SRMs trained based on LLMs. Given the overhead of obtaining step-level rewards, these findings offer
new insights for developing more efficient and targeted reward models by isolating the most impactful components of
mathematical reasoning steps.

1 The MO-SRM here is trained based on DeepSeek-Math-7B-Base with preference data generated through MCTS performed by
Llama-3-8B-Instruct.
#### Acknowledgments
This work was supported in part by National Key R&D
Program of China, under Grant No. 2022YFC3303600
and in part by Key Laboratory of Smart Education of
Guangdong Higher Education Institutes, Jinan University
(2022LSYS003).
-----
#### References
Azerbayev, Z.; Schoelkopf, H.; Paster, K.; Dos Santos, M.;
McAleer, S. M.; Jiang, A. Q.; Deng, J.; Biderman, S.; and
Welleck, S. 2023. Llemma: An Open Language Model for
Mathematics. In The Twelfth International Conference on
_Learning Representations._
Besta, M.; Blach, N.; Kubicek, A.; Gerstenberger, R.;
Podstawski, M.; Gianinazzi, L.; Gajda, J.; Lehmann, T.;
Niewiadomski, H.; Nyczyk, P.; et al. 2024. Graph of
thoughts: Solving elaborate problems with large language
models. In Proceedings of the AAAI Conference on Artifi_cial Intelligence, volume 38, 17682–17690._
Chen, G.; Liao, M.; Li, C.; and Fan, K. 2024a. AlphaMath
Almost Zero: process Supervision without process. arXiv
_preprint arXiv:2405.03553._
Chen, G.; Liao, M.; Li, C.; and Fan, K. 2024b. Step-level
Value Preference Optimization for Mathematical Reasoning.
In Conference on Empirical Methods in Natural Language
_Processing._
Cobbe, K.; Kosaraju, V.; Bavarian, M.; Chen, M.; Jun, H.;
Kaiser, L.; Plappert, M.; Tworek, J.; Hilton, J.; Nakano, R.;
et al. 2021. Training verifiers to solve math word problems.
_arXiv preprint arXiv:2110.14168._
Ding, R.; Zhang, C.; Wang, L.; Xu, Y.; Ma, M.; Zhang, W.;
Qin, S.; Rajmohan, S.; Lin, Q.; and Zhang, D. 2023. Everything of thoughts: Defying the law of penrose triangle for
thought generation. arXiv preprint arXiv:2311.04254.
Dubey, A.; Jauhri, A.; Pandey, A.; Kadian, A.; Al-Dahle, A.;
Letman, A.; Mathur, A.; Schelten, A.; et al. 2024. The Llama
3 Herd of Models.
Fedorenko, E.; Piantadosi, S. T.; and Gibson, E. A. 2024.
Language is primarily a tool for communication rather than
thought. Nature, 630(8017): 575–586.
Feng, X.; Wan, Z.; Wen, M.; Wen, Y.; Zhang, W.; and Wang,
J. 2023. Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training. In NeurIPS 2023
_Foundation Models for Decision Making Workshop._
Hao, S.; Gu, Y.; Luo, H.; Liu, T.; Shao, X.; Wang, X.; Xie,
S.; Ma, H.; Samavedhi, A.; Gao, Q.; et al. 2024. LLM Reasoners: New Evaluation, Library, and Analysis of Step-by-Step Reasoning with Large Language Models. In ICLR 2024
_Workshop on Large Language Model (LLM) Agents._
Hao, S.; Gu, Y.; Ma, H.; Hong, J.; Wang, Z.; Wang, D.; and
Hu, Z. 2023. Reasoning with Language Model is Planning
with World Model. In Proceedings of the 2023 Confer_ence on Empirical Methods in Natural Language Process-_
_ing, 8154–8173._
Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart,
S.; Tang, E.; Song, D.; and Steinhardt, J. 2021. Measuring Mathematical Problem Solving With the MATH Dataset.
In Thirty-fifth Conference on Neural Information Processing
_Systems Datasets and Benchmarks Track (Round 2)._
Li, X.; Bai, Y.; Guo, T.; Liu, Z.; Huang, Y.; Zhao, X.; Xia,
F.; Luo, W.; and Weng, J. 2024. Enhancing Length Generalization for Attention Based Knowledge Tracing Models
with Linear Biases. In 33rd International Joint Conference
_on Artificial Intelligence, IJCAI 2024, 5918–5926. Interna-_
tional Joint Conferences on Artificial Intelligence.
Lightman, H.; Kosaraju, V.; Burda, Y.; Edwards, H.; Baker,
B.; Lee, T.; Leike, J.; Schulman, J.; Sutskever, I.; and Cobbe,
K. 2023. Let’s Verify Step by Step. In The Twelfth Interna_tional Conference on Learning Representations._
Ma, Q.; Zhou, H.; Liu, T.; Yuan, J.; Liu, P.; You, Y.; and
Yang, H. 2023. Let’s reward step by step: Step-Level reward model as the Navigators for Reasoning. arXiv preprint
_arXiv:2310.10080._
Shao, Z.; Wang, P.; Zhu, Q.; Xu, R.; Song, J.; Zhang, M.; Li,
Y.; Wu, Y.; and Guo, D. 2024. Deepseekmath: Pushing the
limits of mathematical reasoning in open language models.
_arXiv preprint arXiv:2402.03300._
Wang, P.; Li, L.; Shao, Z.; Xu, R.; Dai, D.; Li, Y.; Chen, D.;
Wu, Y.; and Sui, Z. 2024. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. In Pro_ceedings of the 62nd Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers), 9426–_
9439.
Wang, X.; Wei, J.; Schuurmans, D.; Le, Q. V.; Chi, E. H.;
Narang, S.; Chowdhery, A.; and Zhou, D. 2022. Self-Consistency Improves Chain of Thought Reasoning in Language Models. In The Eleventh International Conference on
_Learning Representations._
Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.;
Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems,
35: 24824–24837.
Xie, Y.; Goyal, A.; Zheng, W.; Kan, M.-Y.; Lillicrap, T. P.;
Kawaguchi, K.; and Shieh, M. 2024. Monte Carlo Tree
Search Boosts Reasoning via Iterative Preference Learning.
_arXiv preprint arXiv:2405.00451._
Yang, A.; Yang, B.; Hui, B.; Zheng, B.; Yu, B.; Zhou, C.; Li,
C.; Li, C.; Liu, D.; Huang, F.; et al. 2024. Qwen2 Technical
Report. CoRR.
Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T.; Cao,
Y.; and Narasimhan, K. 2024. Tree of thoughts: Deliberate problem solving with large language models. Advances
_in Neural Information Processing Systems, 36._
Zhan, B.; Guo, T.; Li, X.; Hou, M.; Liang, Q.; Gao, B.; Luo,
W.; and Liu, Z. 2024. Knowledge tracing as language processing: A large-scale autoregressive paradigm. In Inter_national Conference on Artificial Intelligence in Education,_
177–191. Springer.
Zhang, D.; Li, J.; Huang, X.; Zhou, D.; Li, Y.; and Ouyang,
W. 2024. Accessing GPT-4 level Mathematical Olympiad
Solutions via Monte Carlo Tree Self-refine with LLaMa-3
8B. arXiv preprint arXiv:2406.07394.
Zhao, W. X.; Zhou, K.; Li, J.; Tang, T.; Wang, X.;
Hou, Y.; Min, Y.; Zhang, B.; Zhang, J.; Dong, Z.; et al.
2023. A survey of large language models. arXiv preprint
_arXiv:2303.18223._
Zheng, Y.; Li, X.; Huang, Y.; Liang, Q.; Guo, T.; Hou, M.;
Gao, B.; Tian, M.; Liu, Z.; and Luo, W. 2024. Automatic
-----
Lesson Plan Generation via Large Language Models with
Self-critique Prompting. In International Conference on Ar_tificial Intelligence in Education, 163–178. Springer._
Zhou, D.; Schärli, N.; Hou, L.; Wei, J.; Scales, N.; Wang,
X.; Schuurmans, D.; Cui, C.; Bousquet, O.; Le, Q. V.; et al.
2023. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. In The Eleventh Interna_tional Conference on Learning Representations._
-----
#### Implementation Details
##### Prompts
System message (Agent)
You should act as a guide. You will break down the process into individual, understandable guidance step-by-step, each
leading logically to the final result. I will follow your guidance by calculating the answer to each step with equations.
### Your response must meet the following requirements:
1. Never say anything not related to the math problem.
2. You should not include any calculations in your instruction as that is the student’s work.
3. If the current math problem is ready to be solved by following your next guidance, start it with “Now you can answer
the problem in this step.”.
4. If the final answer to the current math problem has been obtained, just say “The math problem has been solved.”
System message (World Model-GSM8K)
You are a student solving math problems under the instructions of the teacher. You should follow the step-by-step
guidance posed by the teacher by calculating the answer of each step with equations until you deduce the final answer
to the math problem.
### Your response must meet the following requirements:
1. Never talk about anything not related to the math problem.
2. Include the equation of this step.
3. If the guidance starts with “Now you can answer the problem in this step.”, you must find the final answer to the
problem in this step.
4. End with “The answer is” along with a single number to highlight the numerical (sub)answer (e.g. “The answer is
42.”).
System message (World Model-MATH)
You are a student solving math problems under the instructions of the teacher. You should follow the step-by-step
guidance posed by the teacher by calculating the answer of each step with equations until you deduce the final answer
to the math problem.
### Your response must meet the following requirements:
1. Include the equation of this step.
2. If the subquestion starts with “Now you can answer the problem in this step.”, you must find the final
answer to the problem in this step.
3. You must use the LaTeX code “\boxed” to highlight the final answer to the problem (e.g. “(9 + 1)^3 = 10^3”).
##### Hyperparameters
**MCTS** The hyperparameters of MCTS are shown in Table .1.
-----
| Hyperparameter | Value |
|---|---|
| n (n candidates) | 6 |
| depth limit | 8 |
| w_exp | 1.0 |
| temperature (agent) | 1.3 |
| temperature (world) | 0.7 |
| n iteration | 500 |

Table .1: Hyperparameters of MCTS
**Step-Level Reward Modeling** The hyperparameters for step-level reward modeling are shown in Table .2.
| Hyperparameter | Value |
|---|---|
| n instances | 2 |
| gpus per instance | 8 |
| per device train batch size | 16 |
| gradient accumulation steps | 2 |
| num train epochs | 2 |
| warmup ratio | 0.03 |
| learning rate | 1.41e-5 |
| weight decay | 0.1 |

Table .2: Hyperparameters of step-level reward modeling
**BS w/ SRM** The hyperparameters for BS w/ SRM are shown in Table .3.
| Hyperparameter | Value |
|---|---|
| n (n candidates) | 5 or 10 |
| beam size | 1 or 3 |
| temperature (agent) | 0.7 |
| temperature (world) | 0.0 |

Table .3: Hyperparameters of BS w/ SRM
##### Example
**Greedy Search Supervised by an SRM** For a better understanding of our definition of the mathematical reasoning process
supervised by the SRMs, we provide an example (Figure .1) of a greedy search, where the rewards are from the MO-SRM.
-----
Figure .1: An example of a case where the MO-SRM is used to supervise the GS.
-----
##### Additional Results
**Tendency of encouraging shorter paths**
We observed that the greedy search with the SRMs tends to encourage shorter reasoning paths, although the MCTS itself does
not explicitly include the path length as a preference (Figure .2). This observation may be due to insufficient exploitation during the
MCTS process, but we need further investigation to confirm this proposition in future studies.
Figure .2: Accuracy v.s. mean steps to correct solutions. Fewer steps to correct solutions tend to have higher accuracy.
-----
| [
"Zitao, Liu",
"Yiran, Ma",
"Zui, Chen",
"Zhuo, Liu",
"Mi, Tian",
"Weiqi, Luo",
"Tianqiao, Liu"
] | 2024-12-20T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2412.15904 | https://arxiv.org/abs/2412.15904 | https://www.semanticscholar.org/paper/43cae70c4a3bcda9a7192308063a3573e17b8567 |
What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective | What makes a difference in the post-training of LLMs? We investigate the training patterns of different layers in large language models (LLMs), through the lens of gradient, when training with different responses and initial models. We are specifically interested in how fast vs. slow thinking affects the layer-wise gradients, given the recent popularity of training LLMs on reasoning paths such as chain-of-thoughts (CoT) and process rewards. In our study, fast thinking without CoT leads to larger gradients and larger differences of gradients across layers than slow thinking (Detailed CoT), indicating the learning stability brought by the latter. Moreover, pre-trained LLMs are less affected by the instability of fast thinking than instruction-tuned LLMs. Additionally, we study whether the gradient patterns can reflect the correctness of responses when training different LLMs using slow vs. fast thinking paths. The results show that the gradients of slow thinking can distinguish correct and irrelevant reasoning paths. As a comparison, we conduct similar gradient analyses on non-reasoning knowledge learning tasks, on which, however, trivially increasing the response length does not lead to similar behaviors of slow thinking. Our study strengthens fundamental understandings of LLM training and sheds novel insights on its efficiency and stability, which pave the way towards building a generalizable System-2 agent. Our code, data, and gradient statistics can be found in: https://github.com/MingLiiii/Layer_Gradient. | In this study, fast thinking without CoT leads to larger gradients and larger differences of gradients across layers than slow thinking (Detailed CoT), indicating the learning stability brought by the latter. | ## What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective
### Ming Li, University of Maryland, [email protected]
### Yanhong Li, University of Chicago, [email protected]
### Tianyi Zhou, University of Maryland, [email protected]
Abstract
##### What makes a difference in the post-training of LLMs? We investigate the training patterns of different layers in large language models (LLMs), through the lens of gradient, when training with different responses and initial models. We are specifically interested in how fast vs. slow thinking affects the layer-wise gradients, given the recent popularity of training LLMs on reasoning paths such as chain-of-thoughts (CoT) and process rewards. In our study, fast thinking without CoT leads to larger gradients and larger differences of gradients across layers than slow thinking (Detailed CoT), indicating the learning stability brought by the latter. Moreover, pre-trained LLMs are less affected by the instability of fast thinking than instruction-tuned LLMs. Additionally, we study whether the gradient patterns can reflect the correctness of responses when training different LLMs using slow vs. fast thinking paths. The results show that the gradients of slow thinking can distinguish correct and irrelevant reasoning paths. As a comparison, we conduct similar gradient analyses on non-reasoning knowledge learning tasks, on which, however, trivially increasing the response length does not lead to similar behaviors of slow thinking. Our study strengthens fundamental understandings of LLM training and sheds novel insights on its efficiency and stability, which pave the way towards building a generalizable System-2 agent. Our code, data, and gradient statistics can be found in: https://github.com/MingLiiii/Layer_Gradient.
### 1 Introduction
#### Large language models (LLMs) excel at various complex tasks (Zhao et al., 2023b; Xu et al., 2024). But their complexity notoriously makes them “black-box” whose inner mechanisms and training behaviors remain mysterious (Zhao et al., 2023a; Singh et al., 2024). How do they acquire reasoning capabilities or knowledge? When do
#### they make mistakes, and why? What change was made to each layer during training? This lack of transparency extends to issues like unintentional generation of harmful or biased content (Huang et al., 2024; Li et al., 2024a) or hallucinations (Huang et al., 2023) and might hinder further understanding and mitigation of them. Interpretable machine learning either develops models that are inherently interpretable (Rudin et al., 2022) or adopts post-hoc interpretability methods (Krishna et al., 2024), which do not alter the underlying model architecture but analyze models after training completes (Gurnee et al., 2023; Zou et al., 2023; Wang et al., 2023a; Wu et al., 2024). Despite the broad advancements in interpreting static models, gradients on dynamic training patterns of LLMs remains underexplored, especially on how these gradients scale and distribute across different layers. Such an understanding is crucial as it directly reflects how the LLMs perceive training data and align with them. Recently, there has been a growing trend for layer-related methods for LLMs: Gao et al. (2024) propose that higher layers of LLM need more LoRA; Li et al. (2024e) identify some of the layers in LLM related to safety, etc (Men et al., 2024; Chen et al., 2024). However, for the layer-wise analysis of LLMs, current research mainly employs probing methods (Alain and Bengio, 2017; Ju et al., 2024; Jin et al., 2024; Ye et al., 2024) that assess model behavior by observing changes in performance when certain layers are modified or removed (Wang et al., 2023a; Fan et al., 2024). These studies have been instrumental in illustrating how different layers capture and process various types of information while they often do not provide direct insights into the gradients that drive the learning process. Hence, we are motivated to move a step forward by directly investigating the layer-wise gradients inside LLMs. Our study focuses on the post-training gradient
-----
|Task Type|Dataset|Correct Irrelevant|None CoT Simplified CoT Detailed CoT (GPT4o)|Base LLMs Instructed LLMs|
|---|---|---|---|---|
|Math|AQuA GSM8K MATH-Algebra MATH-Counting MATH-Geometry|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|
|Commonsense|StrategyQA ECQA CREAK Sensemaking|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|
|Wiki Knowledge|Popular (Length 100) Popular (Length 500) Popular (Length 1000) Unpopular|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|✓ ✓ ✓ ✓|✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓|
##### Table 1: The scope of our study. We compare the gradient patterns across different layers when training pretrained base LLMs vs. instruction-finetuned LLMs using correct vs. irrelevant responses, slow vs. fast thinking (None CoT, Simplified CoT, and Detailed CoT generated by GPT-4o) responses, on three types of tasks: Math, Commonsense Reasoning, and Wiki Knowledge Learning. The comparison of slow vs. fast thinking only applies to the first two types of tasks, and it is replaced by the comparison between different lengths of responses on the third type of task. Our study is conducted on 5 pretrained base LLMs and 5 instruction-finetuned LLMs.
#### of LLMs for instruction-tuning on instruction-response pairs (Mishra et al., 2021; Wei et al., 2022; Wang et al., 2023b; Taori et al., 2023; Xu et al., 2023; Li et al., 2024d,c,b; Zhang et al., 2023). Instead of finetuning LLMs, we investigate the layer-wise gradients of 5 base LLMs and 5 instruction-finetuned LLMs on different data, including (1) three types of tasks including Math, Commonsense Reasoning, and Knowledge Extraction, with several datasets per task type; (2) correct vs. irrelevant responses; and (3) fast vs. slow thinking, which corresponds to different levels of Chain of Thought (CoT) (Wei et al., 2023) reasoning paths. Table 1 summarizes the scope of our study. Our study is based on comparisons of layer-wise gradients in terms of their spectral properties achieved by Singular Value Decomposition (SVD), focusing particularly on the projection layers for Query, Key, Value, and Output in transformer architectures (Vaswani et al., 2017). Specifically, we measure the gradient by its nuclear norm, compare the gradient norms across layers, and measure the sensitivity of gradients to different training data or initial models by the difference in gradient norm. These metrics serve as quantitative tools for examining the training behaviors and shed novel insights that could inform more efficient training strategies and analyses of model stability. **Main Contribution.**[1] This paper investigates
1In addition to the observations and analysis included in
###### this paper, all the gradient statistics (that cost thousands of GPU hours) within our experimental scope will be released in our GitHub repository. At the sample level, the instruction-response pair and the corresponding loss value are included. At the layer level, the mean, maximum, and minimum values, the Frobenius norm and nuclear norm, and the maximum
#### the behaviors of the gradient across different layers of LLMs through a spectral analysis of the layer-wise gradients. We compare the gradients of slow vs. fast thinking rationals when training different initial models using correct vs. irrelevant responses on different tasks. The difference in gradients reflects how these factors affect training dynamics, and, reversely, how sensitive LLM training is to these factors. Our observations reveal previously unrecognized patterns and shed novel insights for improving the stability and efficiency of LLM training. Our key findings are highlighted in the following:
1. Training LLMs for slow thinking (Detailed CoT) leads to similar gradient norms across different layers, while fast thinking (Simplified/None CoT) results in larger gradients (fast forgetting) of earlier layers and drastic differences across layers.
2. The gradient of slow thinking (Detailed CoT) helps distinguish correct responses from irrelevant responses. Without CoT, the gradient patterns of the two types of responses are similar.
3. The instruction-finetuned LLMs do not show superior capability over pre-trained base LLMs in identifying incorrect reasoning paths.
4. The above observations on reasoning tasks (math and commonsense) cannot be extended to knowledge learning tasks, where simply increasing response length does not produce gradient patterns similar to slow thinking.
-----
### 2 Methodology
#### 2.1 Preliminaries
In our experiments, we adopt the most commonly used instruction-tuning setting to investigate the gradients of LLM fine-tuning. Given an instruction-tuning dataset D, each data sample is represented by a tuple (ins, res), where ins denotes the instruction and res the corresponding response. Let $p_\theta$ denote the LLM with parameters $\theta$. In the instruction-tuning setting, $p_\theta$ is typically fine-tuned by minimizing the following loss on each sample (ins, res), in which $\text{res}_j$ denotes the $j$-th token of the response, $\text{res}_{<j}$ the tokens before it, and $l$ the token length of res:
$$\mathcal{L}_\theta = \frac{1}{l} \sum_{j=1}^{l} -\log p_\theta\left(\text{res}_j \mid \text{ins}, \text{res}_{<j}\right) \quad (1)$$
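For concreteness, a minimal sketch of how this per-sample loss can be computed with Hugging Face Transformers is shown below. The model name is a placeholder for any of the models in Section 3.1, and masking the instruction tokens out of the loss is the convention assumed here so that only the $l$ response tokens contribute, matching Eq. (1).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any causal LM from Section 3.1 could be used instead.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B")

def sample_loss(ins: str, res: str) -> torch.Tensor:
    """Eq. (1): average NLL of the response tokens given the instruction."""
    ins_ids = tokenizer(ins, return_tensors="pt").input_ids
    res_ids = tokenizer(res, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([ins_ids, res_ids], dim=1)

    # Only response tokens contribute to the loss; instruction tokens are masked with -100.
    labels = input_ids.clone()
    labels[:, : ins_ids.shape[1]] = -100

    out = model(input_ids=input_ids, labels=labels)
    return out.loss  # mean negative log-likelihood over the response tokens
```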
#### 2.2 Gradient Representation
The attention mechanism (Vaswani et al., 2017) is one of the most critical parts of modern LLMs and largely dominates their behavior. Thus, in this paper we mainly focus on the gradients of the layers related to the attention mechanism, including the Query (Q), Key (K), and Value (V) projection layers and the subsequent output projection layer, denoted Output (O). Assuming the LLM contains $N$ attention layers in total, after the loss calculation and backpropagation, the resulting gradient of each projection layer is a matrix with the same shape as its weights, denoted $G_{Q,i}$, $G_{K,i}$, $G_{V,i}$, and $G_{O,i}$ for the corresponding projection layers, where $i \in [0, N-1]$ is the index of the layer in the LLM.

Due to the dramatically large number of layers and parameters of modern large language models, it is unrealistic to directly inspect these large gradient matrices. Motivated by the SVD-based analyses used in this area (Biderman et al., 2024; Carlini et al., 2024), we use the nuclear norm (the $\ell_1$ norm of the singular values) to represent the characteristics of each layer's gradient, especially its strength. SVD is a factorization of a real or complex matrix that generalizes the eigendecomposition of a square normal matrix to any $m \times n$ matrix via an extension of the polar decomposition. Specifically, a gradient matrix $G_{X,i} \in \mathbb{R}^{m \times n}$ with $X \in \{Q, K, V, O\}$ can be decomposed as:

$$G_{X,i} = U \Sigma V^{T} \quad (2)$$

where $U \in \mathbb{R}^{m \times m}$ is an orthogonal matrix containing the left singular vectors, $\Sigma \in \mathbb{R}^{m \times n}$ is a diagonal matrix with singular values $\sigma_1, \sigma_2, \ldots, \sigma_{\min\{m,n\}}$, and $V \in \mathbb{R}^{n \times n}$ is an orthogonal matrix containing the right singular vectors. For simplicity, the subscripts of these intermediate matrices are omitted.

**Nuclear Norm:** The nuclear norm of $G$ is defined as the $\ell_1$ norm of the singular values, which reflects the sparsity of the spectrum and serves as a convex surrogate of the matrix rank. Hence, it quantifies not only the gradient magnitude but also the concentration of the spectrum on its top few singular values, which is vital for understanding the gradient patterns in each layer:

$$s_{X,i} = \|G_{X,i}\|_* = \sum_{j=1}^{\min\{m,n\}} |\sigma_j| \quad (3)$$
**σ1 Ratio:** We define the σ1 ratio as the ratio of the largest singular value to the nuclear norm, which indicates the concentration of gradient effects, i.e., how much of the gradient's magnitude is captured by its most principal component; to some extent, it represents the diversity of directions of the current gradient[2]:

$$r_{X,i} = \frac{\sigma_1}{s_{X,i}} = \frac{\sigma_1}{\|G_{X,i}\|_*} \quad (4)$$
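As an illustration, the sketch below backpropagates the loss of a single (ins, res) pair and computes the nuclear norm $s_{X,i}$ (Eq. 3) and σ1 ratio $r_{X,i}$ (Eq. 4) of each Q/K/V/O projection gradient. The parameter-name patterns (`q_proj`, `k_proj`, `v_proj`, `o_proj`, `model.layers.<i>`) follow Llama/Qwen-style checkpoints and are an assumption; other architectures may use different names.

```python
import torch

PROJ_KEYS = {"q_proj": "Q", "k_proj": "K", "v_proj": "V", "o_proj": "O"}

def layerwise_svd_stats(model, loss):
    """Return {("Q", i): (nuclear_norm, sigma1_ratio), ...} for all attention projections."""
    model.zero_grad()
    loss.backward()

    stats = {}
    for name, param in model.named_parameters():
        if param.grad is None or not name.endswith("weight"):
            continue
        for key, label in PROJ_KEYS.items():
            if key in name:
                # Layer index parsed from names like "model.layers.12.self_attn.q_proj.weight"
                # (naming convention assumed; adjust for other architectures).
                layer_idx = int(name.split(".layers.")[1].split(".")[0])
                G = param.grad.detach().float()
                sigma = torch.linalg.svdvals(G)       # singular values of G_{X,i}, descending
                s = sigma.sum().item()                # nuclear norm, Eq. (3)
                r = (sigma[0] / sigma.sum()).item()   # sigma_1 ratio, Eq. (4)
                stats[(label, layer_idx)] = (s, r)
    return stats
```

Here `loss` would be the per-sample loss from the previous sketch, computed for a single (ins, res) pair so that the gradient reflects that sample alone.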
#### 2.3 Metrics of Gradient Difference
In our experimental analysis, the nuclear norm $s_{X,i}$ of each layer is not investigated individually; instead, we study its overall dynamics across all layers of the LLM. For simplicity, we denote the nuclear-norm values $s_{X,i}$ of a given projection type across all layers as a curve $s_X$. To analyze these gradient results, visualizing the layer-wise curves is one of the most important tools for gaining a qualitative understanding of how the gradients change across layers. However, quantitative analysis is still required for a better understanding of the gradient representations.

**Gradient-Norm Difference between Layers.** Considering that both the fluctuation and the scale are important for a gradient curve, we utilize the Mean
2 Due to the page limits, we do not include analysis based on the σ1 ratio in our main content, but the curves are all presented in the appendix. These statistics can also be found in our GitHub repository.
-----
Absolute Difference (MAD) to represent a gradient curve. Specifically, the MAD of the curve $s_X$, denoted $\mathrm{MAD}_{s_X}$, is calculated as:

$$\mathrm{MAD}_{s_X} = \frac{1}{N-1} \sum_{i=1}^{N-1} \left| s_{X,i+1} - s_{X,i} \right| \quad (5)$$
where $N$ is the total number of layers of the target LLM. MAD measures the average magnitude of change between consecutive points. Unlike the standard deviation, MAD focuses on the direct differences between successive values without squaring them or taking the direction (positive or negative) of the change into account. It is useful for quantifying the overall variability of the data, especially for detecting fluctuations and local changes, which matter more here than global trends.

**Gradient-Norm Difference between Initial-models or Training-responses.** In addition to characterizing each individual curve, the pair-wise comparison between two curves across layers is also important for our analysis. For this purpose, we use the layer-wise Relative Difference (RD) as our metric. At each layer, the RD between two values $s^{(1)}_{X,i}$ and $s^{(2)}_{X,i}$ is calculated as:

$$\mathrm{RD}_{X,i} = \frac{s^{(2)}_{X,i} - s^{(1)}_{X,i}}{s^{(1)}_{X,i}} \quad (6)$$

where $s^{(1)}_{X,i}$ is used as the reference value. For this metric, the projection layer $X$ and the layer index $i$ must be kept the same for the value to be meaningful.
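For reference, a minimal NumPy sketch of the two metrics follows, along with the Early/Middle/Last split reported in Table 2; the equal-thirds partition of the layers is our assumption about the exact region boundaries.

```python
import numpy as np

def mad(curve: np.ndarray) -> float:
    """Eq. (5): mean absolute difference between consecutive layers of a curve s_X."""
    return float(np.mean(np.abs(np.diff(curve))))

def relative_difference(curve_ref: np.ndarray, curve_new: np.ndarray) -> np.ndarray:
    """Eq. (6): layer-wise relative difference, with curve_ref as the reference s^(1)."""
    return (curve_new - curve_ref) / curve_ref

def mad_by_region(curve: np.ndarray) -> dict:
    """Early/Middle/Last/All MADs as in Table 2 (equal-thirds layer split assumed)."""
    thirds = np.array_split(np.arange(len(curve)), 3)
    return {
        "Early": mad(curve[thirds[0]]),
        "Middle": mad(curve[thirds[1]]),
        "Last": mad(curve[thirds[2]]),
        "All": mad(curve),
    }
```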
### 3 Experimental Setup
#### 3.1 Models
We investigate the gradients of 10 models, including 5 pre-trained base models, Qwen2-1.5B (Yang et al., 2024), gemma-2-2b (Team et al., 2024), Llama-3.1-8B (Dubey et al., 2024), gemma-2-9b (Team et al., 2024), and Llama-2-7b-hf (Touvron et al., 2023), and their instruction-tuned versions, Qwen2-1.5B-Instruct, gemma-2-2b-it, Llama-3.1-8B-Instruct, gemma-2-9b-it, and Llama-2-7b-chat-hf. The main illustrations in the paper are based on the results of the Qwen2-1.5B models; results for the other models are provided in the appendix.
3.2 Datasets
The datasets we use include three categories: Math, Commonsense Reasoning, and Wiki Knowledge. The Math and Commonsense Reasoning datasets are used to explore the gradients when LLMs are fine-tuned to learn a reasoning process (slow or fast thinking), and the Wiki Knowledge datasets are used to explore the gradients when LLMs are fine-tuned to learn pure knowledge. Because computing gradients is slow, we randomly sample 500 data instances per task for our extensive experiments. The scope of our experiments is shown in Table 1. Detailed descriptions of these datasets and corresponding examples can be found in Appendix A.

3.2.1 Math Reasoning

For the math category, AQuA (Ling et al., 2017), GSM8K (Cobbe et al., 2021), and MATH (Hendrycks et al., 2021) are used. The original ground-truth format for AQuA is an option from A to E, and for GSM8K it is the resulting digits; additional CoT reasoning paths are provided as well. In our experiments, both learning the original ground truth (fast thinking) and learning the CoT plus the ground truth are investigated. The ground truth for MATH is the complete solution to the question, which can be regarded as an answer with CoT. We select the question types Algebra, Counting, and Geometry from MATH for our experiments. Moreover, to further explore the effects of more detailed reasoning paths (slow thinking), GPT-4o is used to obtain a detailed version of the CoT paths.

3.2.2 Commonsense Reasoning

For the commonsense reasoning category, four datasets are used: StrategyQA (Geva et al., 2021), ECQA (Aggarwal et al., 2021), CREAK (Onoe et al., 2021), and Sensemaking, obtained from the FLAN collection (Longpre et al., 2023). The original ground-truth format is a yes/no option for StrategyQA, a word option for ECQA, a yes/no option for CREAK, and a choice between Sentence A and Sentence B for Sensemaking. For all these datasets, the corresponding human-annotated CoT reasoning paths are provided, and both learning the original ground truth and learning the CoT plus the ground truth are investigated in our experiments. Similarly, additional GPT-4o-generated CoTs are also investigated.

3.2.3 Wiki Knowledge Learning

This group of tasks represents LLMs learning pure knowledge-intensive responses without any reasoning process.
-----
##### Figure 1: The nuclear norm of gradients across different layers (x-axis) when trained with fast to slow reasoning paths (left to right columns), on (a) AQuA and (b) ECQA datasets. When detailed CoT is utilized for training, the gradient norm tends to be similar across layers (on both math and commonsense reasoning tasks). Note that the y-axis scale for None CoT is larger, while the scales for Simplified CoT and Detailed CoT are the same.
For Wikipedia knowledge, we categorize Wiki articles into two groups, popular and unpopular, based on their total page views in 2021, using the Pageviews Analysis tool[3] provided by Wikimedia's RESTBase API. After selection, we extract the first 100, 500, and 1000 tokens (respecting paragraph boundaries) from each article for the controlled experiments on the effect of response length. For the unpopular wiki pages, we use the least viewed articles of 2021[4]. As unpopular wiki pages are generally short, we take the full articles; if the length exceeds 1024 tokens, we truncate with respect to sentence boundaries.
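A sketch of the length-controlled extraction described above is given below, under our assumption about what "respecting paragraph boundaries" means (keep whole paragraphs until the next one would exceed the token budget); the tokenizer is whichever model tokenizer is being studied.

```python
def extract_prefix(article: str, tokenizer, max_tokens: int) -> str:
    """Keep whole paragraphs until adding the next one would exceed max_tokens."""
    kept, total = [], 0
    for para in article.split("\n\n"):
        n = len(tokenizer(para, add_special_tokens=False).input_ids)
        if kept and total + n > max_tokens:
            break
        kept.append(para)
        total += n
    return "\n\n".join(kept)

# Controlled-length responses for the popular-wiki experiments (100 / 500 / 1000 tokens).
# responses = {L: extract_prefix(article, tokenizer, L) for L in (100, 500, 1000)}
```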
### 4 Empirical Analysis
#### 4.1 Math and Commonsense Reasoning Tasks
This section focuses on tasks related to CoT reasoning, which include the datasets within the Math and Commonsense Reasoning task types.
4.1.1 Slow vs. Fast Thinking

In this section, we investigate the gradient behaviors when LLMs learn responses that contain a reasoning process, i.e., CoT paths. For samples in the MATH dataset, the original responses already contain the necessary steps to solve the question, which we denote as the Simplified CoT setting. For the
[3https://pageviews.wmcloud.org](https://pageviews.wmcloud.org)
[4https://en.wikipedia.org/wiki/User:Colin_M/Least_viewed_articles_in_2021](https://en.wikipedia.org/wiki/User:Colin_M/Least_viewed_articles_in_2021)
##### Figure 2: The nuclear norm of gradients across different layers (x-axis) when trained with Correct vs. Irrelevant responses (a) without CoT (fast thinking); (b) with detailed CoT (slow thinking), on the AQuA dataset. Gradient norm can help identify correct responses when provided Detailed CoT. But this does not extend to gradients without CoT.
remaining datasets in these two task types, both the pure answer (resulting digits or options) and short CoT paths are provided, which we denote as the None CoT and Simplified CoT settings. These configurations help us understand the gradients when LLMs learn responses with or without CoT, potentially revealing the advantages of CoT training. Moreover, the provided CoT paths are all quite simplified, which might still make it hard for LLMs to build the connections from the question to the CoT path to the final answer. Thus we further prompt GPT-4o to gen
-----
|Dataset|Curve|CoT|MAD (Early)|MAD (Middle)|MAD (Last)|MAD (All)|
|---|---|---|---|---|---|---|
|AQuA|sQ|None|5.76|4.13|3.49|4.42|
|AQuA|sQ|Simplified|0.89|0.52|0.77|0.69|
|AQuA|sQ|Detailed|0.23|0.28|0.29|0.28|
|AQuA|sK|None|7.20|6.29|8.40|7.06|
|AQuA|sK|Simplified|1.01|0.56|1.11|0.81|
|AQuA|sK|Detailed|0.22|0.21|0.42|0.27|
|AQuA|sV|None|37.29|16.12|3.94|17.32|
|AQuA|sV|Simplified|5.08|2.14|0.86|2.36|
|AQuA|sV|Detailed|1.15|0.62|0.33|0.64|
|AQuA|sO|None|23.79|14.35|3.04|12.91|
|AQuA|sO|Simplified|3.31|2.18|0.63|1.97|
|AQuA|sO|Detailed|0.82|0.75|0.29|0.64|
|ECQA|sQ|None|8.00|7.01|5.01|6.53|
|ECQA|sQ|Simplified|1.11|0.70|0.86|0.85|
|ECQA|sQ|Detailed|0.30|0.37|0.26|0.35|
|ECQA|sK|None|11.51|11.07|13.32|11.11|
|ECQA|sK|Simplified|1.34|1.24|1.01|1.13|
|ECQA|sK|Detailed|0.26|0.29|0.54|0.34|
|ECQA|sV|None|59.33|24.83|7.46|27.40|
|ECQA|sV|Simplified|8.53|3.55|1.66|4.01|
|ECQA|sV|Detailed|1.56|0.74|0.48|0.82|
|ECQA|sO|None|39.20|19.50|5.12|19.38|
|ECQA|sO|Simplified|5.56|3.33|1.41|3.22|
|ECQA|sO|Detailed|1.00|0.97|0.52|0.85|
##### Table 2: The mean absolute differences (MAD) of the gradient's nuclear norm for the K, Q, V, and O projection layers. Early, Middle, Last, and All denote the MAD scores calculated over the early, middle, last, and all layers of the LLM. A consistent decrease is observed in all layers when LLMs are trained to produce more detailed reasoning paths (slow thinking).
|Dataset|CoT|Curve|RD Average|Top-5 Different Layer Idx|
|---|---|---|---|---|
|AQuA|Detailed|sQ|0.81|3, 0, 1, 4, 2|
|AQuA|Detailed|sK|0.90|3, 4, 1, 0, 7|
|AQuA|Detailed|sV|0.81|3, 1, 4, 2, 0|
|AQuA|Detailed|sO|0.72|0, 1, 4, 2, 3|
|ECQA|Detailed|sQ|0.46|0, 3, 2, 1, 4|
|ECQA|Detailed|sK|0.50|0, 3, 1, 2, 4|
|ECQA|Detailed|sV|0.47|3, 1, 2, 4, 0|
|ECQA|Detailed|sO|0.41|0, 1, 2, 3, 4|
##### Table 3: The average relative difference (RD) and the indexes of the top-5 layers that have the greatest gap between the curves of learning the correct and irrelevant responses. It shows that the earlier layers change more sensitively to irrelevant responses.
erate a more detailed version of the reasoning paths, which we denote as Detailed CoT, to compare the effect of different CoT paths, i.e., slow vs. fast thinking. The detailed statistics for the gradient curves on AQuA and ECQA under different CoT settings are provided in Table 2, and the visualization of the gradient curves on AQuA and ECQA is shown in Figure 1; all results in this table are based on the Qwen2-1.5B model, and more results can be found in the appendix. When no CoT reasoning paths are provided for LLMs to learn (fast thinking), the mean absolute differences (MADs) are the largest on all curves of the different projection layers, representing a severe fluctuation of the gra
dient scales across all layers of the LLM, which might cause instability during training (Glorot and Bengio, 2010). However, when CoT paths are given, the MADs drop accordingly, especially when the detailed CoT paths are given (slow thinking), as visualized in Figure 1. The large scale indicates that the response distributions the LLM is asked to learn have large discrepancies with what it has learned during the pretraining phase, which might harm the performance of the original pre-trained model (Ghosh et al., 2024; Biderman et al., 2024). Our findings align with the current success of utilizing more detailed CoT reasoning paths or responses for training (Mitra et al., 2023; Li et al., 2023) and provide another perspective for understanding the effectiveness of slow thinking.
4.1.2 Effect of Response Correctness
In this section, we investigate the gradient behaviors when LLMs learn correct or irrelevant responses with different reasoning paths. As before, for datasets where pure ground truths are given, we investigate the gradient behaviors under three settings, None CoT, Simplified CoT, and Detailed CoT; otherwise only the last two settings can be investigated. In the None CoT setting, we directly shuffle the answers across the dataset and make sure every question receives an incorrect answer. In the Simplified CoT and Detailed CoT settings, we split every CoT path into individual sentences and then shuffle the sentences across the dataset. Under this construction, each sentence in the response is still complete, while the relations across sentences become logically wrong, simulating irrelevant CoT reasoning paths. In these experiments, we investigate whether LLMs are able to identify the irrelevant responses during training with slow or fast thinking, as reflected by their gradient behaviors. The visualizations of LLMs learning on correct and irrelevant responses are presented in Figure 2, which covers two settings: (a) None CoT and (b) Detailed CoT. It is widely believed that LLMs acquire most of their knowledge in the pretraining phase (Zhou et al., 2023), so when LLMs are forced to learn responses that conflict with their internal knowledge, more effort should be required (larger gradients) for this false alignment. However, from the visualizations, it can be observed that when no CoT paths are given, the gradient behaviors between learning the correct and nonsense responses are almost identical, and their relative difference values on all the projection
-----
##### Figure 3: The nuclear norm of gradients across different layers (x-axis) on (a) pre-trained base LLM vs. (b) instruction-finetuned LLM. On both models, training using detailed CoT (slow thinking) reduces the gradient norm and difference across layers. However, the two models’ gradient patterns differ when training with fast thinking (Simplified/None CoT). The y-axis scale of None CoT is greater than that of Simplified CoT and Detailed CoT.
layers are less than 0.01. Thus, this phenomenon indicates that LLMs cannot build the necessary mapping from the question to the answer without explicit reasoning paths being given. On the contrary, when the detailed CoT reasoning paths are provided in the responses, the gradient behaviors differ, mainly reflected in the larger scale of the gradients. This phenomenon indicates that LLMs can, to some extent, identify that the responses to be learned conflict with their internal knowledge, thus requiring more effort to adapt to the new nonsense responses. We further investigate whether there are specific layers that are directly related to LLMs' capability to perceive irrelevant knowledge, reflected by larger differences between the gradients of correct and irrelevant responses. As shown in Table 3, the relative difference values of the nuclear-norm curves and the indexes of the top-5 layers with the greatest gap are presented. The results on the nuclear-norm curves show that the earlier layers are more sensitive to nonsense responses, which might indicate the potential effects of the earlier layers.
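As an illustration of the irrelevant-response construction described in Section 4.1.2 (answer shuffling for None CoT, cross-dataset sentence shuffling for the CoT settings), the sketch below is one possible implementation; the derangement-style shuffle and the naive ". "-based sentence splitting are our assumptions rather than the paper's exact procedure.

```python
import random

def shuffle_answers(answers: list[str]) -> list[str]:
    """None CoT setting: reassign answers so that no question keeps its own answer."""
    idx = list(range(len(answers)))
    while True:
        random.shuffle(idx)
        if all(i != j for i, j in enumerate(idx)):  # derangement: no fixed points
            break
    return [answers[j] for j in idx]

def shuffle_cot_sentences(cots: list[str]) -> list[str]:
    """CoT settings: pool sentences across the dataset and redistribute them,
    keeping each response's original sentence count (naive sentence split assumed)."""
    split = [c.split(". ") for c in cots]
    pool = [s for sents in split for s in sents]
    random.shuffle(pool)
    out, cursor = [], 0
    for sents in split:
        out.append(". ".join(pool[cursor:cursor + len(sents)]))
        cursor += len(sents)
    return out
```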
4.1.3 Effect of Initial Models
In this section, we compare the gradient behaviors between pre-trained base LLMs and aligned, instructed LLMs. For each instructed LLM, we utilize the officially provided conversation template, avoiding potential misalignment. In these ex
periments, we observe that the instructed LLMs do not perform much better at identifying the irrelevant responses, evidenced by the minor relative differences between the gradient curves obtained from base LLMs and aligned LLMs. However, as shown in Figure 3, although the tendencies are consistent for both types of LLMs, i.e., detailed CoT reasoning paths make the scale and fluctuation of the gradients smaller and smoother, the gradients for simplified CoT show a large discrepancy between the two types of LLMs. This discrepancy means that the instructed LLMs need more effort than the pre-trained LLMs to learn the simplified CoT paths. The phenomenon suggests that (1) the distribution of the simplified CoT responses might have non-negligible discrepancies with the instruction datasets used for training these LLMs; and (2) the behavior of the gradient curves might serve as a measurement of how well a specific data sample aligns with the internal knowledge of an LLM, which could be useful in continued-training settings.
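For the instructed LLMs, one way to apply the officially provided conversation template before computing the loss is the tokenizer's built-in chat-template support in Hugging Face Transformers; a minimal sketch follows, where the model name is a placeholder and the exact prompt layout is whatever template ships with the checkpoint.

```python
from transformers import AutoTokenizer

# Placeholder instruction-tuned checkpoint from Section 3.1.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

def format_with_template(ins: str) -> str:
    """Wrap the raw instruction in the model's official chat template."""
    messages = [{"role": "user", "content": ins}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
```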
4.2 Knowledge Learning Tasks
4.2.1 Effect of Response Length & Popularity
In this section, we investigate the effect of response length and popularity on the knowledge-intensive task. As shown in Figure 4, the left 3 columns represent the scenarios where LLMs learn popular knowledge with different response lengths, and the right-most column represents the scenario when
-----
##### Figure 4: The nuclear norm of gradients across different layers (x-axis) when trained with responses of different lengths (left 3 columns) and unpopular knowledge (rightmost column) on the Wiki knowledge learning (knowledge-intensive) task. Comparing “Short vs. Long”, “Popular vs. Unpopular”, and “Correct vs. Irrelevant” on the two types of models indicates: (1) longer response ≠ slower thinking. Unlike Figure 1, solely increasing the response length does not affect the gradient patterns; (2) unpopular knowledge triggers larger gradients; (3) unlike Figure 2, gradient norm cannot help judge the response’s correctness on knowledge-intensive tasks.
LLMs are learning unpopular knowledge. Comparing the left 3 columns of the figure, we observe that for the knowledge-intensive task, the length of the responses does not affect the gradient scales and fluctuations. This is largely different from the findings on the reasoning tasks, where detailed CoTs can largely reduce the gradient scale and fluctuation. This comparison further verifies that the effects of detailed CoT (slow thinking) in the responses are not caused by the increase in token length but by the detailed reasoning process.
On the contrary, as shown in the right-most column, when LLMs are learning unpopular knowledge, the scales and fluctuations increase dramatically, indicating that LLMs need more effort to learn this unpopular knowledge. This phenomenon is reasonable, as popular knowledge occurs frequently across diverse corpus sources, so learning it again amounts to an augmentation of already-seen knowledge.
#### 4.2.2 Effect of Response Correctness & Initial Models
Then we compare the curve differences when LLMs learn correct or nonsense responses, as shown in Figure 4. The almost identical curves between (a) and (c) show that the pre-trained LLM is not able to identify the nonsense knowledge it is learning, and the curves between (b) and (d) show that the instructed LLM also lacks this capability. This is again different from the findings on the reasoning tasks, where LLMs are able to identify the nonsense reasoning paths, as reflected by the increase of gradient scales and fluctuation. As for the effect of the instructed LLMs, comparing (a) with (b) and (c) with (d), a consistent increase in the gradient scales and fluctuation is observed, especially on the unpopular knowledge. This indicates that it is harder for in
-----
#### structed LLMs to learn knowledge that is new or not well-learned during the pre-training phase.
### 5 Conclusion
Our study reveals significant differences in gradient behaviors between fast- and slow-thinking training in LLMs, offering insights into how training dynamics influence these models. Specifically, we observe that slow thinking leads to stable gradient norms across layers, while fast thinking results in larger gradients and fluctuation across layers. Moreover, the gradients of slow thinking help distinguish correct responses from irrelevant responses, while without CoT, the gradient patterns of the two types of responses are similar. These observations on reasoning tasks cannot be extended to knowledge-learning tasks, where simply increasing response length does not produce gradient patterns similar to slow thinking.
### Limitations
Due to the page limit, only a small proportion of our results can be presented, which might weaken the findings of the paper. However, we try to include as many results as possible in the appendix through visualizations and statistics, hoping to provide further insights for the community. Moreover, the analysis in this paper focuses mainly on the strength of the layer-wise gradients; more metrics could be explored.
### References
##### Shourya Aggarwal, Divyanshu Mandowara, Vishwa- jeet Agrawal, Dinesh Khandelwal, Parag Singla, and Dinesh Garg. 2021. Explanations for Common- senseQA: New Dataset and Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3050–3065, Online. Association for Computational Linguistics.
Guillaume Alain and Yoshua Bengio. 2017. Under- standing intermediate layers using linear classifier probes.
Dan Biderman, Jacob Portes, Jose Javier Gonzalez Ortiz, Mansheej Paul, Philip Greengard, Connor Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, and John Patrick Cunning- ham. 2024. LoRA learns less and forgets less. Trans- actions on Machine Learning Research. Featured Certification.
##### Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvi- jotham, Thomas Steinke, Jonathan Hayase, A Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, et al. 2024. Stealing part of a production language model. arXiv preprint arXiv:2403.06634.
Xiaodong Chen, Yuxuan Hu, Jing Zhang, Yanling Wang, Cuiping Li, and Hong Chen. 2024. Streamlining redundant layers to compress large language models. Preprint, arXiv:2403.19135.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word prob- lems. Preprint, arXiv:2110.14168.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Bap- tiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Al- lonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Yuchen Hao, Yundi Qian, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, and Zhiwei Zhao. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.
Siqi Fan, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Shuo Shang, Aixin Sun, Yequan Wang, and Zhongyuan Wang. 2024. Not all layers of llms are necessary during inference. Preprint, arXiv:2403.02181.
Chongyang Gao, Kezhen Chen, Jinmeng Rao, Baochen Sun, Ruibo Liu, Daiyi Peng, Yawen Zhang, Xi- aoyuan Guo, Jie Yang, and VS Subrahmanian. 2024. Higher layers need more lora experts. Preprint, arXiv:2402.08562.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346– 361.
Sreyan Ghosh, Chandra Kiran Reddy Evuru, Sonal Ku- mar, Ramaneswaran S, Deepali Aneja, Zeyu Jin, Ra- mani Duraiswami, and Dinesh Manocha. 2024. A closer look at the limitations of instruction tuning. Preprint, arXiv:2402.05119.
Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural net- works. In Proceedings of the thirteenth international
-----
##### conference on artificial intelligence and statistics, pages 249–256. JMLR Workshop and Conference Proceedings.
Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. 2023. Finding neurons in a haystack: Case stud- ies with sparse probing. Transactions on Machine Learning Research.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language under- standing. In International Conference on Learning Representations.
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A survey on hallucination in large lan- guage models: Principles, taxonomy, challenges, and open questions. Preprint, arXiv:2311.05232.
Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qi- hui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wen- han Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, and Yue Zhao. 2024. Trustllm: Trustworthiness in large lan- guage models. Preprint, arXiv:2401.05561.
Mingyu Jin, Qinkai Yu, Jingyuan Huang, Qingcheng Zeng, Zhenting Wang, Wenyue Hua, Haiyan Zhao, Kai Mei, Yanda Meng, Kaize Ding, Fan Yang, Mengnan Du, and Yongfeng Zhang. 2024. Ex- ploring concept depth: How large language models acquire knowledge at different layers? Preprint, arXiv:2404.07066.
Tianjie Ju, Weiwei Sun, Wei Du, Xinwei Yuan, Zhaochun Ren, and Gongshen Liu. 2024. How large language models encode context knowledge? a layer- wise probing study. In Proceedings of the 2024 Joint International Conference on Computational Linguis- tics, Language Resources and Evaluation (LREC- COLING 2024), pages 8235–8246, Torino, Italia. ELRA and ICCL.
Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghan- deharioun, Sameer Singh, and Himabindu Lakkaraju. 2024. Post hoc explanations of language models can improve language models. Advances in Neural Information Processing Systems, 36.
##### Ming Li, Jiuhai Chen, Lichang Chen, and Tianyi Zhou. 2024a. Can LLMs speak for diverse people? tuning LLMs via debate to generate controllable controver- sial statements. In Findings of the Association for Computational Linguistics ACL 2024, pages 16160– 16176, Bangkok, Thailand and virtual meeting. As- sociation for Computational Linguistics.
Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Jiuxi- ang Gu, and Tianyi Zhou. 2024b. Selective reflection- tuning: Student-selected data recycling for LLM instruction-tuning. In Findings of the Association for Computational Linguistics ACL 2024, pages 16189– 16211, Bangkok, Thailand and virtual meeting. As- sociation for Computational Linguistics.
Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, and Tianyi Zhou. 2023. Reflection-tuning: Recycling data for better instruction-tuning. In NeurIPS 2023 Workshop on Instruction Tuning and Instruction Fol- lowing.
Ming Li, Yong Zhang, Shwai He, Zhitao Li, Hongyu Zhao, Jianzong Wang, Ning Cheng, and Tianyi Zhou. 2024c. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. In Proceedings of the 62nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 14255–14273, Bangkok, Thailand. Association for Computational Linguistics.
Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, and Jing Xiao. 2024d. From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7595–7628, Mexico City, Mexico. Association for Computational Linguistics.
Shen Li, Liuyi Yao, Lan Zhang, and Yaliang Li. 2024e.
Safety layers in aligned large language models: The key to llm security. Preprint, arXiv:2408.17003.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun- som. 2017. Program induction by rationale genera- tion: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–167, Vancouver, Canada. Association for Computational Linguistics.
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The flan collection: Designing data and methods for effective instruction tuning. Preprint, arXiv:2301.13688.
Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, and Weipeng Chen. 2024. Shortgpt: Layers in large language mod- els are more redundant than you expect. Preprint, arXiv:2403.03853.
-----
##### Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Cross-task generaliza- tion via natural language crowdsourcing instructions. arXiv preprint arXiv:2104.08773.
Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes, Sahaj Agrawal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Ag- garwal, Hamid Palangi, Guoqing Zheng, Corby Ros- set, Hamed Khanpour, and Ahmed Awadallah. 2023. Orca 2: Teaching small language models how to rea- son. Preprint, arXiv:2311.11045.
Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, and Greg Durrett. 2021. Creak: A dataset for com- monsense reasoning over entity knowledge. Preprint, arXiv:2109.01653.
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, and Chudi Zhong. 2022. Interpretable machine learning: Fundamental prin- ciples and 10 grand challenges. Statistic Surveys, 16:1–85.
Chandan Singh, Jeevana Priya Inala, Michel Galley, Rich Caruana, and Jianfeng Gao. 2024. Rethinking interpretability in the era of large language models. arXiv preprint arXiv:2402.01761.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca.
Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupati- raju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, Anthony Laforge, Anto- nia Paterson, Ben Bastian, Bilal Piot, Noah Fiedel, Armand Joulin, Kathleen Kenealy, Robert Dadashi, and Alek Andreev. 2024. Gemma 2: Improving open language models at a practical size. Preprint, arXiv:2408.00118.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, An- thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura,
##### Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di- ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar- tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine- tuned chat models. Preprint, arXiv:2307.09288.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.
Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. 2023a. Label words are anchors: An information flow perspective for understanding in-context learning. In Proceed- ings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9840–9855, Singapore. Association for Computational Linguis- tics.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. Self-instruct: Aligning language models with self-generated instructions. In Proceed- ings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508, Toronto, Canada. Association for Computational Linguistics.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022. Finetuned language mod- els are zero-shot learners. In International Confer- ence on Learning Representations.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elic- its reasoning in large language models. Preprint, arXiv:2201.11903.
Zhengxuan Wu, Atticus Geiger, Thomas Icard, Christo- pher Potts, and Noah Goodman. 2024. Interpretabil- ity at scale: Identifying causal mechanisms in alpaca. Advances in Neural Information Processing Systems, 36.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. Preprint, arXiv:2304.12244.
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng
-----
##### Tao, and Tianyi Zhou. 2024. A survey on knowl- edge distillation of large language models. ArXiv, abs/2402.13116.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Hao- ran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Ke- qin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. 2024. Qwen2 techni- cal report. Preprint, arXiv:2407.10671.
Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen- Zhu. 2024. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process. Preprint, arXiv:2407.20311.
Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tian- wei Zhang, Fei Wu, and Guoyin Wang. 2023. In- struction tuning for large language models: A survey. Preprint, arXiv:2308.10792.
Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, and Mengnan Du. 2023a. Explainability for large language models: A survey. Preprint, arXiv:2309.01029.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Be- ichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023b. A survey of large language models. Preprint, arXiv:2303.18223.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023. Lima: Less is more for alignment. Preprint, arXiv:2305.11206.
Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, and Dan Hendrycks. 2023. Representation engineering: A top-down approach to ai transparency. Preprint, arXiv:2310.01405.
-----
### Table of Contents
A Data Examples

B Results on gemma-2-2b
  B.1 Pre-trained LLM on Correct Responses
    B.1.1 Reasoning Tasks
    B.1.2 Wiki Tasks
  B.2 Pre-trained LLM on Irrelevant Responses
    B.2.1 Reasoning Tasks
    B.2.2 Wiki Tasks
  B.3 Instructed LLM on Correct Responses
    B.3.1 Reasoning Tasks
    B.3.2 Wiki Tasks
  B.4 Instructed LLM on Irrelevant Responses
    B.4.1 Reasoning Tasks
    B.4.2 Wiki Tasks

C Results on Llama-3.1-8B
  C.1 Pre-trained LLM on Correct Responses
    C.1.1 Reasoning Tasks
    C.1.2 Wiki Tasks
  C.2 Pre-trained LLM on Irrelevant Responses
    C.2.1 Reasoning Tasks
    C.2.2 Wiki Tasks
  C.3 Instructed LLM on Correct Responses
    C.3.1 Reasoning Tasks
    C.3.2 Wiki Tasks
  C.4 Instructed LLM on Irrelevant Responses
    C.4.1 Reasoning Tasks
    C.4.2 Wiki Tasks

D Results on Qwen2-1.5B
  D.1 Pre-trained LLM on Correct Responses
    D.1.1 Reasoning Tasks
    D.1.2 Wiki Tasks
  D.2 Pre-trained LLM on Irrelevant Responses
    D.2.1 Reasoning Tasks
    D.2.2 Wiki Tasks
  D.3 Instructed LLM on Correct Responses
    D.3.1 Reasoning Tasks
    D.3.2 Wiki Tasks
  D.4 Instructed LLM on Irrelevant Responses
    D.4.1 Reasoning Tasks
    D.4.2 Wiki Tasks

E Results on Llama-2-7B-hf
  E.1 Pre-trained LLM on Correct Responses
    E.1.1 Reasoning Tasks
    E.1.2 Wiki Tasks
  E.2 Pre-trained LLM on Irrelevant Responses
    E.2.1 Reasoning Tasks
    E.2.2 Wiki Tasks
  E.3 Instructed LLM on Correct Responses
    E.3.1 Reasoning Tasks
    E.3.2 Wiki Tasks
  E.4 Instructed LLM on Irrelevant Responses
    E.4.1 Reasoning Tasks
    E.4.2 Wiki Tasks
-----
### A Data Examples
#### In this section we provide examples for each of our datasets: AQuA (Figure 5), CREAK (Figure 6), ECQA (Figure 7), GSM8K (Figure 8), Sensemak- ing (Figure 9), StrategyQA (Figure 10), MATH- Algebra (Figure 11), MATH-Counting (Figure 12), MATH-Geometry (Figure 13), Wiki Popular (Fig- ure 14), and Wiki Unpopular (Figure 15).
-----
#### Question: A 600 meter long train crosses a signal post in 40 seconds. How long will it take to cross a 3 kilometer long bridge, at the same speed? Options: (A) 4 min, (B) 2 min, (C) 8 min, (D) 9 min, (E) 5 min
[None CoT] (A)
[Simplified CoT] S = 600/40 = 15 mps S = 3600/15 = 240 sec = 4 min The final answer is (A)
[Detailed CoT] To find the time it will take for a 600-meter-long train to cross a 3-kilometer-long bridge, we need to follow these steps: Step 1: Determine the speed of the train - The train crosses a signal post (which can be treated as a point) in 40 seconds. - This means the train travels its own length (600 meters) in 40 seconds. Calculate the speed of the train:
$$\text{Speed} = \frac{\text{Distance}}{\text{Time}} = \frac{600\ \text{meters}}{40\ \text{seconds}} = 15\ \text{meters/second}$$
Step 2: Determine the total distance to be covered when crossing the bridge - The length of the bridge is 3 kilometers, which is 3000 meters. - When crossing the bridge, the train needs to cover the length of the bridge plus its own length. - Total distance to be covered = Length of the bridge + Length of the train
Total distance = 3000 meters + 600 meters
Total distance = 3600 meters
Step 3: Calculate the time to cross the bridge - Using the speed calculated earlier, we now find the time to cover 3600 meters.

$$\text{Time} = \frac{\text{Distance}}{\text{Speed}} = \frac{3600\ \text{meters}}{15\ \text{meters/second}} = 240\ \text{seconds}$$
Step 4: Convert the time into minutes - There are 60 seconds in a minute.
$$\text{Time in minutes} = \frac{240\ \text{seconds}}{60\ \text{seconds/minute}} = 4\ \text{minutes}$$
Thus, it will take the train 4 minutes to cross the 3-kilometer-long bridge. Conclusion: The correct answer is (A) 4 minutes.
##### Figure 5: An example from AQuA dataset.
-----
#### Question: Which of the following is not a leap year ? Options: (A) 700; (B) 800; (C) 900; (D) 1000; (E) 1200
[None CoT] (A)
[Simplified CoT] The century divisible by 400 is The final answer is (A)
[Detailed CoT] To determine which of the given years is not a leap year, we need to understand the rules for leap years. Here’s a step-by-step breakdown: Leap Year Rules: 1. A year is a leap year if it is divisible by 4. 2. However, if the year is also divisible by 100, it is not a leap year, unless: 3. The year is divisible by 400, in which case it is a leap year. We will apply these rules to each of the given years. Year 700 1. Check divisibility by 4: 700 ÷ 4 = 175 (no remainder, so divisible by 4). 2. Check divisibility by 100: 700 ÷ 100 = 7 (no remainder, so divisible by 100). 3. Check divisibility by 400: 700 ÷ 400 = 1.75 (remainder exists, so not divisible by 400). - Since 700 is divisible by 100 but not by 400, it is **not a leap year**. Year 800 1. Check divisibility by 4: 800 ÷ 4 = 200 (no remainder, so divisible by 4). 2. Check divisibility by 100: 800 ÷ 100 = 8 (no remainder, so divisible by 100). 3. Check divisibility by 400: 800 ÷ 400 = 2 (no remainder, so divisible by 400). - Since 800 is divisible by 400, it is **a leap year**. Year 900 1. Check divisibility by 4: 900 ÷ 4 = 225 (no remainder, so divisible by 4). 2. Check divisibility by 100: 900 ÷ 100 = 9 (no remainder, so divisible by 100). 3. Check divisibility by 400: 900 ÷ 400 = 2.25 (remainder exists, so not divisible by 400). - Since 900 is divisible by 100 but not by 400, it is **not a leap year**. Year 1000 1. Check divisibility by 4: 1000 ÷ 4 = 250 (no remainder, so divisible by 4). 2. Check divisibility by 100: 1000 ÷ 100 = 10 (no remainder, so divisible by 100). 3. Check divisibility by 400: 1000 ÷ 400 = 2.5 (remainder exists, so not divisible by 400). - Since 1000 is divisible by 100 but not by 400, it is **not a leap year**. Year 1200 1. Check divisibility by 4: 1200 ÷ 4 = 300 (no remainder, so divisible by 4). 2. Check divisibility by 100: 1200 ÷ 100 = 12 (no remainder, so divisible by 100). 3. Check divisibility by 400: 1200 ÷ 400 = 3 (no remainder, so divisible by 400). - Since 1200 is divisible by 400, it is **a leap year**. Conclusion From the analysis above, the years that are not leap years are: - 700 - 900 - 1000 Among the options provided, the incorrect assumption was identifying which specific year is being asked about in question format that expects a singular correct answer. However, strictly adhering to the problem formatting where typically one answer is sought:
- Since several options fit correctly and given the options stated as format; (A), (C), (D) are non-leap years. Proceeding usually with first in order and options mentioned logically (A)700 provided correct inference. Therefore: **Option (A) 700 is not a leap year.**
##### Figure 6: An example from CREAK dataset.
-----
#### Question: The ability to remember things is a sign of what? Options: - neuronal network - memories - intelligence - recall - learning about
[None CoT] intelligence
[Simplified CoT] Intelligence is an ability. The ability to remember things is a sign of intelligence. The final answer is intelligence
[Detailed CoT] To determine what the ability to remember things is most closely a sign of, it’s helpful to analyze each option: 1. **Neuronal Network**: - The neuronal network refers to the structure and connections between neurons in the brain. While crucial for processing and storing information, the term itself does not directly signify the ability to remember. Rather, it describes the brain’s architecture which facilitates functions including memory. 2. **Memories**: - Memories are the end result or the content of what is remembered. While they are directly linked to remembrance, the ability to have memories is an outcome of the memory process, not an indicator of the capacity to remember. 3. **Intelligence**: - Intelligence involves a broader range of cognitive abilities including problem- solving, reasoning, and understanding. Although memory is a component of intelligence, particularly in areas like working memory and long-term recall, intelligence encompasses far more than just the ability to remember. 4. **Recall**: - Recall is the process of retrieving information from memory. The ability to remember is a direct manifestation of recall, as it involves the capacity to retrieve stored information when needed. 5. **Learning About**: - Learning about refers to the process of acquiring new information or skills. While learning is related to memory, having the ability to remember indicates that the learning process has been successful, but it does not directly signify the capacity to remember. Given this analysis, the most accurate answer is: **Recall**. The ability to remember things is a sign of recall because recall specifically refers to the cognitive process of retrieving information from memory.
##### Figure 7: An example from ECQA dataset.
-----
#### Question: Linda was going to pass out homemade cookies to each of her 24 classmates on the last day of school. She wanted to give each student 10 cookies and wanted to make chocolate chip cookies and oatmeal raisin cookies. Each cookie recipe made exactly 4 dozen cookies. She was able to make 2 batches of chocolate chip cookies before running out of chocolate chips. She made 1 batch of oatmeal raisin cookies. How many more batches of cookies does Linda need to bake?
[None CoT] 2
[Simplified CoT] She has 24 classmates and wants to give each 10 cookies, so she needs to bake 24 * 10 = 240 cookies. Each batch makes 4 dozen cookies so that means each batch has 4 * 12 = 48 cookies. She baked 2 batches of chocolate chip and one batch of oatmeal raisin so 2 + 1 = 3 batches. We know that each batch has 48 cookies so 3 * 48 = 144 cookies have been baked. She needs to have 240 cookies and has baked 144 so 240 - 144 = 96 cookies are needed. Again, each batch makes 48 cookies and we need 96 more cookies so 96 / 48 = 2 more batches are needed. The final answer is 2
[Detailed CoT] To determine how many more batches of cookies Linda needs to bake, let’s proceed with the following steps: 1. **Calculate the total number of cookies Linda needs:** Linda wants to give each of her 24 classmates 10 cookies.
Total cookies needed = 24 classmates × 10 cookies per classmate = 240 cookies

2. **Determine the number of cookies Linda has already made:** - Each batch of cookies makes exactly 4 dozen cookies. - 1 dozen = 12 cookies, so 4 dozen = 4 × 12 = 48 cookies per batch.
- Linda made 2 batches of chocolate chip cookies:
Cookies from chocolate chip cookies = 2 × 48 = 96 cookies
- Linda made 1 batch of oatmeal raisin cookies:
Cookies from oatmeal raisin cookies = 1 × 48 = 48 cookies
- Therefore, the total number of cookies she has made is:
Total cookies made = 96 chocolate chip cookies + 48 oatmeal raisin cookies = 144 cookies
3. **Determine the additional number of cookies needed:** - Total cookies needed are 240. - Cookies already made are 144. - Thus, the additional cookies needed are:
Additional cookies needed = 240 − 144 = 96 cookies
4. **Calculate the number of additional batches needed:** - Each batch makes 48 cookies. - To find out how many more batches are needed to produce 96 cookies:
Number of additional batches = 96 cookies ÷ 48 cookies per batch = 2 batches
Therefore, Linda needs to bake **2 more batches** of cookies to have enough for her classmates.
##### Figure 8: An example from GSM8K dataset.
-----
#### Question: Choose the sentence that goes against common sense. Options: - Sentence A: "Christopher Columbus sailed to North America with three ships: the Toyota, the Pontiac, and the Mazda." - Sentence B: "Christopher Columbus sailed to North America with three ships: the Santa Maria, the Pinta, and the Nina."
[None CoT] Sentence A
[Simplified CoT] The Santa Maria, the Pinta, and the Nina are sailing ships, while the Toyota, the Pontiac, and the Mazda are cars. The final answer is Sentence A
[Detailed CoT] To determine which sentence goes against common sense, we’ll evaluate the accuracy and historical correctness of the information provided in each option. **Sentence A: "Christopher Columbus sailed to North America with three ships: the Toyota, the Pontiac, and the Mazda."** 1. **Names of the Ships**: The names mentioned are "Toyota," "Pontiac," and "Mazda." - **Analysis**: Toyota and Mazda are names of modern car manufacturing companies, and Pontiac is also associated with an automobile brand. These names have no historical connection to the period of Christopher Columbus or the names of ships used during his voyages. 2. **Historical Accuracy**: Christopher Columbus’s voyages took place in the 15th century (1492 specifically for his first voyage) when such modern terms and companies did not exist. **Sentence B: "Christopher Columbus sailed to North America with three ships: the Santa Maria, the Pinta, and the Nina."** 1. **Names of the Ships**: The names mentioned are "Santa Maria," "Pinta," and "Nina." - **Analysis**: These are historically accurate names of the three ships that Christopher Columbus used on his first voyage across the Atlantic Ocean in 1492. 2. **Historical Accuracy**: This information is consistent with the well-documented historical facts about Columbus’s voyage. Conclusion: **Sentence A** goes against common sense because it lists modern car brands ("Toyota," "Pontiac," and "Mazda") as the names of the ships used by Christopher Columbus in the 15th century, which is factually incorrect and anachronistic. **Reasoning Steps Summarized**: 1. Evaluate the historical context and time period relevant to Christo- pher Columbus’s voyages. 2. Identify the historically accurate ship names used by Columbus. 3. Compare the provided ship names in each sentence against the known historical facts. 4. Determine that modern car brand names in Sentence A are out of place and nonsensical in the given context.
##### Figure 9: An example from Sensemaking dataset.
-----
#### Question: Could Durian cause someone’s stomach to feel unwell?
[None CoT] yes
[Simplified CoT] Durian has a pungent odor that many people describe as being similar to feet and onions. Unpleasant smells can make people feel nauseous. The final answer is yes
[Detailed CoT]
Yes, durian can potentially cause someone’s stomach to feel unwell. Here are some detailed reasoning steps to understand why this might happen: 1. **Digestive Sensitivity** Different people have varying levels of sensitivity to foods. Durian, being a rich and complex fruit, might be harder to digest for some individuals. **Reasoning:** - Durian has a high fiber content which, while beneficial in moderate amounts, can cause gas, bloating, or uncomfortable bowel movements in those not accustomed to high-fiber foods. - High-fat content can also be an issue for individuals with sensitivities or specific digestive conditions, as it might slow down the digestive process. 2. **High Sulfur Content** Durian contains sulfur compounds, which contribute to its strong smell and unique flavor. **Reasoning:** - Sulfur compounds can be challenging for some people to digest and might lead to stomach discomfort, including bloating and gas. - People with sulfur sensitivities or related conditions may experience more pronounced gastrointestinal distress. 3. **Fermentation and Bloating** Durian contains sugars that can ferment in the gut. **Reasoning:** - The process of fermentation can produce gas, which might lead to bloating and discomfort. - If someone has an imbalance in their gut bacteria, the fermentation of these sugars could exacerbate feelings of unwellness in the stomach. 4. **Overconsumption** Eating too much of any food, including durian, can cause digestive discomfort. **Reasoning:** - When consumed in large amounts, the high fiber and fat content in durian can overwhelm the digestive system, leading to symptoms such as stomachache, bloating, and gas. - Overeating durian could also result in high caloric intake, leading to a feeling of fullness and discomfort. 5. **Preexisting Conditions** Individuals with certain preexisting health conditions might be more susceptible to stomach discomfort from durian. **Reasoning:** - Conditions like Irritable Bowel Syndrome (IBS), gastritis, or other gastrointestinal disorders can make the stomach more sensitive to rich, complex foods like durian. - Specific allergies or intolerances to compounds in durian can trigger adverse reactions leading to stomach distress. 6. **Interaction with Other Foods** Combining durian with other foods, particularly certain types that don’t mix well with rich, fatty, or high-fiber foods, might cause problems. **Reasoning:** - Consuming durian alongside alcohol, dairy, or other high-fat foods can stress the digestive system, leading to nausea or stomach pain. - Certain food combinations might inhibit proper digestion, causing discomfort. Conclusion While durian is a nutritious and delicious fruit for many people, its complex composition can lead to stomach discomfort for others. Factors like individual digestive sensitivity, high sulfur and fiber content, fermentation processes, overeating, preexisting conditions, and interaction with other foods all contribute to this potential issue. To minimize discomfort, it is advisable for individuals to consume durian in moderation and pay attention to their body’s response.
##### Figure 10: An example from StrategyQA dataset.
-----
#### Question: Solve for x > 0 in the following arithmetic sequence: 1^2, x^2, 3^2, . . ..
[Simplified CoT] The term x^2 is simply the average of 1^2 = 1 and 3^2 = 9, so x^2 = (1 + 9)/2 = 5. Because x > 0, x = √5.
[Detailed CoT] To solve for x in the arithmetic sequence 1^2, x^2, 3^2, . . . with x > 0, we will follow these detailed steps: 1. **Define the Terms of the Sequence**: The given sequence is 1^2, x^2, 3^2, . . .. These are the first three terms of the sequence, where 1^2 = 1, x^2, and 3^2 = 9. 2. **Arithmetic Sequence Property**: In an arithmetic sequence, the difference between consecutive terms is constant. Let this common difference be d. 3. **Set Up Equations for the Common Difference**: From the sequence, we have:
x^2 − 1^2 = d and 3^2 − x^2 = d
Thus, the two equations based on the common difference d are:
x^2 − 1 = d and 9 − x^2 = d
4. **Equating the Common Differences**: Since both expressions equal d, we can set them equal to each other: x^2 − 1 = 9 − x^2
5. **Solve the Equation**: Combine like terms to solve for x^2:
x^2 − 1 = 9 − x^2
Add x^2 to both sides: 2x^2 − 1 = 9
Add 1 to both sides: 2x^2 = 10
Divide by 2: x^2 = 5
Since x > 0, take the positive square root of both sides to find x:
x = √5
Therefore, the solution for x in the given arithmetic sequence is x = √5.
##### Figure 11: An example from MATH-Algebra dataset.
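The algebra in this example can also be verified symbolically; the following small sketch uses sympy and is purely illustrative (it is not part of the dataset or of any experimental pipeline).

```python
# Illustrative check: x^2 is the middle term of the arithmetic sequence 1^2, x^2, 3^2 with x > 0.
import sympy as sp

x = sp.symbols("x", positive=True)
solution = sp.solve(sp.Eq(x**2 - 1**2, 3**2 - x**2), x)
print(solution)  # [sqrt(5)]
```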
-----
#### Question: How many positive three-digit integers with each digit greater than 4 are divisible by 6?
[Simplified CoT] To be divisible by 6, a number must have its digits add up to a multiple of 3, and be even. Therefore, for the hundreds place, the possible digits are {5, 6, 7, 8, 9}, for the tens place the possible digits also are {5, 6, 7, 8, 9}, and for the ones digit, you can only choose from {6, 8}. First, let us choose 6 for the ones place. The other two digits must add up to a multiple of 3, making a total of 8 pairs that satisfy that condition:
{5, 7}, {6, 6}, {6, 9}, {7, 5}, {7, 8}, {8, 7}, {9, 6}, {9, 9}.
Next, let us choose 8 for the ones place. The other two digits must be congruent to 1 mod 3, making a total of 8 pairs that satisfy that condition:
{5, 5}, {5, 8}, {6, 7}, {7, 6}, {7, 9}, {8, 5}, {8, 8}, {9, 7}.
This makes a total of 16 numbers.
[Detailed CoT] To determine how many positive three-digit integers with each digit greater than 4 are divisible by 6, we need to consider the criteria that an integer must meet to be divisible by 6. Specifically, a number is divisible by 6 if it is divisible by both 2 and 3. Step-by-step Reasoning 1. **Divisibility by 2:** - A number is divisible by 2 if its last digit is even. - The possible digits for a three-digit number where each digit is greater than 4 are 5, 6, 7, 8, 9. - Among these digits, the even digits are 6 and 8. 2. **Divisibility by 3:** - A number is divisible by 3 if the sum of its digits is divisible by 3. - We need to ensure the sum of the digits of the number fits this criterion. Enumeration of Valid Numbers Given the constraints (digits greater than 4 and the number must meet divisibility criteria for both 2 and 3), we can construct our solution: **Step 1: Form digits** The three-digit number is represented as abc, where a, b, and c are its digits. **Step 2: Identify valid choices for c (last digit, must be even):** Potential values for c can be 6 or 8. Let’s assume c = 6: Now we find valid pairs (a, b) such that the sum a + b + 6 is divisible by 3.
- Valid digits for a and b are 5, 6, 7, 8, and 9. Let’s check all pairs (a, b): For c = 6:
- c = 6 ⇒ a + b ≡ 0 (mod 3) - Possible pairs (a, b) where a + b must be divisible by 3: - (5, 7), (5, 9) - (6, 6), (6, 9) - (7, 5), (7, 8) - (8, 7) - (9, 5), (9, 6) For c = 8: - c = 8 ⇒ a + b ≡ 1 (mod 3) - Possible pairs (a, b) where a + b + 8 results in sum modulo 3 as:
- (a, b) needs to sum up to 1 (mod 3), this means: - (5, 5) - (5, 8) - (6, 7) - (7, 6), (7, 9) - (8, 5), (8, 8) - (9, 9) So combining both: - Pairs with c = 6: 10 pairs. - Pairs with c = 8: 8 pairs. **Step 3: Combine and count the numbers of valid solutions:** Hence, total such numbers = Total Pairs = 10 + 8 = 18. Thus, there are 18 possible solutions that satisfy the conditions: three-digit positive integers, each digit greater than 4, and divisible by 6.
##### Figure 12: An example from MATH-Counting dataset.
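The count in this example can be reproduced by brute force; the snippet below is a small illustrative enumeration (ours, not taken from the dataset) over all three-digit numbers whose digits are all greater than 4.

```python
# Illustrative brute-force count for the MATH-Counting example.
count = sum(
    1
    for a in range(5, 10)   # hundreds digit
    for b in range(5, 10)   # tens digit
    for c in range(5, 10)   # ones digit
    if (100 * a + 10 * b + c) % 6 == 0
)
print(count)  # 16, the total given in the simplified chain of thought
```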
-----
###### Question: Square ABCD has side length 1 unit. Points E and F are on sides AB and CB, respectively, with AE = CF. When the square is folded along the lines DE and DF, sides AD and CD coincide and lie on diagonal BD. The length of segment AE can be expressed in the form √k − m units. What is the integer value of k + m?
[Simplified CoT] We start by drawing a diagram. When the paper is folded, sides AD and CD coincide on the longer dashed line, and points A and C meet at G, as you can see below. [asy] draw((0,0)--(1,0)--(1,1)--(0,1)--cycle); draw((0,0)--(1,.4)); draw((0,0)--(.4,1)); draw((1,.4)--(.4,1),dashed); draw((0,0)--(.7,.7),dashed); label("A",(0,1), NW); label("B",(1,1), NE); label("C",(1,0), SE); label("D",(0,0), SW); label("F",(1,.4), E); label("E",(.4,1), N); label("G",(.7,.7), NE); [/asy] Now, we assign variables. We are looking for the length of AE, so let AE = x. Then, BE = 1 − x. Because of the symmetry of the square and the fold, everything to the left of line BD is a mirror image of everything to the right of BD. Thus, △BEF is an isosceles right triangle (45-45-90), so EF = √2·EB = √2(1 − x). Also, △EGB and △FGB are congruent 45-45-90 triangles, so GB = EB/√2 = (1 − x)/√2.
Also, notice that because the way the paper is folded (its original position versus its final position), we have more congruent triangles, △AED ≅ △GED. This means that AD = GD = 1. Lastly, notice that since G is on BD, we have BD = BG + GD. BD is a diagonal of the square, so it has length √2,
GD = 1, and GB = (1 − x)/√2. Thus, our equation becomes
√2 = 1 + (1 − x)/√2.
Multiplying both sides by √2 yields 2 = √2 + 1 − x; solving for x yields x = √2 − 1. Thus, AE = √2 − 1 = √k − m, and
we see that k + m = 2 + 1 = 3.
[Detailed CoT] First, we label the points on square ABCD as follows: with A(0, 1), B(0, 0), C(1, 0), and D(1, 1). Let E be a point on AB with coordinate (0, y), where y = AE, and let F be a point on CB with coordinate (x, 0), where x = CF . Since AE = CF, it follows that y = x. When the square is folded along DE and DF, the folded sides AD and CD align and coincide along diagonal BD. We know AD will lie on BD, so point A will overlap somewhere on line BD. The diagonal BD of the square can be parameterized as the line from B(0, 0) to D(1, 1). The equation of line BD is:
y = x
Considering points E on AB, having coordinates (0, y), and points F on CB, having coordinates (x, 0): When folding AD onto BD, point A(0, 1) will fold along some line segment DE. To understand where A falls upon folding onto BD, we calculate the perpendicular distance from A to BD. The distance d from a point (x1, y1) to a line ax + by + c = 0 is given by:
d = |ax1 + by1 + c| / √(a^2 + b^2)
In our case, the equation of the line BD is y = x and can also be written as:
x − y = 0
Thus, we have a = 1, b = −1, and c = 0. The distance from point A(0, 1) to line x − y = 0 is:
d = |1 · 0 + (−1) · 1 + 0| / √(1^2 + (−1)^2) = |0 − 1| / √2 = 1/√2 = √2/2
Since E lies directly on the y-axis at y = AE, and E’s distance should match the perpendicular distance from A to BD, we solve for y = AE:
y = x = √2/2
Thus, AE = x = y = √2/2 = √2 · 1/2 = √2 − 1. However, the problem requires the expression in the form √k − m:
Setting √k − m = √2 − 1, we get k = 2 and m = 1. Thus:
k + m = 2 + 1 = 3
The integer value of k + m is: 3
##### Figure 13: An example from MATH-Geometry dataset.
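The fold geometry above can be sanity-checked numerically; the short script below is an illustrative check (ours) of the equation √2 = 1 + (1 − x)/√2 derived in the simplified chain of thought.

```python
# Illustrative numeric check for the MATH-Geometry folding example.
import math

sqrt2 = math.sqrt(2)
x = 1 - (sqrt2 - 1) * sqrt2      # solve sqrt(2) = 1 + (1 - x)/sqrt(2) for x
print(x, sqrt2 - 1)              # both are ~0.4142, i.e. AE = sqrt(2) - 1 and k + m = 3
```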
-----
###### Question: "Tell me about Charles, Prince of Wales."
[Length 100] Charles was born at 21:14 (GMT) on 14 November 1948, during the reign of his maternal grandfather, King George VI, as the first child of Princess Elizabeth, Duchess of Edinburgh (later Queen Elizabeth II), and Philip, Duke of Edinburgh. He was delivered by Caesarean section at Buckingham Palace. His parents had three more children, Anne (born 1950), Andrew (born 1960) and Edward (born 1964).
[Length 500] Charles was born at 21:14 (GMT) on 14 November 1948, during the reign of his maternal grandfather, King George VI, as the first child of Princess Elizabeth, Duchess of Edinburgh (later Queen Elizabeth II), and Philip, Duke of Edinburgh. He was delivered by Caesarean section at Buckingham Palace. His parents had three more children, Anne (born 1950), Andrew (born 1960) and Edward (born 1964). He was christened Charles Philip Arthur George on 15 December 1948 in the Music Room of Buckingham Palace by the Archbishop of Canterbury, Geoffrey Fisher. George VI died on 6 February 1952 and Charles’s mother acceded to the throne as Elizabeth II; Charles immediately became the heir apparent. Under a charter of Edward III in 1337, and as the monarch’s eldest son, he automatically assumed the traditional titles of Duke of Cornwall and, in the Scottish peerage, the titles Duke of Rothesay, Earl of Carrick, Baron of Renfrew, Lord of the Isles, and Prince and Great Steward of Scotland. The following year, Charles attended his mother’s coronation at Westminster Abbey. When Charles turned five, Catherine Peebles was appointed as governess to oversee his education at Buckingham Palace. He then commenced classes at Hill House School in west London in November 1956. Charles was the first heir apparent to attend school, rather than be educated by a private tutor. He did not receive preferential treatment from the school’s founder and headmaster, Stuart Townend, who advised the Queen to have Charles train in football, because the boys were never deferential to anyone on the football field. Charles subsequently attended two of his father’s former schools: Cheam School in Hampshire, from 1958, followed by Gordonstoun, in the north-east of Scotland, beginning classes there in April 1962. He later became patron of Gordonstoun in May 2024. In his 1994 authorised biography by Jonathan Dimbleby, Charles’s parents were described as physically and emotionally distant and Philip was blamed for his disregard of Charles’s sensitive nature, including forcing him to attend Gordonstoun, where he was bullied.
[Length 1000] Charles was born at 21:14 (GMT) on 14 November 1948, during the reign of his maternal grandfather, King George VI, as the first child of Princess Elizabeth, Duchess of Edinburgh (later Queen Elizabeth II), and Philip, Duke of Edinburgh. He was delivered by Caesarean section at Buckingham Palace. His parents had three more children, Anne (born 1950), Andrew (born 1960) and Edward (born 1964). He was christened Charles Philip Arthur George on 15 December 1948 in the Music Room of Buckingham Palace by the Archbishop of Canterbury, Geoffrey Fisher. George VI died on 6 February 1952 and Charles’s mother acceded to the throne as Elizabeth II; Charles immediately became the heir apparent. Under a charter of Edward III in 1337, and as the monarch’s eldest son, he automatically assumed the traditional titles of Duke of Cornwall and, in the Scottish peerage, the titles Duke of Rothesay, Earl of Carrick, Baron of Renfrew, Lord of the Isles, and Prince and Great Steward of Scotland. The following year, Charles attended his mother’s coronation at Westminster Abbey. When Charles turned five, Catherine Peebles was appointed as governess to oversee his education at Buckingham Palace. He then commenced classes at Hill House School in west London in November 1956. Charles was the first heir apparent to attend school, rather than be educated by a private tutor. He did not receive preferential treatment from the school’s founder and headmaster, Stuart Townend, who advised the Queen to have Charles train in football, because the boys were never deferential to anyone on the football field. Charles subsequently attended two of his father’s former schools: Cheam School in Hampshire, from 1958, followed by Gordonstoun, in the north-east of Scotland, beginning classes there in April 1962. He later became patron of Gordonstoun in May 2024. In his 1994 authorised biography by Jonathan Dimbleby, Charles’s parents were described as physically and emotionally distant and Philip was blamed for his disregard of Charles’s sensitive nature, including forcing him to attend Gordonstoun, where he was bullied. Though Charles reportedly described Gordonstoun, noted for its especially rigorous curriculum, as "Colditz in kilts", he later praised the school, stating it had taught him "a great deal about myself and my own abilities and disabilities". He said in a 1975 interview he was "glad" he had attended Gordonstoun and that the "toughness of the place" was "much exaggerated". In 1966 Charles spent two terms at the Timbertop campus of Geelong Grammar School in Victoria, Australia, during which time he visited Papua New Guinea on a school trip with his history tutor, Michael Collins Persse. In 1973 Charles described his time at Timbertop as the most enjoyable part of his whole education. Upon his return to Gordonstoun, he emulated his father in becoming head boy, and left in 1967 with six GCE O-levels and two A-levels in history and French, at grades B and C respectively. On his education, Charles later remarked, "I didn’t enjoy school as much as I might have; but, that was only because I’m happier at home than anywhere else". Charles broke royal tradition when he proceeded straight to university after his A-levels, rather than joining the British Armed Forces. In October 1967, he was admitted to Trinity College, Cambridge, where he studied archaeology and anthropology for the first part of the Tripos and then switched to history for the second part. 
During his second year, he attended the University College of Wales in Aberystwyth, studying Welsh history and the Welsh language for one term. Charles became the first British heir apparent to earn a university degree, graduating in June 1970 from the University of Cambridge with a 2:2 Bachelor of Arts (BA) degree. Following standard practice, in August 1975, his Bachelor of Arts was promoted to a Master of Arts (MA Cantab) degree. Charles served in the Royal Air Force (RAF) and the Royal Navy.
##### Figure 14: Three examples from Wiki Knowledge dataset.
-----
#### Question: "Tell me about Eucosmophora atlantis."
[Unpopular] Eucosmophora atlantis is a moth of the family Gracillariidae. It is known from Costa Rica. The length of the forewings is 3.6–4.5 mm for males and 4–4.8 mm for females. The larvae probably feed on a Sapotaceae species and probably mine the leaves of their host plant.
##### Figure 15: An example from Wiki Knowledge dataset.
-----
### B Results on gemma-2-2b
#### B.1 Pre-trained LLM on Correct Responses
B.1.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 16, Table 4), MATH-Counting (Figure 17, Table 5), MATH-Geometry (Figure 18, Table 6).
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 0.61 | 0.53 | 0.49 | 0.53 |
| | | Detailed | 0.45 | 0.37 | 0.41 | 0.39 |
| | sK | Simplified | 0.51 | 0.45 | 0.48 | 0.48 |
| | | Detailed | 0.38 | 0.35 | 0.41 | 0.38 |
| | sV | Simplified | 2.11 | 2.05 | 0.70 | 1.82 |
| | | Detailed | 1.25 | 1.39 | 0.59 | 1.20 |
| | sO | Simplified | 1.91 | 1.48 | 0.80 | 1.42 |
| | | Detailed | 1.01 | 1.03 | 0.45 | 0.87 |
| | rQ | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |
| | rK | Simplified | 0.03 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.03 | 0.02 | 0.03 | 0.02 |
| | rV | Simplified | 0.03 | 0.04 | 0.05 | 0.03 |
| | | Detailed | 0.02 | 0.03 | 0.03 | 0.03 |
| | rO | Simplified | 0.01 | 0.02 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |
##### Table 4: Statistical results for MATH-Algebra using gemma-2-2b on correct responses.
Figure 16: Visualization for MATH-Algebra using gemma-2-2b on correct responses.
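For readers who want to reproduce the aggregation used in these tables, the sketch below shows one plausible way the per-region MAD entries (Early, Middle, Last, All) could be computed from a pair of aligned layer-wise curves. It is a minimal illustration under our own assumptions; the exact curve definitions and region boundaries follow the main text, not this snippet.

```python
# Minimal sketch (assumptions ours): MAD between two aligned layer-wise curves,
# reported over early / middle / last thirds of the layers and over all layers.
import numpy as np

def region_mad(curve_a: np.ndarray, curve_b: np.ndarray) -> dict:
    diff = np.abs(curve_a - curve_b)
    early, middle, last = np.array_split(diff, 3)   # three consecutive layer ranges
    return {
        "Early": float(early.mean()),
        "Middle": float(middle.mean()),
        "Last": float(last.mean()),
        "All": float(diff.mean()),
    }

# Example usage with dummy curves of 26 layers:
# print(region_mad(np.random.rand(26), np.random.rand(26)))
```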
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 19, Table 7), GSM8K (Figure 20, Table 8), StrategyQA (Figure 21, Table 9), ECQA (Figure 22, Table 10), CREAK (Figure 23, Table 11), Sensemaking (Figure 24, Table 12).
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 0.72 | 0.71 | 0.54 | 0.65 |
| | | Detailed | 0.56 | 0.52 | 0.43 | 0.50 |
| | sK | Simplified | 0.58 | 0.61 | 0.52 | 0.59 |
| | | Detailed | 0.46 | 0.47 | 0.47 | 0.47 |
| | sV | Simplified | 2.40 | 2.10 | 0.91 | 1.97 |
| | | Detailed | 1.57 | 1.60 | 0.84 | 1.45 |
| | sO | Simplified | 2.20 | 1.54 | 0.80 | 1.53 |
| | | Detailed | 1.27 | 1.23 | 0.48 | 1.04 |
| | rQ | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |
| | rK | Simplified | 0.03 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.03 | 0.02 | 0.03 | 0.02 |
| | rV | Simplified | 0.03 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.03 | 0.03 | 0.03 | 0.03 |
| | rO | Simplified | 0.01 | 0.03 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.02 |
##### Table 5: Statistical results for MATH-Counting using gemma-2-2b on correct responses.
Figure 17: Visualization for MATH-Counting using gemma-2-2b on correct responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 0.70 | 0.64 | 0.49 | 0.62 |
| | | Detailed | 0.66 | 0.54 | 0.45 | 0.55 |
| | sK | Simplified | 0.59 | 0.54 | 0.47 | 0.54 |
| | | Detailed | 0.55 | 0.54 | 0.49 | 0.53 |
| | sV | Simplified | 2.12 | 1.92 | 0.51 | 1.71 |
| | | Detailed | 1.76 | 1.88 | 0.89 | 1.65 |
| | sO | Simplified | 1.87 | 1.44 | 0.68 | 1.36 |
| | | Detailed | 1.35 | 1.34 | 0.51 | 1.13 |
| | rQ | Simplified | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.03 | 0.02 |
| | rK | Simplified | 0.03 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.03 | 0.02 | 0.03 | 0.03 |
| | rV | Simplified | 0.03 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.03 | 0.03 | 0.03 |
| | rO | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |
##### Table 6: Statistical results for MATH-Geometry using gemma-2-2b on correct responses.
-----
##### Figure 18: Visualization for MATH-Geometry using gemma-2-2b on correct responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| AQuA | sQ | None | 3.29 | 2.07 | 3.44 | 2.77 |
| | | Simplified | 1.65 | 1.06 | 0.90 | 1.17 |
| | | Detailed | 0.53 | 0.46 | 0.46 | 0.46 |
| | sK | None | 4.59 | 2.75 | 3.43 | 3.38 |
| | | Simplified | 1.77 | 1.11 | 0.88 | 1.22 |
| | | Detailed | 0.47 | 0.50 | 0.50 | 0.49 |
| | sV | None | 9.37 | 9.72 | 2.28 | 8.31 |
| | | Simplified | 4.19 | 3.19 | 1.00 | 3.07 |
| | | Detailed | 1.46 | 1.69 | 0.79 | 1.45 |
| | sO | None | 10.04 | 4.36 | 1.91 | 5.17 |
| | | Simplified | 3.87 | 2.19 | 0.94 | 2.26 |
| | | Detailed | 1.23 | 1.25 | 0.55 | 1.06 |
| | rQ | None | 0.03 | 0.09 | 0.10 | 0.08 |
| | | Simplified | 0.02 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |
| | rK | None | 0.04 | 0.05 | 0.09 | 0.05 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.03 | 0.02 | 0.03 | 0.02 |
| | rV | None | 0.04 | 0.07 | 0.03 | 0.05 |
| | | Simplified | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Detailed | 0.03 | 0.03 | 0.03 | 0.03 |
| | rO | None | 0.04 | 0.05 | 0.07 | 0.05 |
| | | Simplified | 0.01 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |

##### Table 7: Statistical results for AQuA using gemma-2-2b on correct responses.
#### B.1.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 25 and Table 13.
#### B.2 Pre-trained LLM on Irrelevant Responses
B.2.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 26, Table 14), MATH-Counting (Figure 27, Table 15), MATH-Geometry (Figure 28, Table 16). The visualizations and statistical results on other
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 2.55 | 1.15 | 3.91 | 1.97 |
| | | Simplified | 0.93 | 0.86 | 0.76 | 0.84 |
| | | Detailed | 0.48 | 0.47 | 0.45 | 0.46 |
| | sK | None | 2.37 | 2.30 | 4.75 | 2.65 |
| | | Simplified | 0.81 | 0.92 | 0.69 | 0.86 |
| | | Detailed | 0.42 | 0.55 | 0.49 | 0.50 |
| | sV | None | 9.17 | 9.09 | 3.56 | 8.21 |
| | | Simplified | 2.48 | 2.15 | 0.87 | 1.98 |
| | | Detailed | 1.40 | 1.62 | 0.82 | 1.39 |
| | sO | None | 9.14 | 3.86 | 2.05 | 4.61 |
| | | Simplified | 2.13 | 1.54 | 0.57 | 1.43 |
| | | Detailed | 1.20 | 1.19 | 0.45 | 1.00 |
| | rQ | None | 0.03 | 0.03 | 0.08 | 0.05 |
| | | Simplified | 0.02 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |
| | rK | None | 0.04 | 0.04 | 0.07 | 0.04 |
| | | Simplified | 0.04 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.02 |
| | rV | None | 0.07 | 0.05 | 0.04 | 0.05 |
| | | Simplified | 0.05 | 0.04 | 0.04 | 0.04 |
| | | Detailed | 0.04 | 0.04 | 0.04 | 0.04 |
| | rO | None | 0.05 | 0.04 | 0.07 | 0.05 |
| | | Simplified | 0.02 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.03 | 0.03 | 0.02 |

##### Table 8: Statistical results for GSM8K using gemma-2-2b on correct responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| StrategyQA | sQ | None | 10.57 | 6.09 | 4.57 | 7.07 |
| | | Simplified | 1.35 | 0.62 | 1.26 | 0.95 |
| | | Detailed | 0.61 | 0.66 | 0.53 | 0.60 |
| | sK | None | 10.24 | 9.84 | 4.23 | 9.08 |
| | | Simplified | 0.99 | 1.00 | 1.11 | 0.98 |
| | | Detailed | 0.47 | 0.63 | 0.47 | 0.53 |
| | sV | None | 34.59 | 42.47 | 20.23 | 36.18 |
| | | Simplified | 3.80 | 3.19 | 1.85 | 3.12 |
| | | Detailed | 2.06 | 2.02 | 0.90 | 1.80 |
| | sO | None | 27.75 | 12.40 | 4.63 | 13.80 |
| | | Simplified | 3.25 | 1.72 | 1.19 | 1.95 |
| | | Detailed | 1.70 | 1.63 | 0.84 | 1.44 |
| | rQ | None | 0.03 | 0.06 | 0.08 | 0.06 |
| | | Simplified | 0.03 | 0.03 | 0.03 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.02 |
| | rK | None | 0.02 | 0.03 | 0.04 | 0.03 |
| | | Simplified | 0.04 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.02 |
| | rV | None | 0.07 | 0.09 | 0.04 | 0.07 |
| | | Simplified | 0.04 | 0.06 | 0.05 | 0.05 |
| | | Detailed | 0.02 | 0.04 | 0.04 | 0.03 |
| | rO | None | 0.05 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.02 | 0.04 | 0.07 | 0.04 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |

##### Table 9: Statistical results for StrategyQA using gemma-2-2b on correct responses.
reasoning tasks: AQuA (Figure 29, Table 17), StrategyQA (Figure 30, Table 18), ECQA (Figure 31, Table 19), CREAK (Figure 32, Table 20), Sensemaking (Figure 33, Table 21).
B.2.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 34 and Table 22.
-----
##### Figure 19: Visualization for AQuA using gemma-2-2b on correct responses.
Figure 20: Visualization for GSM8K using gemma-2-2b on correct responses.
-----
##### Figure 21: Visualization for StrategyQA using gemma-2-2b on correct responses.
Figure 22: Visualization for ECQA using gemma-2-2b on correct responses.
-----
##### Figure 23: Visualization for CREAK using gemma-2-2b on correct responses.
Figure 24: Visualization for Sensemaking using gemma-2-2b on correct responses.
-----
##### Figure 25: Visualization for Wiki tasks using gemma-2-2b on correct responses.
-----
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 9.01 | 4.68 | 6.31 | 6.48 |
| | | Simplified | 2.05 | 1.53 | 2.12 | 1.74 |
| | | Detailed | 0.69 | 0.69 | 0.53 | 0.64 |
| | sK | None | 7.59 | 8.43 | 10.70 | 8.79 |
| | | Simplified | 1.55 | 2.23 | 2.03 | 1.93 |
| | | Detailed | 0.54 | 0.73 | 0.62 | 0.63 |
| | sV | None | 23.08 | 25.63 | 12.47 | 23.21 |
| | | Simplified | 6.63 | 5.80 | 3.85 | 5.84 |
| | | Detailed | 2.05 | 2.02 | 1.21 | 1.86 |
| | sO | None | 24.96 | 9.49 | 4.32 | 11.66 |
| | | Simplified | 6.47 | 2.87 | 1.89 | 3.50 |
| | | Detailed | 1.73 | 1.74 | 0.67 | 1.47 |
| | rQ | None | 0.02 | 0.06 | 0.07 | 0.06 |
| | | Simplified | 0.03 | 0.03 | 0.05 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.02 |
| | rK | None | 0.01 | 0.04 | 0.09 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.04 | 0.02 | 0.03 | 0.02 |
| | rV | None | 0.05 | 0.07 | 0.04 | 0.06 |
| | | Simplified | 0.05 | 0.05 | 0.03 | 0.04 |
| | | Detailed | 0.03 | 0.04 | 0.04 | 0.03 |
| | rO | None | 0.04 | 0.05 | 0.10 | 0.05 |
| | | Simplified | 0.03 | 0.04 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.03 | 0.02 | 0.02 |

##### Table 12: Statistical results for Sensemaking using gemma-2-2b on correct responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 17.07 | 8.82 | 7.27 | 10.75 |
| | | Simplified | 1.90 | 1.34 | 1.48 | 1.47 |
| | | Detailed | 0.72 | 0.77 | 0.56 | 0.69 |
| | sK | None | 16.58 | 13.87 | 14.35 | 14.48 |
| | | Simplified | 1.52 | 2.08 | 2.22 | 1.82 |
| | | Detailed | 0.54 | 0.77 | 0.63 | 0.65 |
| | sV | None | 46.14 | 51.57 | 24.02 | 45.96 |
| | | Simplified | 6.01 | 5.32 | 3.23 | 5.19 |
| | | Detailed | 2.51 | 2.32 | 1.52 | 2.21 |
| | sO | None | 42.31 | 18.79 | 3.24 | 20.68 |
| | | Simplified | 5.41 | 2.82 | 1.33 | 3.04 |
| | | Detailed | 2.08 | 1.92 | 0.75 | 1.68 |
| | rQ | None | 0.02 | 0.07 | 0.10 | 0.06 |
| | | Simplified | 0.02 | 0.04 | 0.03 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |
| | rK | None | 0.03 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.02 |
| | rV | None | 0.05 | 0.07 | 0.06 | 0.06 |
| | | Simplified | 0.05 | 0.06 | 0.04 | 0.05 |
| | | Detailed | 0.03 | 0.04 | 0.03 | 0.03 |
| | rO | None | 0.03 | 0.04 | 0.09 | 0.05 |
| | | Simplified | 0.02 | 0.04 | 0.05 | 0.04 |
| | | Detailed | 0.01 | 0.03 | 0.02 | 0.02 |

##### Table 10: Statistical results for ECQA using gemma-2-2b on correct responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| CREAK | sQ | None | 15.91 | 7.63 | 9.89 | 10.00 |
| | | Simplified | 1.89 | 1.12 | 1.59 | 1.38 |
| | | Detailed | 0.75 | 0.71 | 0.56 | 0.67 |
| | sK | None | 18.49 | 11.02 | 12.53 | 13.18 |
| | | Simplified | 1.50 | 1.76 | 1.67 | 1.58 |
| | | Detailed | 0.60 | 0.73 | 0.56 | 0.63 |
| | sV | None | 41.16 | 41.56 | 15.30 | 37.29 |
| | | Simplified | 5.76 | 4.92 | 3.16 | 4.93 |
| | | Detailed | 2.45 | 2.32 | 1.09 | 2.10 |
| | sO | None | 40.93 | 15.01 | 4.05 | 18.38 |
| | | Simplified | 5.45 | 2.46 | 1.60 | 2.97 |
| | | Detailed | 2.03 | 1.81 | 0.91 | 1.64 |
| | rQ | None | 0.02 | 0.08 | 0.11 | 0.07 |
| | | Simplified | 0.02 | 0.03 | 0.05 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.02 |
| | rK | None | 0.03 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.04 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.02 |
| | rV | None | 0.05 | 0.07 | 0.03 | 0.05 |
| | | Simplified | 0.04 | 0.06 | 0.03 | 0.05 |
| | | Detailed | 0.03 | 0.04 | 0.04 | 0.04 |
| | rO | None | 0.04 | 0.05 | 0.10 | 0.06 |
| | | Simplified | 0.02 | 0.04 | 0.06 | 0.04 |
| | | Detailed | 0.01 | 0.03 | 0.03 | 0.02 |

##### Table 11: Statistical results for CREAK using gemma-2-2b on correct responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Wiki | sQ | Len 100 | 1.05 | 0.57 | 1.09 | 0.81 |
| | | Len 500 | 0.71 | 0.49 | 0.69 | 0.57 |
| | | Len 1000 | 0.65 | 0.48 | 0.54 | 0.52 |
| | | Unpopular | 1.84 | 0.86 | 1.20 | 1.20 |
| | sK | Len 100 | 0.76 | 0.77 | 0.94 | 0.77 |
| | | Len 500 | 0.53 | 0.46 | 0.56 | 0.49 |
| | | Len 1000 | 0.50 | 0.37 | 0.46 | 0.42 |
| | | Unpopular | 1.60 | 1.17 | 0.95 | 1.21 |
| | sV | Len 100 | 3.02 | 2.84 | 1.42 | 2.58 |
| | | Len 500 | 2.38 | 2.07 | 1.03 | 1.94 |
| | | Len 1000 | 2.05 | 1.73 | 0.89 | 1.65 |
| | | Unpopular | 3.96 | 3.47 | 1.89 | 3.27 |
| | sO | Len 100 | 2.63 | 1.76 | 1.87 | 1.94 |
| | | Len 500 | 1.97 | 1.50 | 1.45 | 1.57 |
| | | Len 1000 | 1.76 | 1.32 | 1.23 | 1.38 |
| | | Unpopular | 3.26 | 1.88 | 1.96 | 2.15 |
| | rQ | Len 100 | 0.03 | 0.02 | 0.02 | 0.02 |
| | | Len 500 | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Len 1000 | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Unpopular | 0.03 | 0.01 | 0.03 | 0.02 |
| | rK | Len 100 | 0.04 | 0.02 | 0.02 | 0.02 |
| | | Len 500 | 0.03 | 0.01 | 0.01 | 0.02 |
| | | Len 1000 | 0.03 | 0.01 | 0.01 | 0.02 |
| | | Unpopular | 0.03 | 0.02 | 0.03 | 0.02 |
| | rV | Len 100 | 0.03 | 0.05 | 0.08 | 0.05 |
| | | Len 500 | 0.02 | 0.04 | 0.05 | 0.03 |
| | | Len 1000 | 0.02 | 0.03 | 0.04 | 0.03 |
| | | Unpopular | 0.04 | 0.05 | 0.09 | 0.05 |
| | rO | Len 100 | 0.02 | 0.04 | 0.08 | 0.04 |
| | | Len 500 | 0.01 | 0.02 | 0.04 | 0.02 |
| | | Len 1000 | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Unpopular | 0.02 | 0.04 | 0.08 | 0.04 |
##### Table 13: Statistical results for Wiki using gemma-2-2b on correct responses.
-----
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 0.93 | 0.69 | 0.63 | 0.71 |
| | | Detailed | 0.68 | 0.42 | 0.53 | 0.50 |
| | sK | Simplified | 0.77 | 0.54 | 0.67 | 0.60 |
| | | Detailed | 0.56 | 0.38 | 0.56 | 0.46 |
| | sV | Simplified | 3.40 | 2.78 | 0.92 | 2.63 |
| | | Detailed | 2.18 | 1.84 | 0.76 | 1.75 |
| | sO | Simplified | 3.09 | 1.70 | 1.33 | 1.97 |
| | | Detailed | 1.74 | 1.25 | 0.84 | 1.26 |
| | rQ | Simplified | 0.03 | 0.03 | 0.02 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.02 |
| | rK | Simplified | 0.03 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.04 | 0.02 | 0.03 | 0.02 |
| | rV | Simplified | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Detailed | 0.03 | 0.03 | 0.04 | 0.03 |
| | rO | Simplified | 0.01 | 0.02 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.03 | 0.02 |
##### Table 14: Statistical results for MATH-Algebra using gemma-2-2b on irrelevant responses.
Figure 26: Visualization for MATH-Algebra using gemma-2-2b on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 0.96 | 0.77 | 0.78 | 0.80 |
| | | Detailed | 0.90 | 0.55 | 0.53 | 0.63 |
| | sK | Simplified | 0.74 | 0.63 | 0.58 | 0.63 |
| | | Detailed | 0.73 | 0.45 | 0.49 | 0.54 |
| | sV | Simplified | 3.60 | 2.74 | 1.05 | 2.67 |
| | | Detailed | 2.73 | 1.99 | 0.98 | 2.01 |
| | sO | Simplified | 3.25 | 1.75 | 1.34 | 2.02 |
| | | Detailed | 2.11 | 1.40 | 0.89 | 1.43 |
| | rQ | Simplified | 0.03 | 0.03 | 0.02 | 0.03 |
| | | Detailed | 0.03 | 0.03 | 0.02 | 0.03 |
| | rK | Simplified | 0.04 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.02 | 0.03 | 0.03 |
| | rV | Simplified | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Detailed | 0.03 | 0.03 | 0.04 | 0.03 |
| | rO | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.02 |
##### Table 15: Statistical results for MATH-Counting using gemma-2-2b on irrelevant responses.
##### Figure 27: Visualization for MATH-Counting using gemma-2-2b on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 0.88 | 0.70 | 0.67 | 0.73 |
| | | Detailed | 0.90 | 0.39 | 0.75 | 0.61 |
| | sK | Simplified | 0.74 | 0.50 | 0.55 | 0.57 |
| | | Detailed | 0.76 | 0.52 | 0.75 | 0.64 |
| | sV | Simplified | 2.95 | 2.44 | 0.55 | 2.23 |
| | | Detailed | 2.78 | 2.32 | 0.84 | 2.14 |
| | sO | Simplified | 2.61 | 1.59 | 1.03 | 1.71 |
| | | Detailed | 2.10 | 1.54 | 1.08 | 1.53 |
| | rQ | Simplified | 0.03 | 0.03 | 0.02 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.04 | 0.03 |
| | rK | Simplified | 0.03 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.04 | 0.02 | 0.04 | 0.03 |
| | rV | Simplified | 0.03 | 0.04 | 0.05 | 0.03 |
| | | Detailed | 0.03 | 0.03 | 0.04 | 0.03 |
| | rO | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |
##### Table 16: Statistical results for MATH-Geometry using gemma-2-2b on irrelevant responses.
-----
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| StrategyQA | sQ | None | 111371.56 | 29719.68 | 9.10 | 48455.22 |
| | | Simplified | 2.75 | 1.19 | 1.52 | 1.67 |
| | | Detailed | 1.54 | 0.70 | 0.85 | 0.91 |
| | sK | None | 131198.87 | 34747.63 | 13.72 | 53897.70 |
| | | Simplified | 2.24 | 1.65 | 1.48 | 1.72 |
| | | Detailed | 1.18 | 0.83 | 0.68 | 0.84 |
| | sV | None | 683220.41 | 208522.69 | 69.15 | 289494.88 |
| | | Simplified | 6.23 | 4.76 | 5.15 | 5.23 |
| | | Detailed | 4.16 | 2.68 | 1.97 | 2.95 |
| | sO | None | 311685.32 | 92564.91 | 17.60 | 124625.79 |
| | | Simplified | 5.32 | 2.29 | 2.00 | 2.98 |
| | | Detailed | 3.14 | 1.84 | 1.54 | 2.07 |
| | rQ | None | 0.02 | 0.06 | 0.08 | 0.06 |
| | | Simplified | 0.03 | 0.03 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.02 | 0.03 | 0.03 |
| | rK | None | 0.02 | 0.03 | 0.05 | 0.03 |
| | | Simplified | 0.04 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.05 | 0.02 | 0.03 | 0.03 |
| | rV | None | 0.07 | 0.09 | 0.04 | 0.07 |
| | | Simplified | 0.04 | 0.06 | 0.06 | 0.05 |
| | | Detailed | 0.03 | 0.04 | 0.03 | 0.03 |
| | rO | None | 0.05 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.02 | 0.03 | 0.06 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |

##### Table 18: Statistical results for StrategyQA using gemma-2-2b on irrelevant responses.
##### Figure 28: Visualization for MATH-Geometry using gemma-2-2b on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| AQuA | sQ | None | 3.41 | 2.11 | 3.47 | 2.83 |
| | | Simplified | 1.65 | 1.06 | 0.90 | 1.17 |
| | | Detailed | 0.95 | 0.53 | 0.69 | 0.65 |
| | sK | None | 4.80 | 2.75 | 3.41 | 3.44 |
| | | Simplified | 1.77 | 1.11 | 0.88 | 1.22 |
| | | Detailed | 0.85 | 0.72 | 0.72 | 0.72 |
| | sV | None | 9.50 | 9.88 | 2.35 | 8.45 |
| | | Simplified | 4.19 | 3.19 | 1.00 | 3.07 |
| | | Detailed | 2.93 | 2.42 | 1.13 | 2.33 |
| | sO | None | 10.25 | 4.43 | 1.92 | 5.27 |
| | | Simplified | 3.87 | 2.19 | 0.94 | 2.26 |
| | | Detailed | 2.46 | 1.58 | 1.14 | 1.68 |
| | rQ | None | 0.03 | 0.09 | 0.10 | 0.08 |
| | | Simplified | 0.02 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.03 | 0.03 | 0.02 | 0.03 |
| | rK | None | 0.04 | 0.05 | 0.09 | 0.05 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.03 | 0.02 | 0.03 | 0.03 |
| | rV | None | 0.04 | 0.07 | 0.03 | 0.05 |
| | | Simplified | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Detailed | 0.03 | 0.03 | 0.04 | 0.03 |
| | rO | None | 0.04 | 0.05 | 0.07 | 0.05 |
| | | Simplified | 0.01 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |

##### Table 17: Statistical results for AQuA using gemma-2-2b on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 14.09 | 5.80 | 9.27 | 8.31 |
| | | Simplified | 2.51 | 1.18 | 1.55 | 1.58 |
| | | Detailed | 1.29 | 0.81 | 0.71 | 0.89 |
| | sK | None | 13.60 | 10.84 | 12.96 | 11.64 |
| | | Simplified | 2.03 | 2.11 | 1.23 | 1.85 |
| | | Detailed | 1.02 | 0.98 | 0.67 | 0.89 |
| | sV | None | 40.86 | 44.16 | 41.30 | 43.71 |
| | | Simplified | 7.26 | 6.21 | 6.16 | 6.61 |
| | | Detailed | 4.19 | 2.79 | 2.39 | 3.11 |
| | sO | None | 38.03 | 16.51 | 4.59 | 18.87 |
| | | Simplified | 6.74 | 2.85 | 2.00 | 3.50 |
| | | Detailed | 3.32 | 1.98 | 1.17 | 2.12 |
| | rQ | None | 0.02 | 0.05 | 0.08 | 0.05 |
| | | Simplified | 0.03 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.03 | 0.03 | 0.03 | 0.03 |
| | rK | None | 0.03 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.04 | 0.03 | 0.03 | 0.03 |
| | rV | None | 0.05 | 0.07 | 0.06 | 0.06 |
| | | Simplified | 0.05 | 0.06 | 0.04 | 0.05 |
| | | Detailed | 0.04 | 0.04 | 0.04 | 0.03 |
| | rO | None | 0.03 | 0.04 | 0.06 | 0.04 |
| | | Simplified | 0.02 | 0.03 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.02 |

##### Table 19: Statistical results for ECQA using gemma-2-2b on irrelevant responses.
-----
##### Figure 29: Visualization for AQuA using gemma-2-2b on irrelevant responses.
Figure 30: Visualization for StrategyQA using gemma-2-2b on irrelevant responses.
-----
##### Figure 31: Visualization for ECQA using gemma-2-2b on irrelevant responses.
Figure 32: Visualization for CREAK using gemma-2-2b on irrelevant responses.
-----
##### Figure 33: Visualization for Sensemaking using gemma-2-2b on irrelevant responses.
Figure 34: Visualization for Wiki tasks using gemma-2-2b on irrelevant responses.
-----
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| CREAK | sQ | None | 16.17 | 7.68 | 10.07 | 10.13 |
| | | Simplified | 2.55 | 1.20 | 1.40 | 1.59 |
| | | Detailed | 1.43 | 0.76 | 0.84 | 0.92 |
| | sK | None | 18.79 | 11.05 | 12.76 | 13.34 |
| | | Simplified | 2.19 | 1.97 | 1.18 | 1.81 |
| | | Detailed | 1.16 | 0.98 | 0.58 | 0.90 |
| | sV | None | 41.27 | 41.45 | 15.38 | 37.26 |
| | | Simplified | 7.35 | 6.05 | 5.39 | 6.39 |
| | | Detailed | 4.24 | 2.80 | 2.02 | 3.03 |
| | sO | None | 41.08 | 15.00 | 4.02 | 18.42 |
| | | Simplified | 6.97 | 2.68 | 2.17 | 3.55 |
| | | Detailed | 3.35 | 1.95 | 1.37 | 2.13 |
| | rQ | None | 0.02 | 0.08 | 0.11 | 0.08 |
| | | Simplified | 0.02 | 0.03 | 0.03 | 0.03 |
| | | Detailed | 0.03 | 0.03 | 0.02 | 0.03 |
| | rK | None | 0.03 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.04 | 0.02 | 0.03 | 0.03 |
| | rV | None | 0.05 | 0.07 | 0.03 | 0.05 |
| | | Simplified | 0.05 | 0.06 | 0.05 | 0.05 |
| | | Detailed | 0.03 | 0.04 | 0.04 | 0.03 |
| | rO | None | 0.04 | 0.05 | 0.10 | 0.06 |
| | | Simplified | 0.02 | 0.04 | 0.05 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.03 | 0.02 |

##### Table 20: Statistical results for CREAK using gemma-2-2b on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 9.03 | 4.66 | 6.26 | 6.48 |
| | | Simplified | 2.64 | 1.72 | 2.34 | 2.06 |
| | | Detailed | 1.31 | 1.02 | 0.73 | 1.04 |
| | sK | None | 7.64 | 8.40 | 10.64 | 8.79 |
| | | Simplified | 1.96 | 2.21 | 2.12 | 2.09 |
| | | Detailed | 1.07 | 1.10 | 0.80 | 1.01 |
| | sV | None | 23.14 | 25.62 | 12.45 | 23.23 |
| | | Simplified | 8.12 | 7.29 | 5.94 | 7.49 |
| | | Detailed | 3.79 | 2.75 | 2.44 | 2.98 |
| | sO | None | 24.94 | 9.50 | 4.30 | 11.66 |
| | | Simplified | 7.80 | 3.16 | 2.30 | 4.04 |
| | | Detailed | 2.94 | 2.05 | 1.20 | 2.07 |
| | rQ | None | 0.02 | 0.06 | 0.07 | 0.06 |
| | | Simplified | 0.03 | 0.04 | 0.08 | 0.05 |
| | | Detailed | 0.03 | 0.04 | 0.03 | 0.04 |
| | rK | None | 0.01 | 0.04 | 0.09 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.06 | 0.03 |
| | | Detailed | 0.04 | 0.03 | 0.04 | 0.03 |
| | rV | None | 0.05 | 0.07 | 0.04 | 0.06 |
| | | Simplified | 0.05 | 0.05 | 0.04 | 0.04 |
| | | Detailed | 0.03 | 0.04 | 0.03 | 0.03 |
| | rO | None | 0.04 | 0.05 | 0.10 | 0.05 |
| | | Simplified | 0.03 | 0.03 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.02 |
##### Table 21: Statistical results for Sensemaking using gemma-2-2b on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Wiki | sQ | Len 100 | 2.37 | 1.03 | 1.23 | 1.42 |
| | | Len 500 | 1.63 | 0.66 | 0.91 | 0.96 |
| | | Len 1000 | 1.37 | 0.53 | 0.79 | 0.80 |
| | | Unpopular | 3.46 | 1.27 | 1.31 | 1.84 |
| | sK | Len 100 | 1.93 | 1.26 | 1.19 | 1.39 |
| | | Len 500 | 1.32 | 0.71 | 0.83 | 0.87 |
| | | Len 1000 | 1.11 | 0.55 | 0.73 | 0.72 |
| | | Unpopular | 2.92 | 1.75 | 1.20 | 1.89 |
| | sV | Len 100 | 4.86 | 4.01 | 2.83 | 3.95 |
| | | Len 500 | 3.76 | 2.93 | 1.94 | 2.93 |
| | | Len 1000 | 3.29 | 2.51 | 1.62 | 2.53 |
| | | Unpopular | 5.82 | 4.63 | 3.93 | 4.76 |
| | sO | Len 100 | 4.18 | 2.28 | 2.43 | 2.73 |
| | | Len 500 | 3.06 | 1.86 | 2.23 | 2.22 |
| | | Len 1000 | 2.70 | 1.59 | 2.07 | 1.97 |
| | | Unpopular | 5.14 | 2.49 | 3.05 | 3.15 |
| | rQ | Len 100 | 0.04 | 0.03 | 0.02 | 0.03 |
| | | Len 500 | 0.04 | 0.02 | 0.03 | 0.03 |
| | | Len 1000 | 0.04 | 0.02 | 0.03 | 0.03 |
| | | Unpopular | 0.03 | 0.02 | 0.02 | 0.02 |
| | rK | Len 100 | 0.04 | 0.02 | 0.03 | 0.03 |
| | | Len 500 | 0.05 | 0.02 | 0.03 | 0.03 |
| | | Len 1000 | 0.05 | 0.01 | 0.03 | 0.03 |
| | | Unpopular | 0.04 | 0.02 | 0.02 | 0.02 |
| | rV | Len 100 | 0.04 | 0.06 | 0.07 | 0.05 |
| | | Len 500 | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Len 1000 | 0.02 | 0.03 | 0.04 | 0.03 |
| | | Unpopular | 0.04 | 0.06 | 0.08 | 0.05 |
| | rO | Len 100 | 0.02 | 0.03 | 0.07 | 0.03 |
| | | Len 500 | 0.02 | 0.02 | 0.04 | 0.02 |
| | | Len 1000 | 0.02 | 0.02 | 0.03 | 0.02 |
| | | Unpopular | 0.02 | 0.03 | 0.07 | 0.04 |
##### Table 22: Statistical results for Wiki using gemma-2-2b on irrelevant responses.
-----
#### B.3 Instructed LLM on Correct Responses
B.3.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 155, Table 23), MATH-Counting (Figure 156, Table 24), MATH-Geometry (Figure 157, Table 25).
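The MAD numbers reported in the tables below summarize, for each curve (sQ, sK, sV, sO, rQ, rK, rV, rO), the mean absolute difference between prompting conditions over the early, middle, and last portions of the model, plus an overall average. The snippet below is a minimal sketch of how such segment-wise MAD values could be computed; the equal-thirds split of the layer axis and the helper name `mad_by_segment` are illustrative assumptions, not details taken from this appendix.

```python
import numpy as np

def mad_by_segment(curve_a, curve_b):
    """Mean absolute difference between two per-layer curves,
    reported over early / middle / last segments and overall.

    Assumption: segments are equal thirds of the layer axis; the
    appendix does not state the exact boundaries used.
    """
    a = np.asarray(curve_a, dtype=float)
    b = np.asarray(curve_b, dtype=float)
    diff = np.abs(a - b)                 # per-layer absolute difference
    early, middle, last = np.array_split(diff, 3)
    return {
        "Early": float(early.mean()),
        "Middle": float(middle.mean()),
        "Last": float(last.mean()),
        "All": float(diff.mean()),
    }

# Toy usage with two made-up 24-layer curves (hypothetical data).
rng = np.random.default_rng(0)
base = np.linspace(0.0, 5.0, 24)
with_cot = base + rng.normal(0.0, 0.5, 24)
print(mad_by_segment(base, with_cot))
```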
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 0.56 | 0.82 | 1.02 | 0.84 |
| | | Detailed | 0.36 | 0.53 | 0.73 | 0.57 |
| | sK | Simplified | 0.57 | 0.67 | 0.74 | 0.67 |
| | | Detailed | 0.35 | 0.52 | 0.59 | 0.51 |
| | sV | Simplified | 2.47 | 2.87 | 0.73 | 2.36 |
| | | Detailed | 1.35 | 1.70 | 0.65 | 1.40 |
| | sO | Simplified | 1.99 | 1.89 | 0.76 | 1.68 |
| | | Detailed | 1.01 | 1.17 | 0.43 | 0.97 |
| | rQ | Simplified | 0.02 | 0.02 | 0.07 | 0.04 |
| | | Detailed | 0.02 | 0.03 | 0.08 | 0.05 |
| | rK | Simplified | 0.03 | 0.02 | 0.05 | 0.03 |
| | | Detailed | 0.04 | 0.03 | 0.05 | 0.04 |
| | rV | Simplified | 0.03 | 0.05 | 0.05 | 0.04 |
| | | Detailed | 0.02 | 0.04 | 0.04 | 0.04 |
| | rO | Simplified | 0.01 | 0.02 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.04 | 0.02 |
##### Table 23: Statistical results for MATH-Algebra using gemma-2-2b-it on correct responses.
Figure 35: Visualization for MATH-Algebra using gemma-2-2b-it on correct responses.
#### The visualizations and statistical results on other reasoning tasks: AQuA (Figure 158, Table 26), GSM8K (Figure 159, Table 27), StrategyQA (Figure 160, Table 28), ECQA (Figure 161, Table 29), CREAK (Figure 162, Table 30), Sensemaking (Figure 163, Table 31).
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 0.55 | 0.75 | 0.67 | 0.70 |
| | | Detailed | 0.40 | 0.62 | 0.55 | 0.57 |
| | sK | Simplified | 0.51 | 0.66 | 0.54 | 0.60 |
| | | Detailed | 0.36 | 0.55 | 0.42 | 0.49 |
| | sV | Simplified | 2.52 | 2.61 | 0.76 | 2.22 |
| | | Detailed | 1.52 | 1.72 | 0.67 | 1.45 |
| | sO | Simplified | 2.10 | 1.71 | 0.75 | 1.60 |
| | | Detailed | 1.17 | 1.20 | 0.44 | 1.02 |
| | rQ | Simplified | 0.02 | 0.02 | 0.05 | 0.03 |
| | | Detailed | 0.02 | 0.03 | 0.07 | 0.04 |
| | rK | Simplified | 0.03 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.04 | 0.03 | 0.05 | 0.04 |
| | rV | Simplified | 0.02 | 0.05 | 0.05 | 0.04 |
| | | Detailed | 0.02 | 0.05 | 0.04 | 0.04 |
| | rO | Simplified | 0.01 | 0.03 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |
##### Table 24: Statistical results for MATH-Counting using gemma-2-2b-it on correct responses.
Figure 36: Visualization for MATH-Counting using gemma-2-2b-it on correct responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 0.55 | 0.76 | 0.94 | 0.78 |
| | | Detailed | 0.41 | 0.57 | 0.60 | 0.56 |
| | sK | Simplified | 0.47 | 0.53 | 0.72 | 0.58 |
| | | Detailed | 0.36 | 0.52 | 0.49 | 0.49 |
| | sV | Simplified | 2.11 | 2.00 | 0.48 | 1.74 |
| | | Detailed | 1.53 | 1.65 | 0.57 | 1.40 |
| | sO | Simplified | 1.72 | 1.53 | 0.66 | 1.39 |
| | | Detailed | 1.12 | 1.16 | 0.51 | 1.01 |
| | rQ | Simplified | 0.02 | 0.02 | 0.05 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.07 | 0.04 |
| | rK | Simplified | 0.03 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.04 | 0.02 | 0.05 | 0.04 |
| | rV | Simplified | 0.02 | 0.04 | 0.05 | 0.04 |
| | | Detailed | 0.02 | 0.04 | 0.04 | 0.04 |
| | rO | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.04 | 0.02 |
##### Table 25: Statistical results for MATH-Geometry using gemma-2-2b-it on correct responses.
-----
##### Figure 37: Visualization for MATH-Geometry using gemma-2-2b-it on correct responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| AQuA | sQ | None | 3.94 | 5.18 | 12.30 | 6.55 |
| | | Simplified | 1.18 | 1.42 | 2.39 | 1.68 |
| | | Detailed | 0.43 | 0.74 | 0.94 | 0.77 |
| | sK | None | 3.77 | 8.15 | 19.98 | 8.94 |
| | | Simplified | 1.18 | 1.94 | 2.05 | 1.82 |
| | | Detailed | 0.39 | 0.80 | 0.71 | 0.70 |
| | sV | None | 16.91 | 24.85 | 8.20 | 19.14 |
| | | Simplified | 4.70 | 5.12 | 1.32 | 4.20 |
| | | Detailed | 1.65 | 2.11 | 0.87 | 1.73 |
| | sO | None | 12.87 | 11.21 | 5.43 | 10.38 |
| | | Simplified | 3.52 | 3.00 | 1.04 | 2.72 |
| | | Detailed | 1.25 | 1.36 | 0.48 | 1.14 |
| | rQ | None | 0.03 | 0.05 | 0.10 | 0.05 |
| | | Simplified | 0.02 | 0.03 | 0.08 | 0.04 |
| | | Detailed | 0.02 | 0.04 | 0.09 | 0.05 |
| | rK | None | 0.04 | 0.03 | 0.04 | 0.04 |
| | | Simplified | 0.04 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.03 | 0.04 | 0.06 | 0.04 |
| | rV | None | 0.02 | 0.05 | 0.16 | 0.07 |
| | | Simplified | 0.02 | 0.05 | 0.06 | 0.04 |
| | | Detailed | 0.02 | 0.05 | 0.04 | 0.04 |
| | rO | None | 0.01 | 0.05 | 0.08 | 0.04 |
| | | Simplified | 0.01 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.03 | 0.04 | 0.02 |

##### Table 26: Statistical results for AQuA using gemma-2-2b-it on correct responses.
#### B.3.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 164 and Table 32.
B.4 Instructed LLM on Irrelevant Responses
B.4.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 165, Table 33), MATH-Counting (Figure 166, Table 34), MATH-Geometry (Figure 167, Table 35). The visualizations and statistical results on the other reasoning tasks are listed below.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 4.64 | 6.94 | 11.69 | 7.45 |
| | | Simplified | 0.72 | 1.24 | 1.44 | 1.25 |
| | | Detailed | 0.41 | 0.89 | 1.11 | 0.91 |
| | sK | None | 4.28 | 10.24 | 15.36 | 9.16 |
| | | Simplified | 0.69 | 1.60 | 0.84 | 1.22 |
| | | Detailed | 0.36 | 1.04 | 0.71 | 0.81 |
| | sV | None | 25.54 | 33.15 | 13.66 | 26.66 |
| | | Simplified | 3.04 | 3.49 | 0.77 | 2.79 |
| | | Detailed | 1.79 | 2.38 | 0.96 | 1.91 |
| | sO | None | 19.49 | 14.00 | 10.25 | 14.57 |
| | | Simplified | 2.35 | 2.36 | 0.38 | 1.93 |
| | | Detailed | 1.39 | 1.59 | 0.35 | 1.27 |
| | rQ | None | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Simplified | 0.02 | 0.03 | 0.07 | 0.05 |
| | | Detailed | 0.02 | 0.05 | 0.09 | 0.06 |
| | rK | None | 0.04 | 0.03 | 0.05 | 0.04 |
| | | Simplified | 0.03 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.04 | 0.04 | 0.06 | 0.05 |
| | rV | None | 0.03 | 0.06 | 0.10 | 0.06 |
| | | Simplified | 0.03 | 0.05 | 0.05 | 0.04 |
| | | Detailed | 0.03 | 0.06 | 0.04 | 0.04 |
| | rO | None | 0.02 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.01 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.03 | 0.04 | 0.03 |

##### Table 27: Statistical results for GSM8K using gemma-2-2b-it on correct responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| StrategyQA | sQ | None | 11.01 | 9.56 | 4.80 | 9.15 |
| | | Simplified | 1.47 | 1.48 | 2.23 | 1.69 |
| | | Detailed | 0.44 | 0.78 | 0.72 | 0.68 |
| | sK | None | 8.98 | 10.89 | 9.19 | 10.78 |
| | | Simplified | 1.00 | 2.82 | 1.73 | 2.00 |
| | | Detailed | 0.36 | 0.64 | 0.44 | 0.52 |
| | sV | None | 48.02 | 63.95 | 29.91 | 52.33 |
| | | Simplified | 5.55 | 6.20 | 1.57 | 5.05 |
| | | Detailed | 2.19 | 2.25 | 0.44 | 1.84 |
| | sO | None | 33.59 | 22.10 | 11.38 | 23.64 |
| | | Simplified | 4.11 | 2.94 | 1.62 | 2.99 |
| | | Detailed | 1.68 | 1.59 | 0.78 | 1.42 |
| | rQ | None | 0.02 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.03 | 0.04 | 0.04 | 0.04 |
| | | Detailed | 0.02 | 0.02 | 0.04 | 0.03 |
| | rK | None | 0.03 | 0.02 | 0.05 | 0.03 |
| | | Simplified | 0.04 | 0.04 | 0.03 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.02 | 0.03 |
| | rV | None | 0.05 | 0.05 | 0.07 | 0.05 |
| | | Simplified | 0.04 | 0.07 | 0.07 | 0.06 |
| | | Detailed | 0.03 | 0.05 | 0.04 | 0.04 |
| | rO | None | 0.03 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.02 | 0.04 | 0.06 | 0.04 |
| | | Detailed | 0.01 | 0.03 | 0.03 | 0.02 |
##### Table 28: Statistical results for StrategyQA using gemma-2-2b-it on correct responses.
#### The visualizations and statistical results on the other reasoning tasks: AQuA (Figure 168, Table 36), GSM8K (Figure 169, Table 37), StrategyQA (Figure 170, Table 38), ECQA (Figure 171, Table 39), CREAK (Figure 172, Table 40), Sensemaking (Figure 173, Table 41).
B.4.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 174 and Table 42.
-----
##### Figure 38: Visualization for AQuA using gemma-2-2b-it on correct responses.
Figure 39: Visualization for GSM8K using gemma-2-2b-it on correct responses.
-----
##### Figure 40: Visualization for StrategyQA using gemma-2-2b-it on correct responses.
Figure 41: Visualization for ECQA using gemma-2-2b-it on correct responses.
-----
##### Figure 42: Visualization for CREAK using gemma-2-2b-it on correct responses.
Figure 43: Visualization for Sensemaking using gemma-2-2b-it on correct responses.
-----
##### Figure 44: Visualization for Wiki tasks using gemma-2-2b-it on correct responses.
-----
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 6.07 | 12.93 | 10.41 | 10.52 |
| | | Simplified | 2.03 | 2.84 | 2.93 | 2.81 |
| | | Detailed | 0.51 | 0.99 | 1.00 | 0.92 |
| | sK | None | 6.14 | 17.32 | 13.03 | 12.55 |
| | | Simplified | 1.53 | 5.10 | 3.22 | 3.56 |
| | | Detailed | 0.43 | 0.97 | 0.67 | 0.78 |
| | sV | None | 34.29 | 44.41 | 9.37 | 35.20 |
| | | Simplified | 9.30 | 9.10 | 3.85 | 8.18 |
| | | Detailed | 2.29 | 2.79 | 0.84 | 2.24 |
| | sO | None | 24.30 | 19.65 | 3.97 | 18.15 |
| | | Simplified | 6.91 | 5.03 | 1.18 | 4.86 |
| | | Detailed | 1.68 | 2.06 | 0.58 | 1.63 |
| | rQ | None | 0.02 | 0.08 | 0.07 | 0.06 |
| | | Simplified | 0.02 | 0.04 | 0.08 | 0.05 |
| | | Detailed | 0.02 | 0.04 | 0.07 | 0.05 |
| | rK | None | 0.04 | 0.02 | 0.05 | 0.03 |
| | | Simplified | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Detailed | 0.04 | 0.04 | 0.05 | 0.04 |
| | rV | None | 0.02 | 0.05 | 0.07 | 0.05 |
| | | Simplified | 0.03 | 0.05 | 0.05 | 0.05 |
| | | Detailed | 0.02 | 0.05 | 0.03 | 0.04 |
| | rO | None | 0.02 | 0.08 | 0.08 | 0.06 |
| | | Simplified | 0.02 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.03 | 0.03 | 0.02 |
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 8.08 | 13.75 | 11.11 | 11.72 |
| | | Simplified | 1.85 | 3.70 | 2.91 | 3.27 |
| | | Detailed | 0.55 | 0.97 | 0.92 | 0.90 |
| | sK | None | 6.91 | 19.80 | 10.42 | 13.94 |
| | | Simplified | 1.39 | 5.91 | 1.71 | 3.69 |
| | | Detailed | 0.47 | 1.07 | 0.59 | 0.81 |
| | sV | None | 39.01 | 47.62 | 22.02 | 39.91 |
| | | Simplified | 8.46 | 9.71 | 2.91 | 7.90 |
| | | Detailed | 2.83 | 3.04 | 0.90 | 2.53 |
| | sO | None | 29.11 | 19.93 | 5.20 | 19.57 |
| | | Simplified | 6.28 | 5.20 | 1.27 | 4.71 |
| | | Detailed | 2.10 | 2.22 | 0.70 | 1.85 |
| | rQ | None | 0.02 | 0.08 | 0.10 | 0.07 |
| | | Simplified | 0.02 | 0.05 | 0.07 | 0.05 |
| | | Detailed | 0.02 | 0.04 | 0.07 | 0.05 |
| | rK | None | 0.03 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.03 | 0.05 | 0.05 | 0.05 |
| | | Detailed | 0.03 | 0.04 | 0.05 | 0.04 |
| | rV | None | 0.03 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.03 | 0.06 | 0.05 | 0.05 |
| | | Detailed | 0.02 | 0.05 | 0.03 | 0.04 |
| | rO | None | 0.02 | 0.07 | 0.11 | 0.06 |
| | | Simplified | 0.02 | 0.05 | 0.04 | 0.04 |
| | | Detailed | 0.01 | 0.03 | 0.03 | 0.02 |

##### Table 29: Statistical results for ECQA using gemma-2-2b-it on correct responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| CREAK | sQ | None | 19.40 | 21.48 | 18.64 | 20.16 |
| | | Simplified | 2.17 | 3.38 | 3.35 | 3.24 |
| | | Detailed | 0.52 | 0.96 | 1.08 | 0.92 |
| | sK | None | 15.51 | 28.99 | 28.47 | 26.59 |
| | | Simplified | 1.52 | 5.41 | 2.04 | 3.46 |
| | | Detailed | 0.42 | 0.96 | 0.70 | 0.77 |
| | sV | None | 97.03 | 109.60 | 26.16 | 90.55 |
| | | Simplified | 9.55 | 10.54 | 3.15 | 8.84 |
| | | Detailed | 2.64 | 2.84 | 0.70 | 2.34 |
| | sO | None | 67.77 | 37.98 | 9.63 | 41.16 |
| | | Simplified | 7.08 | 4.32 | 1.76 | 4.69 |
| | | Detailed | 1.96 | 1.90 | 0.81 | 1.67 |
| | rQ | None | 0.02 | 0.07 | 0.09 | 0.07 |
| | | Simplified | 0.02 | 0.05 | 0.09 | 0.05 |
| | | Detailed | 0.02 | 0.03 | 0.06 | 0.04 |
| | rK | None | 0.03 | 0.04 | 0.08 | 0.04 |
| | | Simplified | 0.03 | 0.04 | 0.06 | 0.04 |
| | | Detailed | 0.03 | 0.03 | 0.04 | 0.03 |
| | rV | None | 0.03 | 0.05 | 0.07 | 0.05 |
| | | Simplified | 0.04 | 0.06 | 0.05 | 0.05 |
| | | Detailed | 0.03 | 0.05 | 0.04 | 0.04 |
| | rO | None | 0.02 | 0.06 | 0.08 | 0.05 |
| | | Simplified | 0.02 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.03 | 0.03 | 0.02 |

##### Table 30: Statistical results for CREAK using gemma-2-2b-it on correct responses.
##### Table 31: Statistical results for Sensemaking using gemma-2-2b-it on correct responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Wiki | sQ | Len 100 | 0.96 | 0.89 | 1.41 | 1.03 |
| | | Len 500 | 0.60 | 0.50 | 0.78 | 0.58 |
| | | Len 1000 | 0.55 | 0.46 | 0.64 | 0.52 |
| | | Unpopular | 1.58 | 1.09 | 1.08 | 1.19 |
| | sK | Len 100 | 0.71 | 1.39 | 1.11 | 1.10 |
| | | Len 500 | 0.42 | 0.78 | 0.66 | 0.65 |
| | | Len 1000 | 0.34 | 0.57 | 0.53 | 0.50 |
| | | Unpopular | 1.52 | 2.14 | 0.79 | 1.58 |
| | sV | Len 100 | 3.44 | 3.72 | 1.28 | 3.09 |
| | | Len 500 | 2.59 | 2.52 | 1.07 | 2.20 |
| | | Len 1000 | 2.20 | 2.06 | 0.86 | 1.82 |
| | | Unpopular | 4.56 | 5.46 | 1.83 | 4.42 |
| | sO | Len 100 | 2.80 | 2.03 | 2.23 | 2.28 |
| | | Len 500 | 2.09 | 1.58 | 1.57 | 1.69 |
| | | Len 1000 | 1.85 | 1.37 | 1.30 | 1.46 |
| | | Unpopular | 3.18 | 2.27 | 2.28 | 2.46 |
| | rQ | Len 100 | 0.03 | 0.03 | 0.03 | 0.03 |
| | | Len 500 | 0.02 | 0.03 | 0.03 | 0.03 |
| | | Len 1000 | 0.02 | 0.03 | 0.03 | 0.03 |
| | | Unpopular | 0.03 | 0.03 | 0.02 | 0.03 |
| | rK | Len 100 | 0.04 | 0.03 | 0.03 | 0.03 |
| | | Len 500 | 0.03 | 0.04 | 0.03 | 0.03 |
| | | Len 1000 | 0.04 | 0.04 | 0.03 | 0.03 |
| | | Unpopular | 0.03 | 0.03 | 0.03 | 0.03 |
| | rV | Len 100 | 0.04 | 0.07 | 0.09 | 0.06 |
| | | Len 500 | 0.03 | 0.05 | 0.06 | 0.05 |
| | | Len 1000 | 0.03 | 0.05 | 0.05 | 0.04 |
| | | Unpopular | 0.05 | 0.08 | 0.08 | 0.07 |
| | rO | Len 100 | 0.02 | 0.04 | 0.07 | 0.04 |
| | | Len 500 | 0.01 | 0.03 | 0.04 | 0.03 |
| | | Len 1000 | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Unpopular | 0.02 | 0.03 | 0.06 | 0.03 |

##### Table 32: Statistical results for Wiki using gemma-2-2b-it on correct responses.
-----
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 0.63 | 0.75 | 1.32 | 0.89 |
| | | Detailed | 0.47 | 0.47 | 0.83 | 0.59 |
| | sK | Simplified | 0.58 | 0.70 | 1.03 | 0.75 |
| | | Detailed | 0.44 | 0.48 | 0.63 | 0.52 |
| | sV | Simplified | 3.42 | 3.02 | 1.28 | 2.83 |
| | | Detailed | 2.10 | 1.81 | 0.84 | 1.72 |
| | sO | Simplified | 2.76 | 1.89 | 1.30 | 2.02 |
| | | Detailed | 1.55 | 1.24 | 0.83 | 1.25 |
| | rQ | Simplified | 0.02 | 0.03 | 0.09 | 0.04 |
| | | Detailed | 0.03 | 0.03 | 0.09 | 0.05 |
| | rK | Simplified | 0.03 | 0.03 | 0.08 | 0.04 |
| | | Detailed | 0.04 | 0.03 | 0.08 | 0.04 |
| | rV | Simplified | 0.03 | 0.05 | 0.06 | 0.05 |
| | | Detailed | 0.02 | 0.04 | 0.05 | 0.04 |
| | rO | Simplified | 0.01 | 0.02 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.03 | 0.02 |
##### Table 33: Statistical results for MATH-Algebra using gemma-2-2b-it on irrelevant responses.
Figure 45: Visualization for MATH-Algebra using gemma-2-2b-it on irrelevant responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 0.63 | 0.64 | 1.22 | 0.80 |
| | | Detailed | 0.56 | 0.49 | 0.80 | 0.62 |
| | sK | Simplified | 0.58 | 0.69 | 0.99 | 0.74 |
| | | Detailed | 0.50 | 0.48 | 0.63 | 0.54 |
| | sV | Simplified | 3.47 | 3.00 | 1.20 | 2.76 |
| | | Detailed | 2.47 | 1.78 | 0.80 | 1.76 |
| | sO | Simplified | 2.87 | 1.78 | 1.22 | 1.94 |
| | | Detailed | 1.79 | 1.22 | 0.82 | 1.28 |
| | rQ | Simplified | 0.03 | 0.03 | 0.08 | 0.04 |
| | | Detailed | 0.04 | 0.03 | 0.09 | 0.05 |
| | rK | Simplified | 0.04 | 0.03 | 0.08 | 0.04 |
| | | Detailed | 0.05 | 0.03 | 0.08 | 0.05 |
| | rV | Simplified | 0.03 | 0.05 | 0.06 | 0.04 |
| | | Detailed | 0.02 | 0.04 | 0.05 | 0.04 |
| | rO | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |
##### Table 34: Statistical results for MATH-Counting using gemma-2-2b-it on irrelevant responses.
##### Figure 46: Visualization for MATH-Counting using gemma-2-2b-it on irrelevant responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 0.60 | 0.65 | 1.36 | 0.84 |
| | | Detailed | 0.57 | 0.44 | 1.23 | 0.70 |
| | sK | Simplified | 0.50 | 0.51 | 1.07 | 0.67 |
| | | Detailed | 0.51 | 0.55 | 1.26 | 0.75 |
| | sV | Simplified | 2.67 | 2.31 | 0.81 | 2.12 |
| | | Detailed | 2.60 | 2.25 | 1.13 | 2.06 |
| | sO | Simplified | 2.22 | 1.53 | 0.98 | 1.60 |
| | | Detailed | 1.81 | 1.34 | 1.13 | 1.40 |
| | rQ | Simplified | 0.02 | 0.02 | 0.08 | 0.04 |
| | | Detailed | 0.04 | 0.03 | 0.10 | 0.05 |
| | rK | Simplified | 0.03 | 0.03 | 0.08 | 0.04 |
| | | Detailed | 0.05 | 0.03 | 0.10 | 0.05 |
| | rV | Simplified | 0.03 | 0.04 | 0.06 | 0.04 |
| | | Detailed | 0.02 | 0.05 | 0.05 | 0.04 |
| | rO | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.04 | 0.02 |
##### Table 35: Statistical results for MATH-Geometry using gemma-2-2b-it on irrelevant responses.
-----
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 4.30 | 6.74 | 11.48 | 7.24 |
| | | Simplified | 0.96 | 0.88 | 1.83 | 1.17 |
| | | Detailed | 0.70 | 0.61 | 1.22 | 0.84 |
| | sK | None | 3.94 | 10.22 | 15.59 | 9.07 |
| | | Simplified | 0.95 | 1.10 | 1.60 | 1.19 |
| | | Detailed | 0.62 | 0.88 | 0.73 | 0.81 |
| | sV | None | 24.31 | 32.27 | 13.78 | 25.89 |
| | | Simplified | 4.61 | 3.78 | 1.43 | 3.47 |
| | | Detailed | 3.06 | 2.26 | 0.98 | 2.20 |
| | sO | None | 18.70 | 13.57 | 10.20 | 14.13 |
| | | Simplified | 3.54 | 2.31 | 1.04 | 2.34 |
| | | Detailed | 2.19 | 1.60 | 0.80 | 1.58 |
| | rQ | None | 0.03 | 0.04 | 0.06 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.07 | 0.04 |
| | | Detailed | 0.04 | 0.03 | 0.10 | 0.06 |
| | rK | None | 0.04 | 0.04 | 0.05 | 0.04 |
| | | Simplified | 0.05 | 0.02 | 0.06 | 0.04 |
| | | Detailed | 0.05 | 0.03 | 0.07 | 0.05 |
| | rV | None | 0.03 | 0.06 | 0.10 | 0.06 |
| | | Simplified | 0.04 | 0.06 | 0.06 | 0.05 |
| | | Detailed | 0.03 | 0.05 | 0.04 | 0.04 |
| | rO | None | 0.02 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.01 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |
##### Figure 47: Visualization for MATH-Geometry using gemma-2-2b-it on irrelevant responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| AQuA | sQ | None | 3.97 | 5.15 | 12.37 | 6.55 |
| | | Simplified | 1.22 | 1.15 | 2.35 | 1.49 |
| | | Detailed | 0.69 | 0.58 | 1.15 | 0.80 |
| | sK | None | 3.84 | 8.17 | 20.16 | 9.03 |
| | | Simplified | 1.23 | 1.49 | 2.02 | 1.56 |
| | | Detailed | 0.62 | 0.86 | 0.76 | 0.81 |
| | sV | None | 17.02 | 24.77 | 8.19 | 19.12 |
| | | Simplified | 5.30 | 5.01 | 1.99 | 4.45 |
| | | Detailed | 3.11 | 2.46 | 1.02 | 2.34 |
| | sO | None | 12.88 | 11.19 | 5.43 | 10.37 |
| | | Simplified | 4.15 | 2.75 | 1.61 | 2.87 |
| | | Detailed | 2.21 | 1.55 | 1.03 | 1.62 |
| | rQ | None | 0.03 | 0.05 | 0.10 | 0.05 |
| | | Simplified | 0.03 | 0.02 | 0.08 | 0.04 |
| | | Detailed | 0.04 | 0.03 | 0.09 | 0.05 |
| | rK | None | 0.04 | 0.03 | 0.04 | 0.04 |
| | | Simplified | 0.04 | 0.02 | 0.06 | 0.04 |
| | | Detailed | 0.05 | 0.03 | 0.07 | 0.05 |
| | rV | None | 0.02 | 0.05 | 0.16 | 0.07 |
| | | Simplified | 0.03 | 0.05 | 0.06 | 0.05 |
| | | Detailed | 0.03 | 0.05 | 0.04 | 0.04 |
| | rO | None | 0.01 | 0.05 | 0.08 | 0.04 |
| | | Simplified | 0.01 | 0.03 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |

##### Table 36: Statistical results for AQuA using gemma-2-2b-it on irrelevant responses.
##### Table 37: Statistical results for GSM8K using gemma-2-2b-it on irrelevant responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| StrategyQA | sQ | None | 11.79 | 10.10 | 4.95 | 9.63 |
| | | Simplified | 2.20 | 0.96 | 1.81 | 1.51 |
| | | Detailed | 1.26 | 0.44 | 1.09 | 0.79 |
| | sK | None | 9.75 | 12.02 | 9.80 | 11.68 |
| | | Simplified | 1.74 | 1.62 | 1.69 | 1.61 |
| | | Detailed | 0.97 | 0.65 | 0.97 | 0.79 |
| | sV | None | 50.93 | 67.72 | 30.54 | 55.22 |
| | | Simplified | 6.11 | 4.74 | 3.01 | 4.66 |
| | | Detailed | 3.78 | 2.51 | 1.26 | 2.54 |
| | sO | None | 35.52 | 23.60 | 11.71 | 25.04 |
| | | Simplified | 4.47 | 2.37 | 2.34 | 2.99 |
| | | Detailed | 2.71 | 1.70 | 1.47 | 1.93 |
| | rQ | None | 0.02 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.04 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.04 | 0.03 | 0.06 | 0.04 |
| | rK | None | 0.03 | 0.02 | 0.05 | 0.03 |
| | | Simplified | 0.04 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.05 | 0.03 | 0.06 | 0.04 |
| | rV | None | 0.05 | 0.05 | 0.07 | 0.05 |
| | | Simplified | 0.05 | 0.07 | 0.08 | 0.06 |
| | | Detailed | 0.03 | 0.05 | 0.04 | 0.04 |
| | rO | None | 0.03 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.02 | 0.03 | 0.05 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |
##### Table 38: Statistical results for StrategyQA using gemma-2-2b-it on irrelevant responses.
-----
##### Figure 48: Visualization for AQuA using gemma-2-2b-it on irrelevant responses.
Figure 49: Visualization for GSM8K using gemma-2-2b-it on irrelevant responses.
-----
##### Figure 50: Visualization for StrategyQA using gemma-2-2b-it on irrelevant responses.
Figure 51: Visualization for ECQA using gemma-2-2b-it on irrelevant responses.
-----
##### Figure 52: Visualization for CREAK using gemma-2-2b-it on irrelevant responses.
Figure 53: Visualization for Sensemaking using gemma-2-2b-it on irrelevant responses.
-----
##### Figure 54: Visualization for Wiki tasks using gemma-2-2b-it on irrelevant responses.
-----
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 7.68 | 13.57 | 12.50 | 11.79 |
| | | Simplified | 2.20 | 1.91 | 1.91 | 2.01 |
| | | Detailed | 0.84 | 0.90 | 1.29 | 1.05 |
| | sK | None | 7.41 | 17.48 | 14.33 | 13.10 |
| | | Simplified | 1.79 | 3.61 | 2.98 | 2.79 |
| | | Detailed | 0.74 | 1.21 | 0.88 | 1.05 |
| | sV | None | 41.81 | 46.33 | 10.97 | 38.74 |
| | | Simplified | 9.20 | 7.94 | 5.11 | 7.75 |
| | | Detailed | 3.45 | 2.89 | 1.55 | 2.74 |
| | sO | None | 29.03 | 21.86 | 3.94 | 20.63 |
| | | Simplified | 6.87 | 3.71 | 1.61 | 4.22 |
| | | Detailed | 2.47 | 1.96 | 0.92 | 1.87 |
| | rQ | None | 0.02 | 0.06 | 0.08 | 0.06 |
| | | Simplified | 0.02 | 0.03 | 0.07 | 0.04 |
| | | Detailed | 0.03 | 0.04 | 0.10 | 0.06 |
| | rK | None | 0.04 | 0.02 | 0.04 | 0.03 |
| | | Simplified | 0.04 | 0.03 | 0.06 | 0.04 |
| | | Detailed | 0.04 | 0.04 | 0.07 | 0.05 |
| | rV | None | 0.03 | 0.05 | 0.06 | 0.05 |
| | | Simplified | 0.03 | 0.06 | 0.04 | 0.05 |
| | | Detailed | 0.02 | 0.05 | 0.03 | 0.04 |
| | rO | None | 0.02 | 0.07 | 0.08 | 0.06 |
| | | Simplified | 0.02 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.02 |
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 6.22 | 8.86 | 12.56 | 8.62 |
| | | Simplified | 1.84 | 1.87 | 2.27 | 2.01 |
| | | Detailed | 1.07 | 0.83 | 1.28 | 1.03 |
| | sK | None | 5.62 | 14.70 | 12.56 | 11.71 |
| | | Simplified | 1.55 | 3.28 | 1.82 | 2.39 |
| | | Detailed | 0.77 | 1.06 | 0.97 | 0.99 |
| | sV | None | 32.47 | 32.21 | 31.67 | 31.90 |
| | | Simplified | 7.71 | 6.52 | 4.89 | 6.46 |
| | | Detailed | 4.24 | 2.95 | 1.67 | 3.00 |
| | sO | None | 24.35 | 13.34 | 7.35 | 15.24 |
| | | Simplified | 5.84 | 3.09 | 1.52 | 3.52 |
| | | Detailed | 3.00 | 1.92 | 1.12 | 2.02 |
| | rQ | None | 0.02 | 0.05 | 0.09 | 0.05 |
| | | Simplified | 0.02 | 0.04 | 0.08 | 0.04 |
| | | Detailed | 0.04 | 0.04 | 0.09 | 0.05 |
| | rK | None | 0.03 | 0.04 | 0.06 | 0.04 |
| | | Simplified | 0.04 | 0.04 | 0.06 | 0.04 |
| | | Detailed | 0.05 | 0.04 | 0.08 | 0.05 |
| | rV | None | 0.04 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.04 | 0.07 | 0.05 | 0.05 |
| | | Detailed | 0.02 | 0.05 | 0.04 | 0.04 |
| | rO | None | 0.02 | 0.06 | 0.06 | 0.05 |
| | | Simplified | 0.02 | 0.03 | 0.05 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |

##### Table 39: Statistical results for ECQA using gemma-2-2b-it on irrelevant responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| CREAK | sQ | None | 20.78 | 22.32 | 16.61 | 20.27 |
| | | Simplified | 2.42 | 2.21 | 2.19 | 2.30 |
| | | Detailed | 1.10 | 0.74 | 1.49 | 1.03 |
| | sK | None | 16.37 | 29.50 | 25.20 | 26.23 |
| | | Simplified | 1.90 | 3.55 | 2.29 | 2.70 |
| | | Detailed | 0.76 | 1.04 | 1.20 | 1.02 |
| | sV | None | 99.45 | 112.28 | 26.10 | 92.60 |
| | | Simplified | 8.94 | 8.00 | 4.34 | 7.52 |
| | | Detailed | 3.94 | 3.03 | 1.37 | 2.88 |
| | sO | None | 70.14 | 39.66 | 10.86 | 43.13 |
| | | Simplified | 6.66 | 3.32 | 1.95 | 4.05 |
| | | Detailed | 2.83 | 1.93 | 1.32 | 2.03 |
| | rQ | None | 0.02 | 0.07 | 0.09 | 0.07 |
| | | Simplified | 0.03 | 0.04 | 0.08 | 0.05 |
| | | Detailed | 0.03 | 0.03 | 0.07 | 0.04 |
| | rK | None | 0.03 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.04 | 0.04 | 0.06 | 0.04 |
| | | Detailed | 0.04 | 0.04 | 0.06 | 0.04 |
| | rV | None | 0.03 | 0.05 | 0.07 | 0.05 |
| | | Simplified | 0.04 | 0.06 | 0.06 | 0.05 |
| | | Detailed | 0.02 | 0.05 | 0.04 | 0.04 |
| | rO | None | 0.02 | 0.06 | 0.08 | 0.05 |
| | | Simplified | 0.02 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |

##### Table 40: Statistical results for CREAK using gemma-2-2b-it on irrelevant responses.
##### Table 41: Statistical results for Sensemaking using gemma-2-2b-it on irrelevant responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Wiki | sQ | Len 100 | 1.97 | 0.93 | 1.44 | 1.32 |
| | | Len 500 | 1.36 | 0.52 | 0.91 | 0.81 |
| | | Len 1000 | 1.15 | 0.43 | 0.78 | 0.68 |
| | | Unpopular | 1.96 | 1.04 | 1.41 | 1.33 |
| | sK | Len 100 | 1.63 | 1.36 | 1.33 | 1.38 |
| | | Len 500 | 0.99 | 0.72 | 0.79 | 0.76 |
| | | Len 1000 | 0.80 | 0.55 | 0.67 | 0.60 |
| | | Unpopular | 1.56 | 1.84 | 1.29 | 1.54 |
| | sV | Len 100 | 4.96 | 3.75 | 1.90 | 3.62 |
| | | Len 500 | 3.73 | 2.66 | 1.56 | 2.64 |
| | | Len 1000 | 3.17 | 2.29 | 1.43 | 2.29 |
| | | Unpopular | 5.69 | 4.80 | 2.96 | 4.53 |
| | sO | Len 100 | 3.77 | 2.09 | 2.69 | 2.68 |
| | | Len 500 | 2.84 | 1.68 | 2.23 | 2.10 |
| | | Len 1000 | 2.57 | 1.46 | 2.00 | 1.87 |
| | | Unpopular | 4.17 | 2.31 | 3.15 | 2.93 |
| | rQ | Len 100 | 0.04 | 0.03 | 0.03 | 0.03 |
| | | Len 500 | 0.04 | 0.03 | 0.04 | 0.03 |
| | | Len 1000 | 0.03 | 0.02 | 0.04 | 0.03 |
| | | Unpopular | 0.02 | 0.03 | 0.03 | 0.03 |
| | rK | Len 100 | 0.05 | 0.02 | 0.04 | 0.03 |
| | | Len 500 | 0.05 | 0.03 | 0.04 | 0.03 |
| | | Len 1000 | 0.05 | 0.03 | 0.05 | 0.04 |
| | | Unpopular | 0.03 | 0.03 | 0.04 | 0.03 |
| | rV | Len 100 | 0.05 | 0.07 | 0.08 | 0.06 |
| | | Len 500 | 0.03 | 0.05 | 0.06 | 0.05 |
| | | Len 1000 | 0.03 | 0.05 | 0.06 | 0.04 |
| | | Unpopular | 0.05 | 0.07 | 0.09 | 0.06 |
| | rO | Len 100 | 0.02 | 0.02 | 0.06 | 0.03 |
| | | Len 500 | 0.02 | 0.02 | 0.03 | 0.02 |
| | | Len 1000 | 0.02 | 0.02 | 0.03 | 0.02 |
| | | Unpopular | 0.03 | 0.03 | 0.05 | 0.03 |

##### Table 42: Statistical results for Wiki using gemma-2-2b-it on irrelevant responses.
-----
### C Results on Llama-3.1-8B
#### C.1 Pre-trained LLM on Correct Responses
C.1.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 135, Table 43), MATH-Counting (Figure 136, Table 44), MATH-Geometry (Figure 137, Table 45).
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 0.76 | 0.76 | 0.35 | 0.58 |
| | | Detailed | 0.42 | 0.37 | 0.17 | 0.29 |
| | sK | Simplified | 0.28 | 0.28 | 0.12 | 0.22 |
| | | Detailed | 0.17 | 0.10 | 0.07 | 0.11 |
| | sV | Simplified | 2.10 | 1.24 | 0.51 | 1.20 |
| | | Detailed | 0.87 | 0.56 | 0.27 | 0.53 |
| | sO | Simplified | 1.06 | 1.31 | 0.61 | 0.93 |
| | | Detailed | 0.61 | 0.70 | 0.33 | 0.51 |
| | rQ | Simplified | 0.01 | 0.01 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.04 | 0.02 |
| | rK | Simplified | 0.02 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.01 |
| | rV | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.02 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 43: Statistical results for MATH-Algebra using Llama-3.1-8B on correct responses.
Figure 55: Visualization for MATH-Algebra using Llama-3.1-8B on correct responses.
#### The visualizations and statistical results on other reasoning tasks: AQuA (Figure 138, Table 46), GSM8K (Figure 139, Table 47), StrategyQA (Figure 140, Table 48), ECQA (Figure 141, Table 49), CREAK (Figure 142, Table 50), Sensemaking (Figure 143, Table 51).
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 1.12 | 0.94 | 0.56 | 0.82 |
| | | Detailed | 0.54 | 0.42 | 0.24 | 0.37 |
| | sK | Simplified | 0.40 | 0.37 | 0.17 | 0.30 |
| | | Detailed | 0.22 | 0.12 | 0.08 | 0.14 |
| | sV | Simplified | 2.91 | 1.49 | 0.57 | 1.52 |
| | | Detailed | 1.11 | 0.62 | 0.35 | 0.63 |
| | sO | Simplified | 1.41 | 1.62 | 0.72 | 1.14 |
| | | Detailed | 0.84 | 0.84 | 0.44 | 0.66 |
| | rQ | Simplified | 0.01 | 0.01 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | Simplified | 0.02 | 0.01 | 0.02 | 0.01 |
| | | Detailed | 0.02 | 0.00 | 0.02 | 0.01 |
| | rV | Simplified | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 44: Statistical results for MATH-Counting using Llama-3.1-8B on correct responses.
Figure 56: Visualization for MATH-Counting using Llama-3.1-8B on correct responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 1.14 | 1.10 | 0.61 | 0.88 |
| | | Detailed | 0.61 | 0.56 | 0.32 | 0.46 |
| | sK | Simplified | 0.44 | 0.41 | 0.22 | 0.33 |
| | | Detailed | 0.24 | 0.16 | 0.10 | 0.16 |
| | sV | Simplified | 3.17 | 1.85 | 0.70 | 1.77 |
| | | Detailed | 1.40 | 0.80 | 0.38 | 0.79 |
| | sO | Simplified | 1.70 | 1.93 | 0.89 | 1.40 |
| | | Detailed | 0.90 | 1.01 | 0.50 | 0.75 |
| | rQ | Simplified | 0.01 | 0.01 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | Simplified | 0.02 | 0.00 | 0.01 | 0.01 |
| | | Detailed | 0.02 | 0.00 | 0.02 | 0.02 |
| | rV | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 45: Statistical results for MATH-Geometry using Llama-3.1-8B on correct responses.
-----
##### Figure 57: Visualization for MATH-Geometry using Llama-3.1-8B on correct responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| AQuA | sQ | None | 4.61 | 5.39 | 7.22 | 5.67 |
| | | Simplified | 1.42 | 1.03 | 0.57 | 0.93 |
| | | Detailed | 0.56 | 0.44 | 0.25 | 0.39 |
| | sK | None | 2.51 | 1.81 | 3.74 | 2.83 |
| | | Simplified | 0.53 | 0.35 | 0.26 | 0.36 |
| | | Detailed | 0.23 | 0.12 | 0.09 | 0.15 |
| | sV | None | 21.85 | 6.80 | 3.35 | 9.85 |
| | | Simplified | 4.59 | 1.80 | 0.83 | 2.24 |
| | | Detailed | 1.27 | 0.68 | 0.38 | 0.71 |
| | sO | None | 6.22 | 5.67 | 2.69 | 4.60 |
| | | Simplified | 1.95 | 2.09 | 0.78 | 1.48 |
| | | Detailed | 0.91 | 0.92 | 0.46 | 0.71 |
| | rQ | None | 0.02 | 0.09 | 0.14 | 0.09 |
| | | Simplified | 0.01 | 0.01 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | None | 0.03 | 0.04 | 0.13 | 0.07 |
| | | Simplified | 0.02 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.00 | 0.02 | 0.02 |
| | rV | None | 0.03 | 0.05 | 0.04 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.09 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.01 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |

##### Table 46: Statistical results for AQuA using Llama-3.1-8B on correct responses.
#### C.1.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 144 and Table 52.
C.2 Pre-trained LLM on Irrelevant Responses
C.2.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 145, Table 53), MATH-Counting (Figure 146, Table 54), MATH-Geometry (Figure 147, Table 55). The visualizations and statistical results on the other reasoning tasks are listed below.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 4.91 | 6.38 | 2.62 | 4.37 |
| | | Simplified | 0.79 | 0.67 | 0.23 | 0.53 |
| | | Detailed | 0.47 | 0.37 | 0.18 | 0.31 |
| | sK | None | 2.38 | 1.98 | 1.67 | 2.02 |
| | | Simplified | 0.29 | 0.22 | 0.16 | 0.21 |
| | | Detailed | 0.21 | 0.11 | 0.08 | 0.13 |
| | sV | None | 19.12 | 6.14 | 5.86 | 9.85 |
| | | Simplified | 1.63 | 1.07 | 0.59 | 1.07 |
| | | Detailed | 1.13 | 0.60 | 0.35 | 0.64 |
| | sO | None | 5.29 | 5.43 | 2.82 | 4.56 |
| | | Simplified | 1.26 | 1.31 | 0.58 | 0.99 |
| | | Detailed | 0.73 | 0.79 | 0.43 | 0.61 |
| | rQ | None | 0.02 | 0.08 | 0.11 | 0.07 |
| | | Simplified | 0.01 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.05 | 0.02 |
| | rK | None | 0.02 | 0.05 | 0.03 | 0.03 |
| | | Simplified | 0.02 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.02 | 0.00 | 0.02 | 0.02 |
| | rV | None | 0.03 | 0.03 | 0.02 | 0.03 |
| | | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | None | 0.03 | 0.03 | 0.07 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.02 |

##### Table 47: Statistical results for GSM8K using Llama-3.1-8B on correct responses.
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| StrategyQA | sQ | None | 4.71 | 2.43 | 0.93 | 2.65 |
| | | Simplified | 0.84 | 0.76 | 0.40 | 0.64 |
| | | Detailed | 0.57 | 0.50 | 0.22 | 0.40 |
| | sK | None | 2.80 | 1.25 | 1.17 | 1.77 |
| | | Simplified | 0.40 | 0.24 | 0.19 | 0.27 |
| | | Detailed | 0.23 | 0.13 | 0.11 | 0.15 |
| | sV | None | 26.10 | 6.82 | 2.59 | 10.73 |
| | | Simplified | 3.55 | 1.13 | 0.81 | 1.66 |
| | | Detailed | 1.25 | 0.77 | 0.55 | 0.80 |
| | sO | None | 10.88 | 5.17 | 3.06 | 5.94 |
| | | Simplified | 1.56 | 1.26 | 1.06 | 1.22 |
| | | Detailed | 0.95 | 1.10 | 0.79 | 0.90 |
| | rQ | None | 0.02 | 0.05 | 0.08 | 0.05 |
| | | Simplified | 0.01 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | None | 0.02 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.01 | 0.01 | 0.02 | 0.01 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.01 |
| | rV | None | 0.04 | 0.03 | 0.04 | 0.04 |
| | | Simplified | 0.04 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
| | rO | None | 0.03 | 0.03 | 0.03 | 0.03 |
| | | Simplified | 0.01 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.02 |

##### Table 48: Statistical results for StrategyQA using Llama-3.1-8B on correct responses.
#### The visualizations and statistical results on the other reasoning tasks: AQuA (Figure 148, Table 56), GSM8K (Figure 149, Table 57), StrategyQA (Figure 150, Table 58), ECQA (Figure 151, Table 59), CREAK (Figure 152, Table 60), Sensemaking (Figure 153, Table 61).
C.2.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 154 and Table 62.
-----
##### Figure 58: Visualization for AQuA using Llama-3.1-8B on correct responses.
Figure 59: Visualization for GSM8K using Llama-3.1-8B on correct responses.
-----
##### Figure 60: Visualization for StrategyQA using Llama-3.1-8B on correct responses.
Figure 61: Visualization for ECQA using Llama-3.1-8B on correct responses.
-----
##### Figure 62: Visualization for CREAK using Llama-3.1-8B on correct responses.
Figure 63: Visualization for Sensemaking using Llama-3.1-8B on correct responses.
-----
##### Figure 64: Visualization for Wiki tasks using Llama-3.1-8B on correct responses.
-----
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 4.61 | 6.54 | 3.41 | 4.65 |
| | | Simplified | 1.46 | 1.46 | 0.54 | 1.09 |
| | | Detailed | 0.68 | 0.53 | 0.26 | 0.46 |
| | sK | None | 2.22 | 2.63 | 2.83 | 2.55 |
| | | Simplified | 0.58 | 0.48 | 0.37 | 0.48 |
| | | Detailed | 0.31 | 0.14 | 0.13 | 0.19 |
| | sV | None | 25.94 | 8.37 | 3.03 | 11.13 |
| | | Simplified | 5.68 | 2.02 | 1.03 | 2.70 |
| | | Detailed | 1.56 | 0.77 | 0.57 | 0.89 |
| | sO | None | 8.32 | 7.22 | 2.38 | 5.61 |
| | | Simplified | 1.97 | 2.34 | 1.06 | 1.68 |
| | | Detailed | 1.10 | 1.20 | 0.72 | 0.94 |
| | rQ | None | 0.02 | 0.08 | 0.13 | 0.08 |
| | | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.00 | 0.06 | 0.03 |
| | rK | None | 0.03 | 0.03 | 0.09 | 0.05 |
| | | Simplified | 0.02 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rV | None | 0.03 | 0.04 | 0.03 | 0.03 |
| | | Simplified | 0.03 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
| | rO | None | 0.02 | 0.04 | 0.05 | 0.04 |
| | | Simplified | 0.02 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.02 |
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 7.39 | 6.32 | 2.14 | 4.94 |
| | | Simplified | 1.46 | 1.12 | 0.37 | 0.92 |
| | | Detailed | 0.83 | 0.58 | 0.38 | 0.56 |
| | sK | None | 3.23 | 3.17 | 2.97 | 3.05 |
| | | Simplified | 0.65 | 0.30 | 0.33 | 0.42 |
| | | Detailed | 0.34 | 0.15 | 0.16 | 0.21 |
| | sV | None | 31.64 | 12.53 | 4.07 | 14.50 |
| | | Simplified | 5.53 | 1.53 | 0.97 | 2.45 |
| | | Detailed | 1.78 | 0.94 | 0.69 | 1.06 |
| | sO | None | 8.18 | 9.24 | 2.77 | 6.16 |
| | | Simplified | 1.42 | 1.78 | 0.99 | 1.30 |
| | | Detailed | 1.17 | 1.35 | 0.88 | 1.06 |
| | rQ | None | 0.02 | 0.08 | 0.11 | 0.07 |
| | | Simplified | 0.01 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.10 | 0.04 |
| | rK | None | 0.02 | 0.05 | 0.05 | 0.04 |
| | | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.03 | 0.02 |
| | rV | None | 0.02 | 0.03 | 0.03 | 0.03 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.07 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.02 |

##### Table 49: Statistical results for ECQA using Llama-3.1-8B on correct responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| CREAK | sQ | None | 5.05 | 5.06 | 3.79 | 4.40 |
| | | Simplified | 1.05 | 1.06 | 0.44 | 0.80 |
| | | Detailed | 0.65 | 0.56 | 0.31 | 0.47 |
| | sK | None | 2.74 | 2.39 | 4.00 | 2.97 |
| | | Simplified | 0.50 | 0.29 | 0.26 | 0.34 |
| | | Detailed | 0.30 | 0.16 | 0.12 | 0.19 |
| | sV | None | 26.92 | 10.35 | 4.35 | 12.72 |
| | | Simplified | 4.74 | 1.48 | 0.89 | 2.19 |
| | | Detailed | 1.83 | 0.84 | 0.65 | 1.02 |
| | sO | None | 9.03 | 8.36 | 3.81 | 6.62 |
| | | Simplified | 1.44 | 1.65 | 1.05 | 1.29 |
| | | Detailed | 1.08 | 1.28 | 0.87 | 1.01 |
| | rQ | None | 0.02 | 0.06 | 0.14 | 0.07 |
| | | Simplified | 0.01 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.07 | 0.03 |
| | rK | None | 0.02 | 0.03 | 0.08 | 0.04 |
| | | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rV | None | 0.03 | 0.04 | 0.04 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.05 | 0.04 |
| | | Simplified | 0.01 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.01 |

##### Table 50: Statistical results for CREAK using Llama-3.1-8B on correct responses.
##### Table 51: Statistical results for Sensemaking using Llama-3.1-8B on correct responses.
| Dataset | Curve | Setting | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Wiki | sQ | Len 100 | 0.88 | 0.76 | 1.12 | 0.92 |
| | | Len 500 | 0.71 | 0.71 | 0.78 | 0.71 |
| | | Len 1000 | 0.62 | 0.65 | 0.59 | 0.59 |
| | | Unpopular | 1.33 | 0.79 | 0.72 | 0.92 |
| | sK | Len 100 | 0.36 | 0.23 | 0.30 | 0.31 |
| | | Len 500 | 0.28 | 0.22 | 0.25 | 0.25 |
| | | Len 1000 | 0.24 | 0.18 | 0.20 | 0.20 |
| | | Unpopular | 0.45 | 0.25 | 0.21 | 0.32 |
| | sV | Len 100 | 4.19 | 0.97 | 0.89 | 1.89 |
| | | Len 500 | 2.55 | 1.09 | 0.87 | 1.41 |
| | | Len 1000 | 1.96 | 1.00 | 0.76 | 1.17 |
| | | Unpopular | 10.07 | 1.41 | 0.72 | 3.68 |
| | sO | Len 100 | 1.88 | 1.18 | 1.04 | 1.34 |
| | | Len 500 | 1.64 | 1.34 | 1.06 | 1.29 |
| | | Len 1000 | 1.53 | 1.27 | 0.98 | 1.19 |
| | | Unpopular | 3.14 | 1.12 | 0.85 | 1.63 |
| | rQ | Len 100 | 0.02 | 0.01 | 0.06 | 0.03 |
| | | Len 500 | 0.01 | 0.01 | 0.05 | 0.03 |
| | | Len 1000 | 0.01 | 0.01 | 0.05 | 0.02 |
| | | Unpopular | 0.02 | 0.01 | 0.04 | 0.03 |
| | rK | Len 100 | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Len 500 | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Len 1000 | 0.01 | 0.00 | 0.01 | 0.01 |
| | | Unpopular | 0.02 | 0.02 | 0.03 | 0.02 |
| | rV | Len 100 | 0.03 | 0.02 | 0.01 | 0.02 |
| | | Len 500 | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Len 1000 | 0.02 | 0.01 | 0.01 | 0.01 |
| | | Unpopular | 0.04 | 0.02 | 0.03 | 0.03 |
| | rO | Len 100 | 0.01 | 0.03 | 0.02 | 0.02 |
| | | Len 500 | 0.01 | 0.02 | 0.02 | 0.01 |
| | | Len 1000 | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Unpopular | 0.01 | 0.04 | 0.02 | 0.02 |

##### Table 52: Statistical results for Wiki using Llama-3.1-8B on correct responses.
-----
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 1.31 | 1.14 | 0.82 | 1.03 |
| | | Detailed | 0.68 | 0.55 | 0.30 | 0.48 |
| | sK | Simplified | 0.47 | 0.39 | 0.24 | 0.36 |
| | | Detailed | 0.24 | 0.16 | 0.11 | 0.16 |
| | sV | Simplified | 3.38 | 1.55 | 0.71 | 1.70 |
| | | Detailed | 1.48 | 0.79 | 0.41 | 0.82 |
| | sO | Simplified | 1.75 | 1.74 | 0.89 | 1.36 |
| | | Detailed | 0.82 | 1.04 | 0.52 | 0.74 |
| | rQ | Simplified | 0.01 | 0.02 | 0.08 | 0.04 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | Simplified | 0.01 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.01 |
| | rV | Simplified | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 53: Statistical results for MATH-Algebra using Llama-3.1-8B on irrelevant responses.
Figure 65: Visualization for MATH-Algebra using Llama-3.1-8B on irrelevant responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 1.67 | 1.26 | 1.12 | 1.28 |
| | | Detailed | 0.83 | 0.54 | 0.36 | 0.54 |
| | sK | Simplified | 0.56 | 0.44 | 0.33 | 0.43 |
| | | Detailed | 0.30 | 0.17 | 0.15 | 0.20 |
| | sV | Simplified | 4.09 | 1.77 | 0.69 | 1.97 |
| | | Detailed | 1.77 | 0.79 | 0.47 | 0.93 |
| | sO | Simplified | 1.99 | 1.81 | 0.94 | 1.48 |
| | | Detailed | 0.96 | 1.05 | 0.63 | 0.83 |
| | rQ | Simplified | 0.01 | 0.03 | 0.10 | 0.05 |
| | | Detailed | 0.01 | 0.02 | 0.07 | 0.03 |
| | rK | Simplified | 0.01 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.01 |
| | rV | Simplified | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 54: Statistical results for MATH-Counting using Llama-3.1-8B on irrelevant responses.
##### Figure 66: Visualization for MATH-Counting using Llama-3.1-8B on irrelevant responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 1.50 | 1.43 | 1.20 | 1.29 |
| | | Detailed | 0.86 | 0.52 | 0.47 | 0.57 |
| | sK | Simplified | 0.52 | 0.47 | 0.36 | 0.43 |
| | | Detailed | 0.27 | 0.17 | 0.23 | 0.22 |
| | sV | Simplified | 3.97 | 2.00 | 0.83 | 2.08 |
| | | Detailed | 2.19 | 0.85 | 0.54 | 1.08 |
| | sO | Simplified | 2.29 | 2.12 | 1.10 | 1.68 |
| | | Detailed | 1.21 | 1.16 | 0.72 | 0.96 |
| | rQ | Simplified | 0.01 | 0.02 | 0.09 | 0.04 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.03 | 0.02 |
| | rV | Simplified | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 55: Statistical results for MATH-Geometry using Llama-3.1-8B on irrelevant responses.
-----
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 5.95 | 7.50 | 3.86 | 5.43 |
| | | Simplified | 1.50 | 1.00 | 0.72 | 0.99 |
| | | Detailed | 0.84 | 0.52 | 0.32 | 0.52 |
| | sK | None | 2.66 | 2.20 | 2.29 | 2.40 |
| | | Simplified | 0.45 | 0.35 | 0.23 | 0.33 |
| | | Detailed | 0.30 | 0.17 | 0.13 | 0.19 |
| | sV | None | 21.03 | 5.28 | 5.89 | 10.00 |
| | | Simplified | 3.16 | 1.43 | 0.68 | 1.60 |
| | | Detailed | 1.88 | 0.73 | 0.49 | 0.95 |
| | sO | None | 5.37 | 5.16 | 2.62 | 4.25 |
| | | Simplified | 1.50 | 1.52 | 0.89 | 1.22 |
| | | Detailed | 0.88 | 1.00 | 0.67 | 0.80 |
| | rQ | None | 0.02 | 0.07 | 0.10 | 0.07 |
| | | Simplified | 0.01 | 0.02 | 0.08 | 0.04 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | None | 0.02 | 0.05 | 0.03 | 0.03 |
| | | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.01 |
| | rV | None | 0.02 | 0.03 | 0.02 | 0.03 |
| | | Simplified | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rO | None | 0.03 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.01 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Figure 67: Visualization for MATH-Geometry using Llama-3.1-8B on irrelevant responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| AQuA | sQ | None | 4.98 | 4.67 | 5.91 | 5.20 |
| | | Simplified | 1.90 | 1.47 | 1.07 | 1.40 |
| | | Detailed | 0.95 | 0.61 | 0.52 | 0.66 |
| | sK | None | 2.50 | 1.54 | 3.45 | 2.72 |
| | | Simplified | 0.68 | 0.47 | 0.34 | 0.48 |
| | | Detailed | 0.36 | 0.19 | 0.18 | 0.24 |
| | sV | None | 23.13 | 5.89 | 3.67 | 10.02 |
| | | Simplified | 6.41 | 1.96 | 0.93 | 2.81 |
| | | Detailed | 2.42 | 0.86 | 0.54 | 1.17 |
| | sO | None | 5.71 | 5.32 | 2.75 | 4.42 |
| | | Simplified | 1.95 | 2.08 | 1.14 | 1.63 |
| | | Detailed | 1.04 | 1.16 | 0.72 | 0.92 |
| | rQ | None | 0.02 | 0.08 | 0.12 | 0.07 |
| | | Simplified | 0.01 | 0.02 | 0.07 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.07 | 0.03 |
| | rK | None | 0.02 | 0.04 | 0.13 | 0.07 |
| | | Simplified | 0.01 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.01 |
| | rV | None | 0.02 | 0.05 | 0.04 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rO | None | 0.02 | 0.04 | 0.08 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |

##### Table 56: Statistical results for AQuA using Llama-3.1-8B on irrelevant responses.
##### Table 57: Statistical results for GSM8K using Llama-3.1-8B on irrelevant responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| StrategyQA | sQ | None | 5.11 | 3.77 | 1.10 | 3.22 |
| | | Simplified | 1.35 | 0.92 | 0.74 | 0.96 |
| | | Detailed | 0.90 | 0.50 | 0.38 | 0.56 |
| | sK | None | 2.53 | 1.38 | 1.11 | 1.73 |
| | | Simplified | 0.68 | 0.37 | 0.29 | 0.43 |
| | | Detailed | 0.36 | 0.21 | 0.16 | 0.24 |
| | sV | None | 24.23 | 6.14 | 2.28 | 9.87 |
| | | Simplified | 5.56 | 1.39 | 0.94 | 2.41 |
| | | Detailed | 2.46 | 0.87 | 0.70 | 1.24 |
| | sO | None | 9.31 | 4.86 | 2.60 | 5.28 |
| | | Simplified | 2.32 | 1.22 | 1.22 | 1.53 |
| | | Detailed | 1.43 | 1.01 | 0.97 | 1.09 |
| | rQ | None | 0.02 | 0.06 | 0.08 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | None | 0.02 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.01 | 0.02 | 0.02 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.01 |
| | rV | None | 0.04 | 0.03 | 0.04 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.02 | 0.03 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
| | rO | None | 0.03 | 0.03 | 0.04 | 0.03 |
| | | Simplified | 0.01 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |

##### Table 58: Statistical results for StrategyQA using Llama-3.1-8B on irrelevant responses.
-----
##### Figure 68: Visualization for AQuA using Llama-3.1-8B on irrelevant responses.
Figure 69: Visualization for GSM8K using Llama-3.1-8B on irrelevant responses.
-----
##### Figure 70: Visualization for StrategyQA using Llama-3.1-8B on irrelevant responses.
Figure 71: Visualization for ECQA using Llama-3.1-8B on irrelevant responses.
-----
##### Figure 72: Visualization for CREAK using Llama-3.1-8B on irrelevant responses.
Figure 73: Visualization for Sensemaking using Llama-3.1-8B on irrelevant responses.
-----
##### Figure 74: Visualization for Wiki tasks using Llama-3.1-8B on irrelevant responses.
-----
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 4.73 | 6.45 | 3.71 | 4.75 |
| | | Simplified | 2.06 | 1.95 | 0.76 | 1.52 |
| | | Detailed | 1.00 | 0.61 | 0.57 | 0.69 |
| | sK | None | 2.29 | 2.57 | 2.73 | 2.51 |
| | | Simplified | 0.84 | 0.76 | 0.32 | 0.64 |
| | | Detailed | 0.35 | 0.24 | 0.21 | 0.26 |
| | sV | None | 26.46 | 8.03 | 3.20 | 11.28 |
| | | Simplified | 7.69 | 2.40 | 1.11 | 3.41 |
| | | Detailed | 1.97 | 0.88 | 0.73 | 1.12 |
| | sO | None | 7.81 | 6.87 | 2.25 | 5.46 |
| | | Simplified | 2.37 | 2.30 | 1.35 | 1.94 |
| | | Detailed | 1.02 | 1.22 | 0.93 | 1.01 |
| | rQ | None | 0.02 | 0.06 | 0.12 | 0.07 |
| | | Simplified | 0.02 | 0.02 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.10 | 0.05 |
| | rK | None | 0.03 | 0.05 | 0.11 | 0.06 |
| | | Simplified | 0.02 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.04 | 0.02 |
| | rV | None | 0.03 | 0.04 | 0.03 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rO | None | 0.02 | 0.04 | 0.06 | 0.04 |
| | | Simplified | 0.02 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 6.06 | 3.84 | 3.92 | 4.40 |
| | | Simplified | 1.68 | 1.13 | 0.86 | 1.16 |
| | | Detailed | 1.09 | 0.52 | 0.53 | 0.68 |
| | sK | None | 2.57 | 0.95 | 1.93 | 1.82 |
| | | Simplified | 0.65 | 0.36 | 0.33 | 0.44 |
| | | Detailed | 0.38 | 0.25 | 0.23 | 0.28 |
| | sV | None | 24.55 | 4.13 | 2.53 | 9.50 |
| | | Simplified | 5.98 | 1.41 | 1.04 | 2.55 |
| | | Detailed | 2.33 | 0.92 | 0.79 | 1.27 |
| | sO | None | 5.68 | 4.69 | 2.78 | 4.29 |
| | | Simplified | 1.55 | 1.35 | 1.23 | 1.32 |
| | | Detailed | 1.13 | 1.13 | 1.02 | 1.05 |
| | rQ | None | 0.02 | 0.04 | 0.08 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.06 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.09 | 0.04 |
| | rK | None | 0.02 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.03 | 0.02 |
| | rV | None | 0.02 | 0.03 | 0.03 | 0.03 |
| | | Simplified | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rO | None | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Simplified | 0.01 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |

##### Table 59: Statistical results for ECQA using Llama-3.1-8B on irrelevant responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| CREAK | sQ | None | 6.75 | 5.46 | 4.91 | 5.54 |
| | | Simplified | 1.52 | 1.27 | 0.80 | 1.15 |
| | | Detailed | 0.96 | 0.51 | 0.42 | 0.60 |
| | sK | None | 3.16 | 2.03 | 3.97 | 3.10 |
| | | Simplified | 0.67 | 0.52 | 0.32 | 0.49 |
| | | Detailed | 0.36 | 0.21 | 0.22 | 0.26 |
| | sV | None | 31.92 | 9.46 | 4.11 | 13.71 |
| | | Simplified | 6.64 | 1.63 | 0.94 | 2.76 |
| | | Detailed | 2.65 | 0.85 | 0.67 | 1.29 |
| | sO | None | 9.59 | 8.89 | 3.67 | 7.20 |
| | | Simplified | 2.17 | 1.58 | 1.29 | 1.60 |
| | | Detailed | 1.35 | 1.05 | 0.93 | 1.07 |
| | rQ | None | 0.02 | 0.07 | 0.11 | 0.07 |
| | | Simplified | 0.02 | 0.01 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | None | 0.02 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.03 | 0.02 |
| | rV | None | 0.02 | 0.04 | 0.04 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
| | rO | None | 0.02 | 0.04 | 0.06 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |

##### Table 60: Statistical results for CREAK using Llama-3.1-8B on irrelevant responses.
##### Table 61: Statistical results for Sensemaking using Llama-3.1-8B on irrelevant responses.
| Dataset | Curve | Setting | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Wiki | sQ | Len 100 | 1.21 | 0.85 | 1.02 | 1.02 |
| | | Len 500 | 0.99 | 0.61 | 0.65 | 0.73 |
| | | Len 1000 | 0.81 | 0.53 | 0.48 | 0.59 |
| | | Unpopular | 1.53 | 1.00 | 1.08 | 1.18 |
| | sK | Len 100 | 0.57 | 0.27 | 0.32 | 0.39 |
| | | Len 500 | 0.44 | 0.20 | 0.24 | 0.29 |
| | | Len 1000 | 0.37 | 0.17 | 0.19 | 0.24 |
| | | Unpopular | 0.75 | 0.30 | 0.41 | 0.49 |
| | sV | Len 100 | 6.03 | 1.13 | 0.84 | 2.47 |
| | | Len 500 | 3.84 | 0.97 | 0.76 | 1.71 |
| | | Len 1000 | 3.06 | 0.86 | 0.67 | 1.41 |
| | | Unpopular | 11.89 | 1.57 | 0.88 | 4.36 |
| | sO | Len 100 | 2.61 | 1.20 | 1.03 | 1.58 |
| | | Len 500 | 2.25 | 1.11 | 0.97 | 1.40 |
| | | Len 1000 | 2.11 | 1.00 | 0.89 | 1.29 |
| | | Unpopular | 4.04 | 1.44 | 1.22 | 2.18 |
| | rQ | Len 100 | 0.02 | 0.01 | 0.06 | 0.03 |
| | | Len 500 | 0.02 | 0.01 | 0.06 | 0.03 |
| | | Len 1000 | 0.02 | 0.01 | 0.05 | 0.03 |
| | | Unpopular | 0.02 | 0.01 | 0.05 | 0.03 |
| | rK | Len 100 | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Len 500 | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Len 1000 | 0.01 | 0.00 | 0.01 | 0.01 |
| | | Unpopular | 0.02 | 0.01 | 0.01 | 0.01 |
| | rV | Len 100 | 0.03 | 0.02 | 0.01 | 0.02 |
| | | Len 500 | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Len 1000 | 0.02 | 0.01 | 0.01 | 0.01 |
| | | Unpopular | 0.04 | 0.02 | 0.03 | 0.03 |
| | rO | Len 100 | 0.01 | 0.02 | 0.02 | 0.02 |
| | | Len 500 | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Len 1000 | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Unpopular | 0.01 | 0.03 | 0.01 | 0.02 |

##### Table 62: Statistical results for Wiki using Llama-3.1-8B on irrelevant responses.
-----
#### C.3 Instructed LLM on Correct Responses
#### C.3.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 155, Table 63), MATH-Counting (Figure 156, Table 64), MATH-Geometry (Figure 157, Table 65).
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 0.74 | 0.89 | 0.33 | 0.61 |
| | | Detailed | 0.34 | 0.43 | 0.20 | 0.30 |
| | sK | Simplified | 0.36 | 0.33 | 0.11 | 0.27 |
| | | Detailed | 0.16 | 0.12 | 0.07 | 0.11 |
| | sV | Simplified | 2.20 | 1.30 | 0.49 | 1.22 |
| | | Detailed | 0.77 | 0.55 | 0.27 | 0.50 |
| | sO | Simplified | 1.15 | 1.50 | 0.55 | 0.98 |
| | | Detailed | 0.51 | 0.70 | 0.32 | 0.48 |
| | rQ | Simplified | 0.01 | 0.01 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.07 | 0.03 |
| | rK | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.00 | 0.03 | 0.02 |
| | rV | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 63: Statistical results for MATH-Algebra using Llama-3.1-8B-Instruct on correct responses.
Figure 75: Visualization for MATH-Algebra using Llama-3.1-8B-Instruct on correct responses.
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 158, Table 66), GSM8K (Figure 159, Table 67), StrategyQA (Figure 160, Table 68), ECQA (Figure 161, Table 69), CREAK (Figure 162, Table 70), Sensemaking (Figure 163, Table 71).
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 0.91 | 1.00 | 0.40 | 0.72 |
| | | Detailed | 0.41 | 0.48 | 0.21 | 0.34 |
| | sK | Simplified | 0.42 | 0.36 | 0.10 | 0.29 |
| | | Detailed | 0.18 | 0.14 | 0.06 | 0.12 |
| | sV | Simplified | 2.32 | 1.43 | 0.54 | 1.30 |
| | | Detailed | 0.85 | 0.63 | 0.34 | 0.57 |
| | sO | Simplified | 1.21 | 1.73 | 0.58 | 1.08 |
| | | Detailed | 0.56 | 0.86 | 0.40 | 0.56 |
| | rQ | Simplified | 0.01 | 0.01 | 0.06 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.08 | 0.03 |
| | rK | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.00 | 0.02 | 0.01 |
| | rV | Simplified | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 64: Statistical results for MATH-Counting using Llama-3.1-8B-Instruct on correct responses.
Figure 76: Visualization for MATH-Counting using Llama-3.1-8B-Instruct on correct responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 1.07 | 1.09 | 0.69 | 0.89 |
| | | Detailed | 0.45 | 0.55 | 0.20 | 0.37 |
| | sK | Simplified | 0.48 | 0.36 | 0.25 | 0.34 |
| | | Detailed | 0.20 | 0.15 | 0.07 | 0.13 |
| | sV | Simplified | 2.84 | 1.63 | 0.64 | 1.57 |
| | | Detailed | 0.97 | 0.70 | 0.34 | 0.63 |
| | sO | Simplified | 1.51 | 1.77 | 0.75 | 1.22 |
| | | Detailed | 0.64 | 0.88 | 0.43 | 0.61 |
| | rQ | Simplified | 0.01 | 0.01 | 0.07 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.03 |
| | rK | Simplified | 0.02 | 0.00 | 0.02 | 0.01 |
| | | Detailed | 0.01 | 0.00 | 0.02 | 0.01 |
| | rV | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 65: Statistical results for MATH-Geometry using Llama-3.1-8B-Instruct on correct responses.
-----
##### Figure 77: Visualization for MATH-Geometry using Llama-3.1-8B-Instruct on correct responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| AQuA | sQ | None | 19.05 | 15.54 | 10.80 | 14.54 |
| | | Simplified | 2.64 | 2.78 | 1.42 | 2.12 |
| | | Detailed | 0.43 | 0.52 | 0.29 | 0.38 |
| | sK | None | 10.39 | 11.00 | 3.13 | 7.80 |
| | | Simplified | 1.40 | 1.26 | 0.65 | 1.05 |
| | | Detailed | 0.20 | 0.16 | 0.09 | 0.14 |
| | sV | None | 63.48 | 30.54 | 8.67 | 30.72 |
| | | Simplified | 8.26 | 4.05 | 1.24 | 4.07 |
| | | Detailed | 1.04 | 0.65 | 0.36 | 0.65 |
| | sO | None | 25.90 | 35.39 | 7.55 | 21.70 |
| | | Simplified | 3.62 | 4.87 | 1.26 | 3.00 |
| | | Detailed | 0.63 | 0.94 | 0.42 | 0.62 |
| | rQ | None | 0.02 | 0.07 | 0.10 | 0.06 |
| | | Simplified | 0.01 | 0.03 | 0.07 | 0.04 |
| | | Detailed | 0.01 | 0.01 | 0.09 | 0.04 |
| | rK | None | 0.03 | 0.04 | 0.06 | 0.04 |
| | | Simplified | 0.03 | 0.03 | 0.03 | 0.03 |
| | | Detailed | 0.02 | 0.01 | 0.03 | 0.02 |
| | rV | None | 0.02 | 0.03 | 0.03 | 0.03 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.08 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |

##### Table 66: Statistical results for AQuA using Llama-3.1-8B-Instruct on correct responses.
#### C.3.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 164 and Table 72.
#### C.4 Instructed LLM on Irrelevant Responses
#### C.4.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 165, Table 73), MATH-Counting (Figure 166, Table 74), MATH-Geometry (Figure 167, Table 75).
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 16.19 | 19.18 | 15.89 | 16.45 |
| | | Simplified | 1.30 | 1.53 | 1.04 | 1.20 |
| | | Detailed | 0.42 | 0.51 | 0.34 | 0.40 |
| | sK | None | 9.68 | 11.73 | 5.44 | 8.58 |
| | | Simplified | 0.68 | 0.65 | 0.30 | 0.51 |
| | | Detailed | 0.19 | 0.17 | 0.09 | 0.14 |
| | sV | None | 62.84 | 37.06 | 13.72 | 34.67 |
| | | Simplified | 3.26 | 2.12 | 0.73 | 1.89 |
| | | Detailed | 1.00 | 0.67 | 0.38 | 0.64 |
| | sO | None | 27.31 | 42.87 | 11.38 | 25.32 |
| | | Simplified | 1.75 | 2.78 | 0.78 | 1.61 |
| | | Detailed | 0.59 | 1.04 | 0.42 | 0.63 |
| | rQ | None | 0.01 | 0.03 | 0.08 | 0.04 |
| | | Simplified | 0.01 | 0.03 | 0.10 | 0.05 |
| | | Detailed | 0.01 | 0.02 | 0.10 | 0.05 |
| | rK | None | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Simplified | 0.03 | 0.04 | 0.03 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.03 | 0.02 |
| | rV | None | 0.02 | 0.03 | 0.03 | 0.03 |
| | | Simplified | 0.02 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.02 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.08 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.02 | 0.02 | 0.01 |
##### Table 67: Statistical results for GSM8K using Llama-3.1-8B-Instruct on correct responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| StrategyQA | sQ | None | 13.26 | 18.96 | 5.16 | 11.85 |
| | | Simplified | 1.85 | 3.19 | 1.54 | 2.05 |
| | | Detailed | 0.49 | 0.68 | 0.31 | 0.45 |
| | sK | None | 7.14 | 7.75 | 4.61 | 6.34 |
| | | Simplified | 0.91 | 1.33 | 0.39 | 0.82 |
| | | Detailed | 0.20 | 0.18 | 0.09 | 0.15 |
| | sV | None | 58.20 | 27.11 | 11.06 | 29.17 |
| | | Simplified | 6.78 | 3.32 | 0.95 | 3.36 |
| | | Detailed | 1.15 | 0.82 | 0.56 | 0.80 |
| | sO | None | 23.40 | 27.11 | 7.11 | 17.76 |
| | | Simplified | 3.26 | 3.89 | 1.32 | 2.60 |
| | | Detailed | 0.76 | 1.14 | 0.73 | 0.82 |
| | rQ | None | 0.01 | 0.04 | 0.09 | 0.05 |
| | | Simplified | 0.01 | 0.04 | 0.08 | 0.05 |
| | | Detailed | 0.01 | 0.01 | 0.09 | 0.04 |
| | rK | None | 0.02 | 0.02 | 0.03 | 0.02 |
| | | Simplified | 0.02 | 0.04 | 0.02 | 0.03 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.01 |
| | rV | None | 0.03 | 0.03 | 0.03 | 0.03 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.08 | 0.05 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |

##### Table 68: Statistical results for StrategyQA using Llama-3.1-8B-Instruct on correct responses.
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 168, Table 76), GSM8K (Figure 169, Table 77), StrategyQA (Figure 170, Table 78), ECQA (Figure 171, Table 79), CREAK (Figure 172, Table 80), Sensemaking (Figure 173, Table 81).
#### C.4.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 174 and Table 82.
-----
##### Figure 78: Visualization for AQuA using Llama-3.1-8B-Instruct on correct responses.
Figure 79: Visualization for GSM8K using Llama-3.1-8B-Instruct on correct responses.
-----
##### Figure 80: Visualization for StrategyQA using Llama-3.1-8B-Instruct on correct responses.
Figure 81: Visualization for ECQA using Llama-3.1-8B-Instruct on correct responses.
-----
##### Figure 82: Visualization for CREAK using Llama-3.1-8B-Instruct on correct responses.
Figure 83: Visualization for Sensemaking using Llama-3.1-8B-Instruct on correct responses.
-----
##### Figure 84: Visualization for Wiki tasks using Llama-3.1-8B-Instruct on correct responses.
-----
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 3.00 | 3.45 | 1.11 | 2.35 |
| | | Simplified | 3.50 | 3.96 | 2.41 | 3.14 |
| | | Detailed | 0.62 | 0.76 | 0.34 | 0.53 |
| | sK | None | 1.54 | 1.18 | 1.46 | 1.43 |
| | | Simplified | 1.78 | 1.69 | 1.10 | 1.48 |
| | | Detailed | 0.26 | 0.22 | 0.10 | 0.18 |
| | sV | None | 8.05 | 4.41 | 3.69 | 5.14 |
| | | Simplified | 8.79 | 5.79 | 2.31 | 5.28 |
| | | Detailed | 1.29 | 0.88 | 0.57 | 0.87 |
| | sO | None | 4.33 | 6.16 | 2.23 | 4.04 |
| | | Simplified | 4.32 | 7.21 | 1.83 | 4.20 |
| | | Detailed | 0.82 | 1.33 | 0.70 | 0.88 |
| | rQ | None | 0.01 | 0.06 | 0.12 | 0.07 |
| | | Simplified | 0.01 | 0.03 | 0.09 | 0.04 |
| | | Detailed | 0.01 | 0.01 | 0.10 | 0.04 |
| | rK | None | 0.02 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.02 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rV | None | 0.02 | 0.03 | 0.04 | 0.03 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | None | 0.01 | 0.06 | 0.09 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.01 |
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 16.08 | 16.53 | 5.36 | 11.91 |
| | | Simplified | 3.02 | 3.43 | 1.70 | 2.55 |
| | | Detailed | 0.67 | 0.81 | 0.46 | 0.60 |
| | sK | None | 7.86 | 8.50 | 6.50 | 7.53 |
| | | Simplified | 1.47 | 1.54 | 0.77 | 1.21 |
| | | Detailed | 0.28 | 0.22 | 0.11 | 0.20 |
| | sV | None | 48.82 | 32.86 | 11.34 | 28.05 |
| | | Simplified | 7.47 | 4.70 | 1.48 | 4.19 |
| | | Detailed | 1.40 | 1.00 | 0.62 | 0.96 |
| | sO | None | 20.81 | 34.23 | 6.08 | 19.08 |
| | | Simplified | 3.46 | 5.65 | 1.38 | 3.24 |
| | | Detailed | 0.85 | 1.40 | 0.74 | 0.93 |
| | rQ | None | 0.01 | 0.05 | 0.11 | 0.06 |
| | | Simplified | 0.01 | 0.04 | 0.09 | 0.05 |
| | | Detailed | 0.01 | 0.01 | 0.12 | 0.05 |
| | rK | None | 0.02 | 0.02 | 0.05 | 0.03 |
| | | Simplified | 0.03 | 0.03 | 0.03 | 0.03 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rV | None | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.07 | 0.04 |
| | | Simplified | 0.01 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.01 |

##### Table 69: Statistical results for ECQA using Llama-3.1-8B-Instruct on correct responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| CREAK | sQ | None | 21.99 | 25.06 | 9.00 | 17.79 |
| | | Simplified | 3.10 | 4.30 | 2.72 | 3.21 |
| | | Detailed | 0.60 | 0.82 | 0.42 | 0.57 |
| | sK | None | 12.46 | 15.76 | 9.76 | 12.24 |
| | | Simplified | 1.48 | 1.73 | 1.17 | 1.41 |
| | | Detailed | 0.24 | 0.23 | 0.11 | 0.18 |
| | sV | None | 95.66 | 49.82 | 28.15 | 53.01 |
| | | Simplified | 9.71 | 4.98 | 1.85 | 5.06 |
| | | Detailed | 1.45 | 0.97 | 0.62 | 0.96 |
| | sO | None | 36.06 | 50.58 | 14.81 | 31.75 |
| | | Simplified | 4.36 | 6.26 | 1.66 | 3.79 |
| | | Detailed | 0.85 | 1.35 | 0.75 | 0.91 |
| | rQ | None | 0.01 | 0.04 | 0.10 | 0.05 |
| | | Simplified | 0.01 | 0.04 | 0.10 | 0.05 |
| | | Detailed | 0.01 | 0.01 | 0.10 | 0.04 |
| | rK | None | 0.03 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.02 | 0.03 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rV | None | 0.02 | 0.02 | 0.03 | 0.02 |
| | | Simplified | 0.02 | 0.02 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.08 | 0.05 |
| | | Simplified | 0.02 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |

##### Table 70: Statistical results for CREAK using Llama-3.1-8B-Instruct on correct responses.
##### Table 71: Statistical results for Sensemaking using Llama-3.1-8B-Instruct on correct responses.
| Dataset | Curve | Setting | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Wiki | sQ | Len 100 | 1.03 | 1.69 | 1.25 | 1.28 |
| | | Len 500 | 0.83 | 1.10 | 0.59 | 0.79 |
| | | Len 1000 | 0.72 | 0.90 | 0.38 | 0.62 |
| | | Unpopular | 1.36 | 1.65 | 0.99 | 1.29 |
| | sK | Len 100 | 0.47 | 0.51 | 0.29 | 0.43 |
| | | Len 500 | 0.37 | 0.32 | 0.15 | 0.28 |
| | | Len 1000 | 0.33 | 0.26 | 0.10 | 0.23 |
| | | Unpopular | 0.60 | 0.65 | 0.34 | 0.55 |
| | sV | Len 100 | 4.98 | 1.52 | 0.81 | 2.23 |
| | | Len 500 | 3.01 | 1.17 | 0.64 | 1.46 |
| | | Len 1000 | 2.38 | 0.98 | 0.55 | 1.19 |
| | | Unpopular | 12.54 | 2.64 | 0.75 | 4.80 |
| | sO | Len 100 | 2.44 | 2.12 | 1.12 | 1.82 |
| | | Len 500 | 1.80 | 1.49 | 0.87 | 1.31 |
| | | Len 1000 | 1.52 | 1.26 | 0.77 | 1.11 |
| | | Unpopular | 4.38 | 2.57 | 0.99 | 2.50 |
| | rQ | Len 100 | 0.02 | 0.02 | 0.08 | 0.04 |
| | | Len 500 | 0.02 | 0.02 | 0.07 | 0.04 |
| | | Len 1000 | 0.02 | 0.02 | 0.06 | 0.03 |
| | | Unpopular | 0.03 | 0.02 | 0.05 | 0.03 |
| | rK | Len 100 | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Len 500 | 0.02 | 0.02 | 0.01 | 0.02 |
| | | Len 1000 | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Unpopular | 0.03 | 0.03 | 0.02 | 0.03 |
| | rV | Len 100 | 0.03 | 0.02 | 0.02 | 0.02 |
| | | Len 500 | 0.03 | 0.01 | 0.01 | 0.02 |
| | | Len 1000 | 0.02 | 0.01 | 0.01 | 0.02 |
| | | Unpopular | 0.05 | 0.02 | 0.03 | 0.03 |
| | rO | Len 100 | 0.01 | 0.03 | 0.02 | 0.02 |
| | | Len 500 | 0.01 | 0.02 | 0.02 | 0.01 |
| | | Len 1000 | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Unpopular | 0.01 | 0.03 | 0.02 | 0.02 |

##### Table 72: Statistical results for Wiki using Llama-3.1-8B-Instruct on correct responses.
-----
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 1.46 | 1.62 | 0.75 | 1.20 |
| | | Detailed | 0.83 | 0.85 | 0.30 | 0.61 |
| | sK | Simplified | 0.61 | 0.58 | 0.21 | 0.46 |
| | | Detailed | 0.34 | 0.29 | 0.12 | 0.24 |
| | sV | Simplified | 3.80 | 2.13 | 0.80 | 2.03 |
| | | Detailed | 1.67 | 1.10 | 0.45 | 0.98 |
| | sO | Simplified | 2.11 | 2.45 | 0.90 | 1.72 |
| | | Detailed | 0.98 | 1.31 | 0.53 | 0.89 |
| | rQ | Simplified | 0.01 | 0.02 | 0.09 | 0.04 |
| | | Detailed | 0.01 | 0.01 | 0.07 | 0.03 |
| | rK | Simplified | 0.02 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rV | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.02 |
| | rO | Simplified | 0.01 | 0.01 | 0.02 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 73: Statistical results for MATH-Algebra using Llama-3.1-8B-Instruct on irrelevant responses.
Figure 85: Visualization for MATH-Algebra using Llama-3.1-8B-Instruct on irrelevant responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 1.62 | 1.72 | 0.84 | 1.32 |
| | | Detailed | 0.88 | 0.85 | 0.30 | 0.62 |
| | sK | Simplified | 0.61 | 0.63 | 0.22 | 0.47 |
| | | Detailed | 0.34 | 0.28 | 0.12 | 0.24 |
| | sV | Simplified | 3.70 | 2.20 | 0.81 | 2.03 |
| | | Detailed | 1.61 | 1.04 | 0.45 | 0.94 |
| | sO | Simplified | 2.10 | 2.53 | 0.87 | 1.74 |
| | | Detailed | 0.91 | 1.26 | 0.59 | 0.88 |
| | rQ | Simplified | 0.01 | 0.02 | 0.10 | 0.05 |
| | | Detailed | 0.01 | 0.01 | 0.08 | 0.04 |
| | rK | Simplified | 0.02 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.01 |
| | rV | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 74: Statistical results for MATH-Counting using Llama-3.1-8B-Instruct on irrelevant responses.
##### Figure 86: Visualization for MATH-Counting using Llama-3.1-8B-Instruct on irrelevant responses.
| Dataset | Curve | CoT | MAD Early | MAD Middle | MAD Last | MAD All |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 1.48 | 1.52 | 1.04 | 1.26 |
| | | Detailed | 0.75 | 0.68 | 0.32 | 0.54 |
| | sK | Simplified | 0.60 | 0.47 | 0.33 | 0.45 |
| | | Detailed | 0.27 | 0.19 | 0.19 | 0.22 |
| | sV | Simplified | 3.59 | 1.88 | 0.79 | 1.89 |
| | | Detailed | 1.54 | 0.79 | 0.48 | 0.86 |
| | sO | Simplified | 2.20 | 2.19 | 0.94 | 1.64 |
| | | Detailed | 1.05 | 1.09 | 0.64 | 0.88 |
| | rQ | Simplified | 0.01 | 0.02 | 0.09 | 0.04 |
| | | Detailed | 0.01 | 0.01 | 0.05 | 0.02 |
| | rK | Simplified | 0.02 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.03 | 0.02 |
| | rV | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.02 | 0.02 |
| | rO | Simplified | 0.01 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
##### Table 75: Statistical results for MATH-Geometry using Llama-3.1-8B-Instruct on irrelevant responses.
-----
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|GSM8K|sQ|None|15.97|18.81|16.46|16.48|
|||Simplified|2.50|2.44|1.29|1.93|
|||Detailed|1.14|1.14|0.65|0.90|
||sK|None|9.54|11.64|5.46|8.52|
|||Simplified|0.95|1.11|0.36|0.75|
|||Detailed|0.47|0.45|0.20|0.35|
||sV|None|61.59|36.45|13.56|34.03|
|||Simplified|4.89|3.93|0.94|2.93|
|||Detailed|2.19|1.62|0.61|1.33|
||sO|None|26.92|42.07|11.22|24.89|
|||Simplified|2.70|4.31|1.13|2.51|
|||Detailed|1.21|1.87|0.79|1.22|
||rQ|None|0.01|0.03|0.08|0.04|
|||Simplified|0.01|0.01|0.10|0.04|
|||Detailed|0.01|0.01|0.11|0.05|
||rK|None|0.03|0.04|0.05|0.04|
|||Simplified|0.02|0.01|0.03|0.02|
|||Detailed|0.02|0.02|0.03|0.02|
||rV|None|0.02|0.03|0.03|0.03|
|||Simplified|0.02|0.01|0.01|0.02|
|||Detailed|0.02|0.01|0.01|0.02|
||rO|None|0.02|0.04|0.08|0.05|
|||Simplified|0.01|0.01|0.02|0.01|
|||Detailed|0.01|0.01|0.02|0.01|
##### Figure 87: Visualization for MATH-Geometry using Llama-3.1-8B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 19.27 15.45 10.85 14.60
_sQ_ Simplified 3.21 3.18 1.62 2.50
Detailed 1.26 1.17 0.60 0.94
None 10.46 10.93 3.14 7.81
_sK_ Simplified 1.35 1.33 0.52 1.01
Detailed 0.50 0.45 0.19 0.36
None 64.14 30.36 8.73 30.88
_sV_ Simplified 9.00 4.50 1.13 4.35
Detailed 2.63 1.69 0.57 1.48
None 25.95 35.16 7.56 21.66
AQuA _sO_ Simplified 3.41 5.21 1.29 3.14
Detailed 1.22 1.90 0.73 1.23
None 0.02 0.07 0.10 0.06
_rQ_ Simplified 0.01 0.01 0.09 0.04
Detailed 0.01 0.01 0.09 0.04
None 0.03 0.04 0.06 0.04
_rK_ Simplified 0.02 0.02 0.03 0.02
Detailed 0.02 0.01 0.03 0.02
None 0.02 0.03 0.03 0.03
_rV_ Simplified 0.02 0.02 0.02 0.02
Detailed 0.02 0.01 0.01 0.02
None 0.02 0.04 0.08 0.05
_rO_ Simplified 0.01 0.02 0.02 0.01
Detailed 0.01 0.01 0.01 0.01
##### Table 76: Statistical results for AQuA using Llama-3.1-8B-Instruct on irrelevant responses.
##### Table 77: Statistical results for GSM8K using Llama-3.1-8B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 14.08 20.47 5.98 12.84
_sQ_ Simplified 1.97 2.88 1.63 2.06
Detailed 0.93 0.92 0.38 0.69
None 7.59 8.34 4.67 6.68
_sK_ Simplified 0.81 1.06 0.53 0.77
Detailed 0.34 0.36 0.19 0.28
None 61.77 28.52 11.64 30.85
_sV_ Simplified 8.18 3.55 1.25 3.91
Detailed 2.21 1.28 0.64 1.25
None 24.75 28.78 7.41 18.74
StrategyQA _sO_ Simplified 3.66 3.90 1.54 2.86
Detailed 1.36 1.40 0.88 1.16
None 0.01 0.04 0.09 0.05
_rQ_ Simplified 0.01 0.02 0.08 0.04
Detailed 0.02 0.01 0.06 0.03
None 0.02 0.02 0.03 0.02
_rK_ Simplified 0.02 0.01 0.02 0.02
Detailed 0.01 0.01 0.02 0.02
None 0.03 0.03 0.03 0.03
_rV_ Simplified 0.03 0.02 0.02 0.02
Detailed 0.02 0.01 0.01 0.02
None 0.02 0.04 0.08 0.05
_rO_ Simplified 0.01 0.02 0.02 0.02
Detailed 0.01 0.01 0.01 0.01
##### Table 78: Statistical results for StrategyQA using Llama-3.1-8B-Instruct on irrelevant responses.
-----
##### Figure 88: Visualization for AQuA using Llama-3.1-8B-Instruct on irrelevant responses.
Figure 89: Visualization for GSM8K using Llama-3.1-8B-Instruct on irrelevant responses.
-----
##### Figure 90: Visualization for StrategyQA using Llama-3.1-8B-Instruct on irrelevant responses.
Figure 91: Visualization for ECQA using Llama-3.1-8B-Instruct on irrelevant responses.
-----
##### Figure 92: Visualization for CREAK using Llama-3.1-8B-Instruct on irrelevant responses.
Figure 93: Visualization for Sensemaking using Llama-3.1-8B-Instruct on irrelevant responses.
-----
##### Figure 94: Visualization for Wiki tasks using Llama-3.1-8B-Instruct on irrelevant responses.
-----
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|Sensemaking|sQ|None|8.17|9.72|2.41|6.35|
|||Simplified|3.69|4.24|1.97|3.13|
|||Detailed|1.04|1.09|0.59|0.84|
||sK|None|3.67|4.53|2.74|3.56|
|||Simplified|1.55|1.94|0.73|1.34|
|||Detailed|0.45|0.41|0.24|0.35|
||sV|None|19.95|13.96|5.37|12.18|
|||Simplified|9.01|6.34|2.01|5.26|
|||Detailed|1.93|1.52|0.74|1.29|
||sO|None|10.11|14.51|3.31|8.78|
|||Simplified|4.06|7.06|1.73|4.08|
|||Detailed|1.10|1.84|0.93|1.24|
||rQ|None|0.02|0.06|0.09|0.05|
|||Simplified|0.01|0.01|0.08|0.04|
|||Detailed|0.01|0.01|0.10|0.04|
||rK|None|0.02|0.02|0.04|0.03|
|||Simplified|0.02|0.01|0.03|0.02|
|||Detailed|0.02|0.02|0.02|0.02|
||rV|None|0.02|0.03|0.03|0.03|
|||Simplified|0.02|0.02|0.02|0.02|
|||Detailed|0.01|0.01|0.01|0.01|
||rO|None|0.01|0.04|0.07|0.04|
|||Simplified|0.01|0.02|0.03|0.02|
|||Detailed|0.01|0.01|0.02|0.01|
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|ECQA|sQ|None|12.78|12.97|8.86|11.03|
|||Simplified|2.73|3.31|2.05|2.57|
|||Detailed|1.11|1.09|0.55|0.85|
||sK|None|5.51|6.16|4.31|5.22|
|||Simplified|1.08|1.33|0.83|1.04|
|||Detailed|0.41|0.43|0.29|0.35|
||sV|None|41.83|21.02|9.15|21.64|
|||Simplified|7.75|4.50|1.56|4.14|
|||Detailed|2.10|1.58|0.76|1.36|
||sO|None|16.65|25.65|6.44|15.35|
|||Simplified|3.12|5.13|1.64|3.11|
|||Detailed|1.21|1.88|0.94|1.27|
||rQ|None|0.01|0.02|0.07|0.04|
|||Simplified|0.01|0.02|0.10|0.05|
|||Detailed|0.01|0.02|0.07|0.04|
||rK|None|0.02|0.02|0.04|0.03|
|||Simplified|0.02|0.01|0.03|0.02|
|||Detailed|0.01|0.02|0.03|0.02|
||rV|None|0.02|0.02|0.02|0.02|
|||Simplified|0.02|0.02|0.02|0.02|
|||Detailed|0.01|0.01|0.01|0.01|
||rO|None|0.02|0.03|0.05|0.03|
|||Simplified|0.01|0.02|0.02|0.02|
|||Detailed|0.01|0.01|0.01|0.01|
##### Table 79: Statistical results for ECQA using Llama-3.1-8B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 23.94 27.74 9.32 19.33
_sQ_ Simplified 3.05 4.03 2.41 3.02
Detailed 1.01 1.03 0.59 0.82
None 12.65 16.29 10.48 12.68
_sK_ Simplified 1.21 1.61 1.04 1.25
Detailed 0.35 0.39 0.32 0.34
None 99.38 52.11 28.59 54.88
_sV_ Simplified 11.51 5.29 1.54 5.46
Detailed 2.59 1.47 0.66 1.42
None 38.19 54.19 15.59 33.78
CREAK _sO_ Simplified 4.58 6.11 1.63 3.84
Detailed 1.50 1.76 0.91 1.31
None 0.01 0.03 0.11 0.05
_rQ_ Simplified 0.01 0.02 0.09 0.04
Detailed 0.01 0.02 0.07 0.03
None 0.03 0.03 0.05 0.04
_rK_ Simplified 0.02 0.01 0.03 0.02
Detailed 0.02 0.02 0.03 0.02
None 0.02 0.02 0.03 0.02
_rV_ Simplified 0.02 0.02 0.02 0.02
Detailed 0.02 0.01 0.02 0.02
None 0.02 0.04 0.08 0.05
_rO_ Simplified 0.01 0.02 0.03 0.02
Detailed 0.01 0.01 0.01 0.01
##### Table 80: Statistical results for CREAK using Llama-3.1-8B-Instruct on irrelevant responses.
##### Table 81: Statistical results for Sensemaking using Llama-3.1-8B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Len 100 1.61 2.07 1.37 1.63
_sQ_ Len 500 1.04 1.01 0.53 0.82
Len 1000 0.86 0.76 0.36 0.62
Unpopular 1.94 2.58 1.72 2.01
Len 100 0.59 0.77 0.41 0.59
_sK_ Len 500 0.36 0.35 0.18 0.30
Len 1000 0.32 0.26 0.13 0.23
Unpopular 0.82 0.89 0.65 0.79
Len 100 7.75 2.36 0.96 3.37
_sV_ Len 500 4.06 1.22 0.67 1.82
Len 1000 3.21 0.96 0.58 1.45
Unpopular 16.50 3.04 0.92 6.14
Len 100 3.52 2.80 1.25 2.43
_sO_ Len 500 2.10 1.45 0.92 1.45
Wiki Len 1000 1.74 1.15 0.82 1.21
Unpopular 6.20 3.38 1.46 3.51
Len 100 0.02 0.02 0.08 0.04
_rQ_ Len 500 0.02 0.02 0.06 0.03
Len 1000 0.03 0.01 0.05 0.03
Unpopular 0.03 0.02 0.08 0.04
Len 100 0.01 0.01 0.02 0.02
_rK_ Len 500 0.02 0.01 0.01 0.01
Len 1000 0.03 0.01 0.01 0.02
Unpopular 0.02 0.01 0.02 0.02
Len 100 0.03 0.02 0.02 0.02
_rV_ Len 500 0.03 0.01 0.01 0.02
Len 1000 0.03 0.01 0.01 0.02
Unpopular 0.05 0.02 0.02 0.03
Len 100 0.01 0.02 0.02 0.02
_rO_ Len 500 0.01 0.01 0.01 0.01
Len 1000 0.01 0.01 0.01 0.01
Unpopular 0.01 0.02 0.01 0.02
##### Table 82: Statistical results for Wiki using Llama-3.1-8B-Instruct on irrelevant responses.
-----
### D Results on Qwen2-1.5B
#### D.1 Pre-trained LLM on Correct Responses
#### D.1.1 Reasoning Tasks
The visualizations and statistical results on the MATH tasks are as follows: MATH-Algebra (Figure 95, Table 83), MATH-Counting (Figure 96, Table 84), and MATH-Geometry (Figure 97, Table 85).
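Each entry in these tables is a mean absolute difference (MAD), reported over the early, middle, and last segments of the corresponding curve as well as averaged over all positions. As a rough illustration of how such segment-wise MADs can be obtained from two aligned curves, a minimal sketch is given below; the pairing of curves, the equal-thirds segmentation, and the name `segment_mad` are illustrative assumptions rather than the exact procedure behind these numbers.

```python
import numpy as np

def segment_mad(curve_a, curve_b):
    """Mean absolute difference (MAD) between two aligned curves,
    aggregated over the early / middle / last thirds and overall.

    Illustrative sketch only: the pairing of curves and the equal-thirds
    segmentation are assumptions, not the exact procedure used for the
    tables in this appendix.
    """
    a = np.asarray(curve_a, dtype=float)
    b = np.asarray(curve_b, dtype=float)
    if a.shape != b.shape:
        raise ValueError("curves must have the same length")

    diffs = np.abs(a - b)
    early, middle, last = np.array_split(diffs, 3)  # three equal segments
    return {
        "Early": float(early.mean()),
        "Middle": float(middle.mean()),
        "Last": float(last.mean()),
        "All": float(diffs.mean()),
    }
```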
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.32 0.32 0.51 0.38
_sQ_ Detailed 0.19 0.24 0.30 0.25
Simplified 0.32 0.34 0.54 0.39
_sK_ Detailed 0.17 0.21 0.40 0.25
Simplified 1.61 1.15 0.42 1.02
_sV_ Detailed 0.91 0.54 0.27 0.53
Algebra Simplified 1.15 1.24 0.37 0.97
_sO_ Detailed 0.66 0.66 0.24 0.54
Simplified 0.01 0.01 0.03 0.01
_rQ_ Detailed 0.01 0.01 0.03 0.01
Simplified 0.02 0.03 0.01 0.02
_rK_ Detailed 0.02 0.03 0.02 0.03
Simplified 0.03 0.02 0.04 0.03
_rV_ Detailed 0.03 0.02 0.03 0.02
Simplified 0.01 0.02 0.07 0.03
_rO_ Detailed 0.01 0.01 0.06 0.03
##### Table 83: Statistical results for MATH-Algebra using Qwen2-1.5B on correct responses.
Figure 95: Visualization for MATH-Algebra using Qwen2-1.5B on correct responses.
The visualizations and statistical results on the other reasoning tasks are as follows: AQuA (Figure 98, Table 86), GSM8K (Figure 99, Table 87), StrategyQA (Figure 100, Table 88), ECQA (Figure 101, Table 89), CREAK (Figure 102, Table 90), and Sensemaking (Figure 103, Table 91).
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|Counting|sQ|Simplified|0.37|0.34|0.49|0.41|
|||Detailed|0.23|0.29|0.32|0.30|
||sK|Simplified|0.36|0.30|0.55|0.38|
|||Detailed|0.20|0.22|0.47|0.27|
||sV|Simplified|1.81|1.18|0.48|1.09|
|||Detailed|1.06|0.56|0.30|0.58|
||sO|Simplified|1.26|1.25|0.46|1.03|
|||Detailed|0.77|0.73|0.31|0.62|
||rQ|Simplified|0.01|0.01|0.02|0.01|
|||Detailed|0.01|0.01|0.02|0.01|
||rK|Simplified|0.02|0.03|0.02|0.02|
|||Detailed|0.02|0.03|0.02|0.03|
||rV|Simplified|0.03|0.02|0.04|0.03|
|||Detailed|0.02|0.02|0.03|0.02|
||rO|Simplified|0.02|0.02|0.07|0.03|
|||Detailed|0.01|0.02|0.06|0.02|
##### Table 84: Statistical results for MATH-Counting using Qwen2-1.5B on correct responses.
Figure 96: Visualization for MATH-Counting using Qwen2-1.5B on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.36 0.40 0.61 0.46
_sQ_ Detailed 0.26 0.32 0.40 0.34
Simplified 0.32 0.30 0.74 0.41
_sK_ Detailed 0.23 0.23 0.52 0.30
Simplified 1.78 1.06 0.53 1.03
_sV_ Detailed 1.27 0.60 0.34 0.65
Geometry Simplified 1.42 1.35 0.40 1.08
_sO_ Detailed 0.95 0.84 0.30 0.70
Simplified 0.01 0.01 0.02 0.01
_rQ_ Detailed 0.01 0.01 0.03 0.01
Simplified 0.01 0.03 0.01 0.02
_rK_ Detailed 0.02 0.03 0.02 0.03
Simplified 0.03 0.02 0.03 0.02
_rV_ Detailed 0.02 0.02 0.03 0.02
Simplified 0.01 0.01 0.05 0.02
_rO_ Detailed 0.01 0.01 0.05 0.02
##### Table 85: Statistical results for MATH-Geometry using Qwen2-1.5B on correct responses.
-----
##### Figure 97: Visualization for MATH-Geometry using Qwen2-1.5B on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 5.76 4.13 3.49 4.42
_sQ_ Simplified 0.89 0.52 0.77 0.69
Detailed 0.23 0.28 0.29 0.28
None 7.20 6.29 8.40 7.06
_sK_ Simplified 1.01 0.56 1.11 0.81
Detailed 0.22 0.21 0.42 0.27
None 37.29 16.12 3.94 17.32
_sV_ Simplified 5.08 2.14 0.86 2.36
Detailed 1.15 0.62 0.33 0.64
None 23.79 14.35 3.04 12.91
AQuA _sO_ Simplified 3.31 2.18 0.63 1.97
Detailed 0.82 0.75 0.29 0.64
None 0.03 0.06 0.21 0.09
_rQ_ Simplified 0.02 0.01 0.02 0.02
Detailed 0.01 0.01 0.02 0.01
None 0.04 0.04 0.14 0.06
_rK_ Simplified 0.03 0.03 0.02 0.03
Detailed 0.02 0.03 0.01 0.02
None 0.04 0.06 0.03 0.05
_rV_ Simplified 0.04 0.03 0.03 0.03
Detailed 0.03 0.02 0.03 0.02
None 0.02 0.04 0.09 0.05
_rO_ Simplified 0.02 0.02 0.08 0.04
Detailed 0.01 0.02 0.06 0.03
##### Table 86: Statistical results for AQuA using Qwen2-1.5B on correct responses.
#### D.1.2 Wiki Tasks
The visualizations and statistical results on the Wiki tasks are shown in Figure 104 and Table 92.
#### D.2 Pre-trained LLM on Irrelevant Responses
#### D.2.1 Reasoning Tasks
The visualizations and statistical results on the MATH tasks are as follows: MATH-Algebra (Figure 105, Table 93), MATH-Counting (Figure 106, Table 94), and MATH-Geometry (Figure 107, Table 95).
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|GSM8K|sQ|None|3.39|2.78|6.73|3.95|
|||Simplified|0.29|0.27|0.32|0.32|
|||Detailed|0.22|0.23|0.26|0.25|
||sK|None|4.47|5.53|9.31|6.02|
|||Simplified|0.34|0.38|0.57|0.42|
|||Detailed|0.22|0.25|0.45|0.30|
||sV|None|23.57|14.68|5.20|13.66|
|||Simplified|1.57|1.12|0.37|0.99|
|||Detailed|1.14|0.69|0.28|0.67|
||sO|None|14.85|14.41|3.40|11.45|
|||Simplified|0.99|1.12|0.32|0.88|
|||Detailed|0.77|0.79|0.26|0.64|
||rQ|None|0.02|0.04|0.13|0.06|
|||Simplified|0.02|0.01|0.02|0.02|
|||Detailed|0.02|0.01|0.03|0.01|
||rK|None|0.04|0.03|0.04|0.04|
|||Simplified|0.02|0.03|0.03|0.03|
|||Detailed|0.02|0.03|0.02|0.03|
||rV|None|0.03|0.06|0.04|0.05|
|||Simplified|0.03|0.03|0.03|0.03|
|||Detailed|0.03|0.02|0.03|0.03|
||rO|None|0.03|0.05|0.10|0.06|
|||Simplified|0.02|0.02|0.08|0.04|
|||Detailed|0.01|0.02|0.06|0.03|
##### Table 87: Statistical results for GSM8K using Qwen2-1.5B on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 3.03 1.74 0.90 1.87
_sQ_ Simplified 0.58 0.32 0.40 0.41
Detailed 0.27 0.33 0.27 0.32
None 5.11 3.58 1.88 3.57
_sK_ Simplified 0.73 0.39 0.45 0.48
Detailed 0.23 0.22 0.49 0.28
None 28.52 10.02 3.49 12.07
_sV_ Simplified 4.51 1.53 1.05 2.04
Detailed 1.57 0.63 0.47 0.77
None 20.65 6.22 4.33 8.83
StrategyQA _sO_ Simplified 3.06 1.46 0.94 1.65
Detailed 1.00 0.90 0.52 0.81
None 0.04 0.07 0.07 0.06
_rQ_ Simplified 0.01 0.01 0.02 0.01
Detailed 0.01 0.01 0.01 0.01
None 0.05 0.04 0.09 0.05
_rK_ Simplified 0.03 0.03 0.03 0.03
Detailed 0.02 0.03 0.01 0.02
None 0.06 0.10 0.08 0.08
_rV_ Simplified 0.04 0.06 0.10 0.06
Detailed 0.03 0.02 0.03 0.02
None 0.04 0.07 0.05 0.05
_rO_ Simplified 0.01 0.04 0.08 0.04
Detailed 0.01 0.02 0.05 0.02
##### Table 88: Statistical results for StrategyQA using Qwen2-1.5B on correct responses.
The visualizations and statistical results on the other reasoning tasks are as follows: AQuA (Figure 108, Table 96), GSM8K (Figure 109, Table 97), StrategyQA (Figure 110, Table 98), ECQA (Figure 111, Table 99), CREAK (Figure 112, Table 100), and Sensemaking (Figure 113, Table 101).
#### D.2.2 Wiki Tasks
The visualizations and statistical results on the Wiki tasks are shown in Figure 114 and Table 102.
-----
##### Figure 98: Visualization for AQuA using Qwen2-1.5B on correct responses.
Figure 99: Visualization for GSM8K using Qwen2-1.5B on correct responses.
-----
##### Figure 100: Visualization for StrategyQA using Qwen2-1.5B on correct responses.
Figure 101: Visualization for ECQA using Qwen2-1.5B on correct responses.
-----
##### Figure 102: Visualization for CREAK using Qwen2-1.5B on correct responses.
Figure 103: Visualization for Sensemaking using Qwen2-1.5B on correct responses.
-----
##### Figure 104: Visualization for Wiki tasks using Qwen2-1.5B on correct responses.
-----
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|Sensemaking|sQ|None|4.76|3.42|5.92|4.28|
|||Simplified|2.22|1.50|2.73|1.98|
|||Detailed|0.31|0.35|0.38|0.37|
||sK|None|6.84|4.71|11.16|6.59|
|||Simplified|2.90|2.06|2.50|2.27|
|||Detailed|0.26|0.35|0.71|0.41|
||sV|None|38.73|13.42|5.15|16.68|
|||Simplified|15.87|7.58|3.19|7.95|
|||Detailed|1.52|0.75|0.40|0.80|
||sO|None|25.32|12.11|2.65|12.44|
|||Simplified|10.40|6.41|2.45|6.16|
|||Detailed|1.06|0.88|0.43|0.80|
||rQ|None|0.04|0.06|0.20|0.09|
|||Simplified|0.02|0.02|0.10|0.04|
|||Detailed|0.01|0.01|0.02|0.01|
||rK|None|0.03|0.07|0.12|0.07|
|||Simplified|0.03|0.05|0.02|0.04|
|||Detailed|0.02|0.04|0.03|0.03|
||rV|None|0.04|0.06|0.07|0.05|
|||Simplified|0.04|0.05|0.08|0.05|
|||Detailed|0.03|0.02|0.03|0.02|
||rO|None|0.03|0.05|0.11|0.06|
|||Simplified|0.01|0.03|0.08|0.04|
|||Detailed|0.01|0.02|0.05|0.02|
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|ECQA|sQ|None|8.00|7.01|5.01|6.53|
|||Simplified|1.11|0.70|0.86|0.85|
|||Detailed|0.30|0.37|0.26|0.35|
||sK|None|11.51|11.07|13.32|11.11|
|||Simplified|1.34|1.24|1.01|1.13|
|||Detailed|0.26|0.29|0.54|0.34|
||sV|None|59.33|24.83|7.46|27.40|
|||Simplified|8.53|3.55|1.66|4.01|
|||Detailed|1.56|0.74|0.48|0.82|
||sO|None|39.20|19.50|5.12|19.38|
|||Simplified|5.56|3.33|1.41|3.22|
|||Detailed|1.00|0.97|0.52|0.85|
||rQ|None|0.02|0.07|0.14|0.08|
|||Simplified|0.02|0.02|0.05|0.03|
|||Detailed|0.01|0.01|0.01|0.01|
||rK|None|0.06|0.04|0.06|0.05|
|||Simplified|0.03|0.04|0.04|0.04|
|||Detailed|0.02|0.03|0.02|0.02|
||rV|None|0.04|0.07|0.05|0.05|
|||Simplified|0.03|0.05|0.05|0.04|
|||Detailed|0.02|0.02|0.03|0.02|
||rO|None|0.02|0.05|0.11|0.06|
|||Simplified|0.02|0.03|0.07|0.04|
|||Detailed|0.01|0.02|0.05|0.02|
##### Table 89: Statistical results for ECQA using Qwen2-1.5B on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 13.91 9.73 8.08 9.73
_sQ_ Simplified 1.77 1.26 2.74 1.78
Detailed 0.31 0.33 0.25 0.33
None 20.31 15.99 17.98 16.92
_sK_ Simplified 2.37 1.89 2.54 2.08
Detailed 0.28 0.26 0.51 0.32
None 111.18 40.86 12.03 47.98
_sV_ Simplified 14.29 6.89 3.59 7.41
Detailed 1.80 0.81 0.54 0.92
None 71.70 36.08 4.27 35.01
CREAK _sO_ Simplified 9.03 6.26 2.93 5.88
Detailed 1.14 1.07 0.55 0.93
None 0.02 0.06 0.12 0.07
_rQ_ Simplified 0.02 0.03 0.10 0.04
Detailed 0.01 0.01 0.01 0.01
None 0.04 0.05 0.05 0.05
_rK_ Simplified 0.04 0.04 0.03 0.04
Detailed 0.02 0.03 0.01 0.03
None 0.04 0.07 0.05 0.06
_rV_ Simplified 0.03 0.06 0.09 0.06
Detailed 0.02 0.03 0.04 0.03
None 0.02 0.06 0.08 0.05
_rO_ Simplified 0.02 0.03 0.08 0.04
Detailed 0.01 0.02 0.05 0.02
##### Table 90: Statistical results for CREAK using Qwen2-1.5B on correct responses.
##### Table 91: Statistical results for Sensemaking using Qwen2-1.5B on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Len 100 0.40 0.28 0.30 0.32
_sQ_ Len 500 0.31 0.26 0.21 0.27
Len 1000 0.28 0.25 0.18 0.25
Unpopular 0.63 0.44 0.19 0.43
Len 100 0.47 0.20 0.33 0.31
_sK_ Len 500 0.29 0.11 0.29 0.19
Len 1000 0.23 0.10 0.31 0.17
Unpopular 0.76 0.48 0.41 0.52
Len 100 3.26 1.30 0.89 1.63
_sV_ Len 500 2.10 0.87 0.67 1.08
Len 1000 1.67 0.65 0.52 0.84
Unpopular 5.49 1.90 0.99 2.41
Len 100 2.21 1.28 0.73 1.34
_sO_ Len 500 1.48 1.06 0.61 1.01
Wiki Len 1000 1.24 0.93 0.52 0.87
Unpopular 3.58 1.74 0.73 1.86
Len 100 0.01 0.01 0.03 0.02
_rQ_ Len 500 0.01 0.01 0.01 0.01
Len 1000 0.01 0.01 0.01 0.01
Unpopular 0.03 0.01 0.06 0.03
Len 100 0.03 0.03 0.03 0.03
_rK_ Len 500 0.02 0.02 0.02 0.02
Len 1000 0.02 0.02 0.01 0.02
Unpopular 0.04 0.02 0.03 0.03
Len 100 0.03 0.05 0.09 0.06
_rV_ Len 500 0.02 0.03 0.06 0.03
Len 1000 0.02 0.03 0.04 0.03
Unpopular 0.04 0.08 0.07 0.06
Len 100 0.02 0.03 0.07 0.04
_rO_ Len 500 0.01 0.02 0.05 0.03
Len 1000 0.01 0.02 0.05 0.02
Unpopular 0.02 0.05 0.08 0.05
##### Table 92: Statistical results for Wiki using Qwen2-1.5B on correct responses.
-----
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|Algebra|sQ|Simplified|0.65|0.67|1.16|0.80|
|||Detailed|0.39|0.39|0.48|0.42|
||sK|Simplified|0.63|0.53|1.16|0.67|
|||Detailed|0.36|0.25|0.54|0.34|
||sV|Simplified|3.16|2.22|0.94|2.02|
|||Detailed|1.66|0.98|0.49|0.96|
||sO|Simplified|2.26|2.45|0.68|1.87|
|||Detailed|1.24|1.21|0.37|0.96|
||rQ|Simplified|0.01|0.02|0.05|0.02|
|||Detailed|0.01|0.02|0.03|0.02|
||rK|Simplified|0.01|0.03|0.02|0.03|
|||Detailed|0.02|0.03|0.03|0.03|
||rV|Simplified|0.03|0.04|0.04|0.03|
|||Detailed|0.03|0.03|0.04|0.03|
||rO|Simplified|0.01|0.02|0.06|0.03|
|||Detailed|0.01|0.01|0.06|0.02|
##### Table 93: Statistical results for MATH-Algebra using Qwen2-1.5B on irrelevant responses.
Figure 105: Visualization for MATH-Algebra using Qwen2-1.5B on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.67 0.66 0.89 0.73
_sQ_ Detailed 0.46 0.41 0.38 0.42
Simplified 0.64 0.42 0.92 0.56
_sK_ Detailed 0.41 0.25 0.50 0.34
Simplified 3.20 2.08 0.98 1.98
_sV_ Detailed 1.94 0.93 0.52 1.02
Counting Simplified 2.36 2.25 0.77 1.81
_sO_ Detailed 1.43 1.21 0.45 1.01
Simplified 0.01 0.02 0.04 0.02
_rQ_ Detailed 0.01 0.02 0.03 0.02
Simplified 0.01 0.03 0.02 0.02
_rK_ Detailed 0.02 0.03 0.02 0.03
Simplified 0.03 0.04 0.04 0.04
_rV_ Detailed 0.03 0.03 0.03 0.03
Simplified 0.02 0.02 0.06 0.03
_rO_ Detailed 0.01 0.01 0.05 0.02
##### Table 94: Statistical results for MATH-Counting using Qwen2-1.5B on irrelevant responses.
##### Figure 106: Visualization for MATH-Counting using Qwen2-1.5B on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.64 0.64 1.28 0.80
_sQ_ Detailed 0.54 0.32 0.57 0.43
Simplified 0.58 0.45 1.32 0.65
_sK_ Detailed 0.46 0.18 0.70 0.37
Simplified 3.22 1.91 0.97 1.88
_sV_ Detailed 2.43 1.00 0.54 1.18
Geometry Simplified 2.53 2.40 0.65 1.87
_sO_ Detailed 1.80 1.40 0.47 1.19
Simplified 0.01 0.01 0.04 0.02
_rQ_ Detailed 0.01 0.02 0.02 0.02
Simplified 0.01 0.03 0.02 0.02
_rK_ Detailed 0.02 0.03 0.03 0.03
Simplified 0.02 0.03 0.03 0.03
_rV_ Detailed 0.03 0.03 0.04 0.03
Simplified 0.01 0.01 0.05 0.02
_rO_ Detailed 0.01 0.01 0.04 0.02
##### Table 95: Statistical results for MATH-Geometry using Qwen2-1.5B on irrelevant responses.
-----
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|GSM8K|sQ|None|3.34|2.96|7.10|4.14|
|||Simplified|0.85|0.62|1.12|0.85|
|||Detailed|0.47|0.37|0.40|0.42|
||sK|None|4.41|5.64|9.89|6.21|
|||Simplified|0.91|0.84|1.29|0.94|
|||Detailed|0.43|0.30|0.54|0.38|
||sV|None|23.08|15.01|5.51|13.78|
|||Simplified|4.56|3.05|1.30|2.87|
|||Detailed|2.31|1.26|0.64|1.30|
||sO|None|14.64|14.57|3.60|11.53|
|||Simplified|3.12|3.09|0.96|2.47|
|||Detailed|1.54|1.43|0.47|1.16|
||rQ|None|0.02|0.04|0.13|0.05|
|||Simplified|0.02|0.02|0.04|0.02|
|||Detailed|0.01|0.02|0.04|0.02|
||rK|None|0.04|0.04|0.04|0.04|
|||Simplified|0.03|0.03|0.02|0.03|
|||Detailed|0.02|0.04|0.02|0.03|
||rV|None|0.03|0.06|0.04|0.05|
|||Simplified|0.03|0.05|0.05|0.04|
|||Detailed|0.02|0.04|0.03|0.03|
||rO|None|0.03|0.05|0.09|0.05|
|||Simplified|0.02|0.02|0.07|0.03|
|||Detailed|0.01|0.01|0.05|0.02|
##### Figure 107: Visualization for MATH-Geometry using Qwen2-1.5B on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 5.79 4.22 3.47 4.46
_sQ_ Simplified 1.09 0.67 1.33 0.95
Detailed 0.53 0.40 0.43 0.45
None 7.28 6.31 8.41 7.10
_sK_ Simplified 1.12 0.69 1.58 0.97
Detailed 0.50 0.28 0.55 0.39
None 37.62 16.30 4.00 17.49
_sV_ Simplified 6.56 3.14 1.47 3.36
Detailed 2.53 1.26 0.73 1.35
None 23.99 14.51 3.06 13.04
AQuA _sO_ Simplified 4.42 3.22 1.07 2.83
Detailed 1.81 1.48 0.54 1.25
None 0.03 0.06 0.21 0.09
_rQ_ Simplified 0.02 0.02 0.03 0.02
Detailed 0.01 0.01 0.03 0.02
None 0.04 0.04 0.14 0.06
_rK_ Simplified 0.03 0.03 0.02 0.03
Detailed 0.02 0.03 0.02 0.03
None 0.04 0.06 0.03 0.05
_rV_ Simplified 0.03 0.05 0.05 0.04
Detailed 0.03 0.04 0.04 0.03
None 0.02 0.04 0.09 0.05
_rO_ Simplified 0.02 0.02 0.08 0.04
Detailed 0.01 0.01 0.06 0.02
##### Table 96: Statistical results for AQuA using Qwen2-1.5B on irrelevant responses.
##### Table 97: Statistical results for GSM8K using Qwen2-1.5B on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 3.15 1.80 0.89 1.93
_sQ_ Simplified 0.81 0.49 0.42 0.56
Detailed 0.69 0.35 0.25 0.40
None 5.22 3.66 1.87 3.64
_sK_ Simplified 1.04 0.52 0.44 0.66
Detailed 0.55 0.30 0.40 0.37
None 28.90 10.12 3.51 12.21
_sV_ Simplified 6.18 1.80 1.39 2.67
Detailed 3.17 0.95 0.80 1.40
None 21.00 6.26 4.35 8.94
StrategyQA _sO_ Simplified 4.29 1.92 1.19 2.24
Detailed 2.29 1.41 0.75 1.40
None 0.04 0.07 0.06 0.06
_rQ_ Simplified 0.01 0.02 0.02 0.02
Detailed 0.02 0.02 0.01 0.01
None 0.05 0.04 0.09 0.05
_rK_ Simplified 0.04 0.03 0.02 0.03
Detailed 0.02 0.03 0.02 0.03
None 0.06 0.10 0.08 0.08
_rV_ Simplified 0.04 0.08 0.13 0.08
Detailed 0.03 0.04 0.06 0.04
None 0.04 0.07 0.05 0.05
_rO_ Simplified 0.02 0.03 0.08 0.04
Detailed 0.01 0.01 0.05 0.02
##### Table 98: Statistical results for StrategyQA using Qwen2-1.5B on irrelevant responses.
-----
##### Figure 108: Visualization for AQuA using Qwen2-1.5B on irrelevant responses.
Figure 109: Visualization for GSM8K using Qwen2-1.5B on irrelevant responses.
-----
##### Figure 110: Visualization for StrategyQA using Qwen2-1.5B on irrelevant responses.
Figure 111: Visualization for ECQA using Qwen2-1.5B on irrelevant responses.
-----
##### Figure 112: Visualization for CREAK using Qwen2-1.5B on irrelevant responses.
Figure 113: Visualization for Sensemaking using Qwen2-1.5B on irrelevant responses.
-----
##### Figure 114: Visualization for Wiki tasks using Qwen2-1.5B on irrelevant responses.
-----
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|Sensemaking|sQ|None|5.14|3.36|5.88|4.39|
|||Simplified|1.86|1.29|2.41|1.73|
|||Detailed|0.67|0.58|0.41|0.57|
||sK|None|7.37|4.08|10.96|6.43|
|||Simplified|2.35|1.62|2.64|1.93|
|||Detailed|0.55|0.50|0.78|0.54|
||sV|None|39.58|14.36|5.81|17.47|
|||Simplified|13.80|5.90|2.85|6.67|
|||Detailed|2.90|1.12|0.68|1.37|
||sO|None|26.23|12.84|2.81|13.07|
|||Simplified|9.13|5.58|2.46|5.46|
|||Detailed|2.14|1.48|0.60|1.36|
||rQ|None|0.03|0.05|0.19|0.07|
|||Simplified|0.02|0.02|0.10|0.04|
|||Detailed|0.01|0.02|0.03|0.02|
||rK|None|0.03|0.06|0.11|0.06|
|||Simplified|0.03|0.03|0.07|0.04|
|||Detailed|0.02|0.05|0.02|0.04|
||rV|None|0.04|0.05|0.06|0.05|
|||Simplified|0.03|0.06|0.08|0.06|
|||Detailed|0.02|0.03|0.03|0.03|
||rO|None|0.03|0.04|0.10|0.05|
|||Simplified|0.01|0.03|0.08|0.04|
|||Detailed|0.01|0.02|0.04|0.02|
|Dataset|Curve|CoT|MAD Early|MAD Middle|MAD Last|MAD All|
|---|---|---|---|---|---|---|
|ECQA|sQ|None|5.61|4.74|3.44|4.48|
|||Simplified|1.34|0.90|1.70|1.20|
|||Detailed|0.64|0.42|0.29|0.45|
||sK|None|8.65|6.92|6.53|6.98|
|||Simplified|1.58|1.25|1.92|1.41|
|||Detailed|0.56|0.32|0.56|0.43|
||sV|None|45.35|16.32|6.82|20.02|
|||Simplified|10.01|4.28|2.30|4.90|
|||Detailed|3.07|1.13|0.79|1.44|
||sO|None|29.13|14.21|5.63|14.91|
|||Simplified|6.62|4.35|2.41|4.21|
|||Detailed|2.11|1.51|0.80|1.41|
||rQ|None|0.02|0.04|0.07|0.04|
|||Simplified|0.01|0.01|0.12|0.04|
|||Detailed|0.01|0.02|0.02|0.02|
||rK|None|0.04|0.04|0.05|0.04|
|||Simplified|0.03|0.04|0.04|0.04|
|||Detailed|0.02|0.03|0.03|0.03|
||rV|None|0.04|0.07|0.04|0.05|
|||Simplified|0.03|0.07|0.08|0.06|
|||Detailed|0.02|0.03|0.04|0.03|
||rO|None|0.02|0.05|0.07|0.05|
|||Simplified|0.02|0.03|0.07|0.04|
|||Detailed|0.02|0.02|0.04|0.02|
##### Table 99: Statistical results for ECQA using Qwen2-1.5B on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 13.97 9.47 8.58 9.75
_sQ_ Simplified 1.97 1.63 4.16 2.30
Detailed 0.70 0.37 0.20 0.42
None 20.17 15.47 18.95 16.82
_sK_ Simplified 2.36 2.12 4.41 2.56
Detailed 0.61 0.33 0.50 0.42
None 110.47 41.19 12.41 47.98
_sV_ Simplified 15.38 6.33 4.18 7.61
Detailed 3.76 1.38 0.91 1.75
None 70.90 36.52 4.43 35.11
CREAK _sO_ Simplified 9.95 6.47 3.95 6.43
Detailed 2.53 1.76 0.82 1.64
None 0.02 0.06 0.12 0.07
_rQ_ Simplified 0.01 0.01 0.11 0.03
Detailed 0.01 0.01 0.03 0.02
None 0.04 0.05 0.05 0.05
_rK_ Simplified 0.04 0.04 0.05 0.04
Detailed 0.02 0.03 0.03 0.03
None 0.04 0.07 0.05 0.06
_rV_ Simplified 0.03 0.08 0.12 0.07
Detailed 0.02 0.04 0.05 0.04
None 0.02 0.06 0.08 0.05
_rO_ Simplified 0.02 0.05 0.09 0.05
Detailed 0.02 0.02 0.05 0.02
##### Table 100: Statistical results for CREAK using Qwen2-1.5B on irrelevant responses.
##### Table 101: Statistical results for Sensemaking using Qwen2-1.5B on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Len 100 0.65 0.34 0.33 0.43
_sQ_ Len 500 0.56 0.26 0.27 0.34
Len 1000 0.49 0.23 0.25 0.29
Unpopular 0.63 0.46 0.30 0.45
Len 100 0.76 0.32 0.36 0.46
_sK_ Len 500 0.48 0.19 0.21 0.28
Len 1000 0.38 0.14 0.17 0.21
Unpopular 0.88 0.45 0.47 0.56
Len 100 4.83 1.52 1.18 2.17
_sV_ Len 500 3.35 1.06 0.94 1.54
Len 1000 2.75 0.83 0.79 1.25
Unpopular 6.99 1.80 1.14 2.78
Len 100 3.36 1.57 0.84 1.78
_sO_ Len 500 2.49 1.36 0.77 1.44
Wiki Len 1000 2.16 1.22 0.70 1.28
Unpopular 4.71 1.71 0.92 2.18
Len 100 0.02 0.02 0.03 0.02
_rQ_ Len 500 0.02 0.01 0.01 0.02
Len 1000 0.01 0.01 0.01 0.01
Unpopular 0.01 0.02 0.03 0.02
Len 100 0.03 0.03 0.02 0.03
_rK_ Len 500 0.03 0.02 0.01 0.02
Len 1000 0.02 0.02 0.01 0.02
Unpopular 0.04 0.03 0.02 0.03
Len 100 0.03 0.07 0.11 0.07
_rV_ Len 500 0.02 0.05 0.09 0.05
Len 1000 0.02 0.05 0.08 0.04
Unpopular 0.04 0.08 0.11 0.07
Len 100 0.02 0.03 0.07 0.03
_rO_ Len 500 0.02 0.01 0.06 0.02
Len 1000 0.01 0.01 0.05 0.02
Unpopular 0.02 0.04 0.08 0.04
##### Table 102: Statistical results for Wiki using Qwen2-1.5B on irrelevant responses.
-----
#### D.3 Instructed LLM on Correct Responses
D.3.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 115, Table 103), MATH-Counting (Figure 116, Table 104), and MATH-Geometry (Figure 117, Table 105).
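For readers who want to recompute the layer-grouped MAD statistics reported in the tables below, the following is a minimal sketch. It assumes the per-layer curve values (e.g., sQ, sV) are available as arrays, that MAD is taken between a CoT setting and a reference setting, and that the Early/Middle/Last groups are equal thirds of the layers; the function name `grouped_mad` and the equal-thirds split are illustrative assumptions, not the exact implementation used for the tables.

```python
import numpy as np

def grouped_mad(curve_a, curve_b):
    """Illustrative sketch: mean absolute difference between two per-layer
    curves, reported for the early, middle, and last thirds of the layers
    as well as over all layers. The equal-thirds split is an assumption."""
    diff = np.abs(np.asarray(curve_a, dtype=float) - np.asarray(curve_b, dtype=float))
    groups = np.array_split(np.arange(len(diff)), 3)  # early / middle / last layer indices
    return {
        "Early": diff[groups[0]].mean(),
        "Middle": diff[groups[1]].mean(),
        "Last": diff[groups[2]].mean(),
        "All": diff.mean(),
    }

# Hypothetical usage: per-layer values of one curve under two settings.
baseline = [1.2, 1.1, 0.9, 0.8, 0.7, 0.6]
with_cot = [0.9, 0.9, 0.8, 0.7, 0.6, 0.6]
print(grouped_mad(with_cot, baseline))
```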
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.40 0.38 0.52 0.43
_sQ_ Detailed 0.18 0.27 0.27 0.26
Simplified 0.40 0.35 0.59 0.42
_sK_ Detailed 0.17 0.20 0.39 0.24
Simplified 1.93 1.25 0.48 1.16
_sV_ Detailed 0.88 0.54 0.30 0.53
Algebra Simplified 1.40 1.34 0.43 1.09
_sO_ Detailed 0.66 0.67 0.28 0.55
Simplified 0.02 0.01 0.03 0.02
_rQ_ Detailed 0.01 0.01 0.03 0.01
Simplified 0.03 0.03 0.02 0.03
_rK_ Detailed 0.02 0.03 0.02 0.03
Simplified 0.03 0.03 0.04 0.03
_rV_ Detailed 0.02 0.02 0.03 0.02
Simplified 0.02 0.02 0.07 0.03
_rO_ Detailed 0.01 0.01 0.07 0.03
##### Table 103: Statistical results for MATH-Algebra using Qwen2-1.5B-Instruct on correct responses.
Figure 115: Visualization for MATH-Algebra using Qwen2-1.5B-Instruct on correct responses.
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 118, Table 106), GSM8K (Figure 119, Table 107), StrategyQA (Figure 120, Table 108), ECQA (Figure 121, Table 109), CREAK (Figure 122, Table 110), and Sensemaking (Figure 123, Table 111).
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 0.44 | 0.40 | 0.51 | 0.46 |
| | | Detailed | 0.22 | 0.32 | 0.30 | 0.31 |
| | sK | Simplified | 0.42 | 0.30 | 0.59 | 0.40 |
| | | Detailed | 0.20 | 0.21 | 0.45 | 0.27 |
| | sV | Simplified | 2.05 | 1.28 | 0.55 | 1.21 |
| | | Detailed | 1.07 | 0.57 | 0.35 | 0.60 |
| | sO | Simplified | 1.49 | 1.36 | 0.52 | 1.14 |
| | | Detailed | 0.79 | 0.73 | 0.37 | 0.64 |
| | rQ | Simplified | 0.02 | 0.01 | 0.02 | 0.01 |
| | | Detailed | 0.01 | 0.01 | 0.02 | 0.01 |
| | rK | Simplified | 0.02 | 0.03 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.03 | 0.01 | 0.03 |
| | rV | Simplified | 0.03 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.03 | 0.02 |
| | rO | Simplified | 0.02 | 0.02 | 0.07 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.06 | 0.02 |
##### Table 104: Statistical results for MATH-Counting using Qwen2-1.5B-Instruct on correct responses.
Figure 116: Visualization for MATH-Counting using Qwen2-1.5B-Instruct on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.40 0.44 0.57 0.47
_sQ_ Detailed 0.26 0.36 0.40 0.36
Simplified 0.35 0.28 0.72 0.40
_sK_ Detailed 0.23 0.24 0.51 0.30
Simplified 1.89 1.07 0.54 1.06
_sV_ Detailed 1.28 0.62 0.39 0.68
Geometry Simplified 1.52 1.35 0.42 1.11
_sO_ Detailed 0.97 0.86 0.36 0.73
Simplified 0.02 0.01 0.01 0.01
_rQ_ Detailed 0.01 0.01 0.03 0.01
Simplified 0.01 0.03 0.01 0.02
_rK_ Detailed 0.02 0.03 0.02 0.03
Simplified 0.03 0.02 0.03 0.02
_rV_ Detailed 0.02 0.02 0.03 0.02
Simplified 0.01 0.01 0.06 0.02
_rO_ Detailed 0.01 0.01 0.06 0.02
##### Table 105: Statistical results for MATH-Geometry using Qwen2-1.5B-Instruct on correct responses.
-----
##### Figure 117: Visualization for MATH-Geometry using Qwen2-1.5B-Instruct on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 12.15 6.73 3.11 7.40
_sQ_ Simplified 0.82 0.72 1.04 0.84
Detailed 0.21 0.30 0.23 0.28
None 15.61 11.96 6.78 12.92
_sK_ Simplified 1.09 0.50 1.46 0.90
Detailed 0.20 0.20 0.39 0.25
None 64.92 30.95 4.82 31.89
_sV_ Simplified 4.49 2.32 0.98 2.39
Detailed 1.04 0.56 0.35 0.59
None 44.30 24.03 3.70 22.69
AQuA _sO_ Simplified 3.22 2.45 0.71 2.10
Detailed 0.76 0.69 0.32 0.60
None 0.04 0.08 0.18 0.10
_rQ_ Simplified 0.04 0.01 0.02 0.02
Detailed 0.01 0.01 0.01 0.01
None 0.05 0.05 0.13 0.07
_rK_ Simplified 0.03 0.04 0.01 0.03
Detailed 0.02 0.03 0.01 0.02
None 0.04 0.05 0.09 0.05
_rV_ Simplified 0.03 0.03 0.03 0.03
Detailed 0.03 0.02 0.03 0.02
None 0.03 0.06 0.11 0.06
_rO_ Simplified 0.02 0.03 0.08 0.04
Detailed 0.01 0.02 0.06 0.03
##### Table 106: Statistical results for AQuA using Qwen2-1.5B-Instruct on correct responses.
#### D.3.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 124 and Table 112.
#### D.4 Instructed LLM on Irrelevant Responses
D.4.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 125, Table 113), MATH-Counting (Figure 126, Table 114), and MATH-Geometry (Figure 127, Table 115).
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 7.44 | 4.82 | 10.79 | 6.77 |
| | | Simplified | 0.68 | 0.55 | 1.04 | 0.74 |
| | | Detailed | 0.21 | 0.26 | 0.20 | 0.25 |
| | sK | None | 10.73 | 8.05 | 12.32 | 9.89 |
| | | Simplified | 0.81 | 0.73 | 0.98 | 0.83 |
| | | Detailed | 0.22 | 0.22 | 0.40 | 0.28 |
| | sV | None | 39.35 | 23.28 | 7.91 | 22.41 |
| | | Simplified | 3.20 | 2.42 | 1.23 | 2.19 |
| | | Detailed | 1.05 | 0.65 | 0.33 | 0.64 |
| | sO | None | 27.77 | 22.56 | 4.89 | 18.65 |
| | | Simplified | 2.25 | 2.44 | 0.70 | 1.92 |
| | | Detailed | 0.73 | 0.75 | 0.30 | 0.62 |
| | rQ | None | 0.03 | 0.03 | 0.12 | 0.05 |
| | | Simplified | 0.02 | 0.01 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rK | None | 0.03 | 0.04 | 0.04 | 0.04 |
| | | Simplified | 0.02 | 0.04 | 0.05 | 0.04 |
| | | Detailed | 0.03 | 0.03 | 0.02 | 0.03 |
| | rV | None | 0.03 | 0.05 | 0.05 | 0.04 |
| | | Simplified | 0.02 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.03 | 0.02 |
| | rO | None | 0.02 | 0.04 | 0.08 | 0.05 |
| | | Simplified | 0.01 | 0.02 | 0.08 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.06 | 0.03 |
##### Table 107: Statistical results for GSM8K using Qwen2-1.5B-Instruct on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 3.76 2.78 0.71 2.57
_sQ_ Simplified 0.74 0.51 0.32 0.56
Detailed 0.28 0.35 0.22 0.32
None 8.61 6.27 2.09 5.67
_sK_ Simplified 1.18 0.70 0.47 0.74
Detailed 0.27 0.20 0.42 0.27
None 31.00 16.26 3.75 15.97
_sV_ Simplified 5.58 2.28 1.44 2.71
Detailed 1.59 0.69 0.56 0.82
None 22.21 9.87 3.95 10.76
StrategyQA _sO_ Simplified 3.85 2.11 1.15 2.18
Detailed 1.03 0.97 0.60 0.86
None 0.05 0.06 0.09 0.07
_rQ_ Simplified 0.02 0.01 0.02 0.02
Detailed 0.01 0.01 0.01 0.01
None 0.06 0.04 0.06 0.05
_rK_ Simplified 0.05 0.03 0.03 0.04
Detailed 0.03 0.03 0.01 0.02
None 0.04 0.08 0.07 0.06
_rV_ Simplified 0.03 0.06 0.08 0.05
Detailed 0.02 0.03 0.04 0.03
None 0.03 0.06 0.05 0.05
_rO_ Simplified 0.01 0.04 0.08 0.04
Detailed 0.01 0.02 0.05 0.02
##### Table 108: Statistical results for StrategyQA using Qwen2-1.5B-Instruct on correct responses.
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 128, Table 116), GSM8K (Figure 129, Table 117), StrategyQA (Figure 130, Table 118), ECQA (Figure 131, Table 119), CREAK (Figure 132, Table 120), and Sensemaking (Figure 133, Table 121).
D.4.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 134 and Table 122.
-----
##### Figure 118: Visualization for AQuA using Qwen2-1.5B-Instruct on correct responses.
Figure 119: Visualization for GSM8K using Qwen2-1.5B-Instruct on correct responses.
-----
##### Figure 120: Visualization for StrategyQA using Qwen2-1.5B-Instruct on correct responses.
Figure 121: Visualization for ECQA using Qwen2-1.5B-Instruct on correct responses.
-----
##### Figure 122: Visualization for CREAK using Qwen2-1.5B-Instruct on correct responses.
Figure 123: Visualization for Sensemaking using Qwen2-1.5B-Instruct on correct responses.
-----
##### Figure 124: Visualization for Wiki tasks using Qwen2-1.5B-Instruct on correct responses.
-----
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 0.70 | 0.72 | 0.40 | 0.66 |
| | | Simplified | 2.08 | 1.35 | 2.43 | 1.86 |
| | | Detailed | 0.31 | 0.39 | 0.34 | 0.38 |
| | sK | None | 0.91 | 0.51 | 0.83 | 0.71 |
| | | Simplified | 2.65 | 2.32 | 2.27 | 2.38 |
| | | Detailed | 0.27 | 0.31 | 0.60 | 0.37 |
| | sV | None | 3.89 | 1.78 | 0.32 | 1.82 |
| | | Simplified | 12.02 | 6.20 | 3.13 | 6.45 |
| | | Detailed | 1.54 | 0.77 | 0.48 | 0.83 |
| | sO | None | 2.88 | 1.62 | 0.38 | 1.56 |
| | | Simplified | 8.04 | 5.68 | 1.98 | 5.11 |
| | | Detailed | 1.08 | 0.91 | 0.49 | 0.83 |
| | rQ | None | 0.03 | 0.05 | 0.18 | 0.08 |
| | | Simplified | 0.02 | 0.02 | 0.08 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rK | None | 0.03 | 0.05 | 0.13 | 0.06 |
| | | Simplified | 0.03 | 0.05 | 0.03 | 0.04 |
| | | Detailed | 0.02 | 0.03 | 0.02 | 0.03 |
| | rV | None | 0.03 | 0.04 | 0.05 | 0.04 |
| | | Simplified | 0.03 | 0.04 | 0.04 | 0.04 |
| | | Detailed | 0.03 | 0.02 | 0.03 | 0.02 |
| | rO | None | 0.02 | 0.06 | 0.10 | 0.06 |
| | | Simplified | 0.02 | 0.03 | 0.07 | 0.04 |
| | | Detailed | 0.01 | 0.02 | 0.05 | 0.02 |
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 7.09 | 4.87 | 3.16 | 5.15 |
| | | Simplified | 1.34 | 0.85 | 1.01 | 1.07 |
| | | Detailed | 0.33 | 0.41 | 0.25 | 0.38 |
| | sK | None | 9.03 | 6.21 | 12.09 | 7.84 |
| | | Simplified | 1.75 | 1.19 | 1.75 | 1.42 |
| | | Detailed | 0.30 | 0.27 | 0.51 | 0.33 |
| | sV | None | 42.20 | 21.30 | 4.66 | 21.30 |
| | | Simplified | 8.69 | 3.82 | 1.97 | 4.27 |
| | | Detailed | 1.65 | 0.81 | 0.58 | 0.90 |
| | sO | None | 30.30 | 19.50 | 3.53 | 17.11 |
| | | Simplified | 6.03 | 3.86 | 1.23 | 3.59 |
| | | Detailed | 1.09 | 1.07 | 0.60 | 0.94 |
| | rQ | None | 0.02 | 0.07 | 0.13 | 0.07 |
| | | Simplified | 0.02 | 0.01 | 0.04 | 0.02 |
| | | Detailed | 0.01 | 0.01 | 0.01 | 0.01 |
| | rK | None | 0.04 | 0.06 | 0.08 | 0.06 |
| | | Simplified | 0.03 | 0.05 | 0.02 | 0.04 |
| | | Detailed | 0.02 | 0.03 | 0.01 | 0.02 |
| | rV | None | 0.04 | 0.04 | 0.07 | 0.05 |
| | | Simplified | 0.03 | 0.04 | 0.03 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.03 | 0.02 |
| | rO | None | 0.02 | 0.07 | 0.12 | 0.07 |
| | | Simplified | 0.02 | 0.03 | 0.06 | 0.04 |
| | | Detailed | 0.01 | 0.02 | 0.05 | 0.03 |
##### Table 109: Statistical results for ECQA using Qwen2-1.5B-Instruct on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 9.33 8.89 5.25 7.94
_sQ_ Simplified 1.79 1.14 1.94 1.57
Detailed 0.32 0.39 0.22 0.36
None 12.71 19.58 15.92 15.92
_sK_ Simplified 2.27 1.97 2.13 2.07
Detailed 0.31 0.25 0.47 0.31
None 64.10 28.13 7.38 30.21
_sV_ Simplified 11.90 5.69 2.98 6.11
Detailed 1.82 0.85 0.60 0.95
None 42.64 26.70 3.97 23.80
CREAK _sO_ Simplified 8.02 5.24 1.66 4.80
Detailed 1.17 1.09 0.59 0.96
None 0.03 0.08 0.15 0.08
_rQ_ Simplified 0.02 0.02 0.06 0.03
Detailed 0.01 0.01 0.01 0.01
None 0.03 0.07 0.08 0.06
_rK_ Simplified 0.03 0.04 0.03 0.03
Detailed 0.03 0.02 0.01 0.02
None 0.04 0.06 0.08 0.06
_rV_ Simplified 0.03 0.05 0.05 0.04
Detailed 0.02 0.03 0.04 0.03
None 0.02 0.06 0.10 0.06
_rO_ Simplified 0.02 0.03 0.07 0.04
Detailed 0.01 0.02 0.05 0.02
##### Table 110: Statistical results for CREAK using Qwen2-1.5B-Instruct on correct responses.
##### Table 111: Statistical results for Sensemaking using Qwen2-1.5B-Instruct on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Len 100 0.54 0.37 0.34 0.41
_sQ_ Len 500 0.37 0.33 0.24 0.32
Len 1000 0.32 0.31 0.22 0.30
Unpopular 0.58 0.47 0.24 0.43
Len 100 0.66 0.28 0.37 0.40
_sK_ Len 500 0.36 0.13 0.29 0.22
Len 1000 0.27 0.12 0.32 0.19
Unpopular 0.74 0.60 0.34 0.54
Len 100 3.94 1.70 1.13 2.03
_sV_ Len 500 2.45 1.01 0.78 1.26
Len 1000 1.91 0.74 0.59 0.95
Unpopular 6.00 1.94 1.03 2.57
Len 100 2.68 1.66 0.86 1.66
_sO_ Len 500 1.76 1.24 0.68 1.18
Wiki Len 1000 1.44 1.04 0.57 0.98
Unpopular 4.09 1.76 0.80 1.98
Len 100 0.02 0.01 0.03 0.02
_rQ_ Len 500 0.01 0.01 0.01 0.01
Len 1000 0.01 0.01 0.01 0.01
Unpopular 0.02 0.02 0.03 0.02
Len 100 0.03 0.02 0.02 0.03
_rK_ Len 500 0.02 0.02 0.02 0.02
Len 1000 0.02 0.02 0.02 0.02
Unpopular 0.05 0.03 0.04 0.04
Len 100 0.03 0.06 0.09 0.06
_rV_ Len 500 0.02 0.03 0.05 0.03
Len 1000 0.02 0.03 0.04 0.03
Unpopular 0.04 0.06 0.07 0.05
Len 100 0.02 0.03 0.07 0.04
_rO_ Len 500 0.01 0.02 0.05 0.03
Len 1000 0.01 0.01 0.05 0.02
Unpopular 0.01 0.04 0.09 0.05
##### Table 112: Statistical results for Wiki using Qwen2-1.5B-Instruct on correct responses.
-----
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 0.71 | 0.78 | 1.27 | 0.90 |
| | | Detailed | 0.42 | 0.51 | 0.59 | 0.51 |
| | sK | Simplified | 0.71 | 0.52 | 1.27 | 0.71 |
| | | Detailed | 0.40 | 0.30 | 0.64 | 0.40 |
| | sV | Simplified | 3.35 | 2.29 | 1.01 | 2.10 |
| | | Detailed | 1.74 | 1.02 | 0.54 | 1.01 |
| | sO | Simplified | 2.48 | 2.53 | 0.72 | 1.97 |
| | | Detailed | 1.34 | 1.27 | 0.42 | 1.02 |
| | rQ | Simplified | 0.01 | 0.02 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.04 | 0.02 |
| | rK | Simplified | 0.02 | 0.03 | 0.03 | 0.03 |
| | | Detailed | 0.02 | 0.04 | 0.03 | 0.03 |
| | rV | Simplified | 0.03 | 0.04 | 0.04 | 0.03 |
| | | Detailed | 0.03 | 0.03 | 0.03 | 0.03 |
| | rO | Simplified | 0.01 | 0.02 | 0.06 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.06 | 0.02 |
##### Table 113: Statistical results for MATH-Algebra using Qwen2-1.5B-Instruct on irrelevant responses.
Figure 125: Visualization for MATH-Algebra using Qwen2-1.5B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.73 0.79 1.05 0.85
_sQ_ Detailed 0.47 0.52 0.47 0.50
Simplified 0.72 0.43 1.06 0.61
_sK_ Detailed 0.43 0.29 0.55 0.38
Simplified 3.50 2.25 1.08 2.14
_sV_ Detailed 1.96 0.96 0.58 1.04
Counting Simplified 2.66 2.43 0.84 1.98
_sO_ Detailed 1.48 1.24 0.50 1.06
Simplified 0.01 0.02 0.05 0.02
_rQ_ Detailed 0.01 0.02 0.04 0.02
Simplified 0.02 0.03 0.02 0.03
_rK_ Detailed 0.02 0.04 0.03 0.03
Simplified 0.02 0.04 0.04 0.04
_rV_ Detailed 0.03 0.03 0.03 0.03
Simplified 0.01 0.02 0.06 0.03
_rO_ Detailed 0.01 0.01 0.05 0.02
##### Table 114: Statistical results for MATH-Counting using Qwen2-1.5B-Instruct on irrelevant responses.
##### Figure 126: Visualization for MATH-Counting using Qwen2-1.5B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.62 0.69 1.20 0.80
_sQ_ Detailed 0.57 0.48 0.66 0.54
Simplified 0.58 0.41 1.26 0.63
_sK_ Detailed 0.49 0.25 0.85 0.45
Simplified 3.09 1.82 0.94 1.81
_sV_ Detailed 2.41 1.00 0.60 1.19
Geometry Simplified 2.43 2.26 0.64 1.79
_sO_ Detailed 1.89 1.48 0.50 1.26
Simplified 0.01 0.01 0.04 0.02
_rQ_ Detailed 0.01 0.02 0.02 0.02
Simplified 0.01 0.03 0.02 0.03
_rK_ Detailed 0.02 0.03 0.05 0.03
Simplified 0.02 0.03 0.03 0.03
_rV_ Detailed 0.03 0.03 0.03 0.03
Simplified 0.01 0.01 0.05 0.02
_rO_ Detailed 0.01 0.01 0.05 0.02
##### Table 115: Statistical results for MATH-Geometry using Qwen2-1.5B-Instruct on irrelevant responses.
-----
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 7.27 | 4.88 | 12.46 | 7.22 |
| | | Simplified | 1.29 | 1.03 | 2.17 | 1.42 |
| | | Detailed | 0.52 | 0.49 | 0.47 | 0.51 |
| | sK | None | 10.22 | 8.58 | 12.97 | 10.05 |
| | | Simplified | 1.44 | 1.26 | 2.16 | 1.48 |
| | | Detailed | 0.51 | 0.36 | 0.62 | 0.44 |
| | sV | None | 39.15 | 24.33 | 8.64 | 22.97 |
| | | Simplified | 6.56 | 4.21 | 1.80 | 3.98 |
| | | Detailed | 2.38 | 1.30 | 0.72 | 1.35 |
| | sO | None | 27.74 | 23.41 | 5.31 | 19.20 |
| | | Simplified | 4.80 | 4.48 | 1.20 | 3.61 |
| | | Detailed | 1.65 | 1.49 | 0.55 | 1.24 |
| | rQ | None | 0.03 | 0.03 | 0.11 | 0.05 |
| | | Simplified | 0.01 | 0.01 | 0.07 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.04 | 0.02 |
| | rK | None | 0.03 | 0.04 | 0.04 | 0.03 |
| | | Simplified | 0.03 | 0.04 | 0.01 | 0.03 |
| | | Detailed | 0.02 | 0.05 | 0.02 | 0.04 |
| | rV | None | 0.03 | 0.05 | 0.05 | 0.04 |
| | | Simplified | 0.02 | 0.05 | 0.05 | 0.04 |
| | | Detailed | 0.02 | 0.03 | 0.03 | 0.03 |
| | rO | None | 0.02 | 0.04 | 0.07 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.07 | 0.03 |
| | | Detailed | 0.01 | 0.01 | 0.05 | 0.02 |
##### Figure 127: Visualization for MATH-Geometry using Qwen2-1.5B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 12.33 6.91 3.27 7.57
_sQ_ Simplified 1.13 1.01 1.55 1.19
Detailed 0.54 0.53 0.47 0.53
None 15.89 12.20 7.05 13.20
_sK_ Simplified 1.37 0.72 1.82 1.10
Detailed 0.54 0.34 0.59 0.43
None 66.05 31.50 4.98 32.48
_sV_ Simplified 6.27 3.21 1.60 3.35
Detailed 2.42 1.21 0.76 1.32
None 45.04 24.42 3.74 23.06
AQuA _sO_ Simplified 4.58 3.30 1.09 2.90
Detailed 1.77 1.44 0.58 1.24
None 0.04 0.08 0.18 0.10
_rQ_ Simplified 0.02 0.01 0.04 0.02
Detailed 0.01 0.02 0.03 0.02
None 0.05 0.05 0.13 0.07
_rK_ Simplified 0.03 0.04 0.03 0.03
Detailed 0.02 0.04 0.03 0.03
None 0.04 0.04 0.09 0.05
_rV_ Simplified 0.03 0.05 0.05 0.04
Detailed 0.03 0.04 0.04 0.03
None 0.03 0.06 0.11 0.06
_rO_ Simplified 0.01 0.02 0.08 0.03
Detailed 0.01 0.01 0.06 0.02
##### Table 116: Statistical results for AQuA using Qwen2-1.5B-Instruct on irrelevant responses.
##### Table 117: Statistical results for GSM8K using Qwen2-1.5B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 3.80 2.86 0.75 2.64
_sQ_ Simplified 0.97 0.70 0.42 0.72
Detailed 0.66 0.46 0.27 0.47
None 8.60 6.48 2.08 5.77
_sK_ Simplified 1.39 0.79 0.73 0.94
Detailed 0.52 0.35 0.42 0.39
None 31.41 16.63 3.81 16.24
_sV_ Simplified 7.36 2.47 1.64 3.31
Detailed 3.06 1.03 0.88 1.43
None 22.50 10.10 3.98 10.96
StrategyQA _sO_ Simplified 5.21 2.66 1.21 2.82
Detailed 2.24 1.56 0.79 1.47
None 0.05 0.06 0.09 0.07
_rQ_ Simplified 0.01 0.02 0.02 0.02
Detailed 0.02 0.02 0.01 0.02
None 0.06 0.04 0.06 0.05
_rK_ Simplified 0.04 0.03 0.02 0.03
Detailed 0.02 0.03 0.02 0.03
None 0.04 0.08 0.07 0.06
_rV_ Simplified 0.03 0.07 0.10 0.06
Detailed 0.02 0.04 0.05 0.03
None 0.03 0.06 0.05 0.05
_rO_ Simplified 0.01 0.03 0.07 0.03
Detailed 0.01 0.01 0.05 0.02
##### Table 118: Statistical results for StrategyQA using Qwen2-1.5B-Instruct on irrelevant responses.
-----
##### Figure 128: Visualization for AQuA using Qwen2-1.5B-Instruct on irrelevant responses.
Figure 129: Visualization for GSM8K using Qwen2-1.5B-Instruct on irrelevant responses.
-----
##### Figure 130: Visualization for StrategyQA using Qwen2-1.5B-Instruct on irrelevant responses.
Figure 131: Visualization for ECQA using Qwen2-1.5B-Instruct on irrelevant responses.
-----
##### Figure 132: Visualization for CREAK using Qwen2-1.5B-Instruct on irrelevant responses.
Figure 133: Visualization for Sensemaking using Qwen2-1.5B-Instruct on irrelevant responses.
-----
##### Figure 134: Visualization for Wiki tasks using Qwen2-1.5B-Instruct on irrelevant responses.
-----
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 3.04 | 3.16 | 4.11 | 3.41 |
| | | Simplified | 1.58 | 1.31 | 1.26 | 1.43 |
| | | Detailed | 0.71 | 0.66 | 0.42 | 0.62 |
| | sK | None | 3.70 | 2.41 | 6.81 | 3.76 |
| | | Simplified | 2.08 | 1.45 | 2.24 | 1.67 |
| | | Detailed | 0.59 | 0.50 | 0.78 | 0.56 |
| | sV | None | 17.71 | 7.85 | 2.32 | 8.34 |
| | | Simplified | 10.88 | 5.10 | 2.28 | 5.45 |
| | | Detailed | 2.88 | 1.18 | 0.74 | 1.41 |
| | sO | None | 13.38 | 7.51 | 1.74 | 7.23 |
| | | Simplified | 7.24 | 4.89 | 1.73 | 4.47 |
| | | Detailed | 2.18 | 1.53 | 0.62 | 1.40 |
| | rQ | None | 0.03 | 0.05 | 0.15 | 0.07 |
| | | Simplified | 0.02 | 0.02 | 0.05 | 0.03 |
| | | Detailed | 0.01 | 0.02 | 0.04 | 0.02 |
| | rK | None | 0.03 | 0.03 | 0.11 | 0.05 |
| | | Simplified | 0.03 | 0.04 | 0.01 | 0.03 |
| | | Detailed | 0.02 | 0.05 | 0.02 | 0.04 |
| | rV | None | 0.04 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.03 | 0.06 | 0.05 | 0.05 |
| | | Detailed | 0.03 | 0.03 | 0.03 | 0.03 |
| | rO | None | 0.02 | 0.05 | 0.09 | 0.05 |
| | | Simplified | 0.02 | 0.03 | 0.07 | 0.04 |
| | | Detailed | 0.01 | 0.02 | 0.04 | 0.02 |
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 5.61 | 3.42 | 3.16 | 3.93 |
| | | Simplified | 1.70 | 1.11 | 2.04 | 1.51 |
| | | Detailed | 0.71 | 0.65 | 0.37 | 0.61 |
| | sK | None | 7.41 | 4.11 | 9.45 | 5.85 |
| | | Simplified | 1.96 | 1.64 | 3.10 | 1.93 |
| | | Detailed | 0.65 | 0.48 | 0.74 | 0.56 |
| | sV | None | 37.75 | 13.94 | 5.35 | 16.54 |
| | | Simplified | 11.34 | 4.99 | 2.53 | 5.60 |
| | | Detailed | 3.13 | 1.24 | 0.84 | 1.52 |
| | sO | None | 25.53 | 14.51 | 4.79 | 13.90 |
| | | Simplified | 8.06 | 5.22 | 2.11 | 4.86 |
| | | Detailed | 2.21 | 1.73 | 0.80 | 1.53 |
| | rQ | None | 0.02 | 0.05 | 0.08 | 0.05 |
| | | Simplified | 0.02 | 0.01 | 0.12 | 0.04 |
| | | Detailed | 0.01 | 0.02 | 0.03 | 0.02 |
| | rK | None | 0.04 | 0.05 | 0.04 | 0.05 |
| | | Simplified | 0.03 | 0.04 | 0.06 | 0.04 |
| | | Detailed | 0.02 | 0.04 | 0.04 | 0.04 |
| | rV | None | 0.03 | 0.05 | 0.06 | 0.04 |
| | | Simplified | 0.03 | 0.07 | 0.06 | 0.05 |
| | | Detailed | 0.02 | 0.03 | 0.04 | 0.03 |
| | rO | None | 0.02 | 0.06 | 0.07 | 0.05 |
| | | Simplified | 0.02 | 0.02 | 0.07 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.04 | 0.02 |
##### Table 119: Statistical results for ECQA using Qwen2-1.5B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 10.37 10.21 6.14 9.12
_sQ_ Simplified 2.27 1.89 4.72 2.70
Detailed 0.68 0.53 0.33 0.53
None 14.60 18.67 16.06 15.90
_sK_ Simplified 2.58 3.23 5.85 3.54
Detailed 0.58 0.39 0.69 0.48
None 74.28 33.01 8.01 34.90
_sV_ Simplified 16.19 7.40 3.77 8.12
Detailed 3.52 1.35 0.89 1.68
None 48.66 30.51 4.55 27.25
CREAK _sO_ Simplified 11.07 7.13 2.50 6.58
Detailed 2.40 1.79 0.74 1.60
None 0.02 0.07 0.14 0.08
_rQ_ Simplified 0.01 0.03 0.16 0.06
Detailed 0.01 0.01 0.03 0.02
None 0.03 0.06 0.07 0.05
_rK_ Simplified 0.03 0.05 0.08 0.05
Detailed 0.02 0.03 0.04 0.03
None 0.04 0.05 0.08 0.06
_rV_ Simplified 0.03 0.07 0.09 0.06
Detailed 0.02 0.04 0.04 0.03
None 0.02 0.06 0.10 0.05
_rO_ Simplified 0.02 0.03 0.08 0.04
Detailed 0.02 0.02 0.04 0.02
##### Table 120: Statistical results for CREAK using Qwen2-1.5B-Instruct on irrelevant responses.
##### Table 121: Statistical results for Sensemaking using Qwen2-1.5B-Instruct on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Len 100 0.83 0.51 0.40 0.58
_sQ_ Len 500 0.73 0.33 0.29 0.42
Len 1000 0.65 0.28 0.26 0.36
Unpopular 0.86 0.60 0.45 0.61
Len 100 0.98 0.53 0.54 0.67
_sK_ Len 500 0.60 0.27 0.27 0.35
Len 1000 0.49 0.20 0.21 0.27
Unpopular 1.22 0.65 0.72 0.79
Len 100 5.79 2.01 1.42 2.70
_sV_ Len 500 3.77 1.20 1.07 1.74
Len 1000 3.07 0.91 0.89 1.39
Unpopular 8.64 2.27 1.33 3.42
Len 100 4.06 2.07 0.98 2.22
_sO_ Len 500 2.84 1.56 0.82 1.63
Wiki Len 1000 2.45 1.35 0.76 1.42
Unpopular 5.89 2.10 0.99 2.65
Len 100 0.02 0.02 0.02 0.02
_rQ_ Len 500 0.02 0.02 0.01 0.02
Len 1000 0.03 0.01 0.01 0.02
Unpopular 0.01 0.02 0.02 0.02
Len 100 0.03 0.02 0.02 0.03
_rK_ Len 500 0.02 0.03 0.02 0.02
Len 1000 0.02 0.03 0.01 0.02
Unpopular 0.04 0.03 0.02 0.03
Len 100 0.03 0.06 0.09 0.06
_rV_ Len 500 0.02 0.05 0.08 0.05
Len 1000 0.02 0.04 0.07 0.04
Unpopular 0.03 0.07 0.09 0.06
Len 100 0.02 0.02 0.07 0.03
_rO_ Len 500 0.01 0.01 0.06 0.02
Len 1000 0.01 0.01 0.05 0.02
Unpopular 0.01 0.03 0.08 0.04
##### Table 122: Statistical results for Wiki using Qwen2-1.5B-Instruct on irrelevant responses.
-----
### E Results on Llama-2-7B-hf
#### E.1 Pre-trained LLM on Correct Responses
E.1.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 135, Table 123), MATH-Counting (Figure 136, Table 124), and MATH-Geometry (Figure 137, Table 125).
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.27 0.19 0.08 0.18
_sQ_ Detailed 0.17 0.11 0.05 0.11
Simplified 0.26 0.19 0.12 0.19
_sK_ Detailed 0.17 0.13 0.08 0.12
Simplified 1.24 0.62 0.18 0.65
_sV_ Detailed 0.72 0.40 0.09 0.39
Algebra Simplified 0.66 0.59 0.17 0.45
_sO_ Detailed 0.44 0.41 0.10 0.30
Simplified 0.03 0.01 0.01 0.01
_rQ_ Detailed 0.04 0.00 0.01 0.02
Simplified 0.03 0.01 0.01 0.02
_rK_ Detailed 0.03 0.01 0.01 0.01
Simplified 0.04 0.01 0.01 0.02
_rV_ Detailed 0.04 0.01 0.01 0.02
Simplified 0.02 0.00 0.01 0.01
_rO_ Detailed 0.02 0.00 0.01 0.01
##### Table 123: Statistical results for MATH-Algebra using Llama-2-7b-hf on correct responses.
Figure 135: Visualization for MATH-Algebra using Llama-2-7b-hf on correct responses.
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 138, Table 126), GSM8K (Figure 139, Table 127), StrategyQA (Figure 140, Table 128), ECQA (Figure 141, Table 129), CREAK (Figure 142, Table 130), and Sensemaking (Figure 143, Table 131).
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 0.26 | 0.16 | 0.11 | 0.18 |
| | | Detailed | 0.17 | 0.11 | 0.07 | 0.11 |
| | sK | Simplified | 0.25 | 0.17 | 0.14 | 0.19 |
| | | Detailed | 0.16 | 0.12 | 0.09 | 0.12 |
| | sV | Simplified | 1.25 | 0.56 | 0.20 | 0.64 |
| | | Detailed | 0.74 | 0.37 | 0.11 | 0.39 |
| | sO | Simplified | 0.73 | 0.59 | 0.18 | 0.47 |
| | | Detailed | 0.50 | 0.39 | 0.11 | 0.31 |
| | rQ | Simplified | 0.04 | 0.00 | 0.01 | 0.01 |
| | | Detailed | 0.04 | 0.01 | 0.01 | 0.02 |
| | rK | Simplified | 0.03 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.03 | 0.01 | 0.01 | 0.01 |
| | rV | Simplified | 0.04 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.04 | 0.01 | 0.01 | 0.02 |
| | rO | Simplified | 0.02 | 0.00 | 0.01 | 0.01 |
| | | Detailed | 0.01 | 0.00 | 0.01 | 0.01 |
##### Table 124: Statistical results for MATH-Counting using Llama-2-7b-hf on correct responses.
Figure 136: Visualization for MATH-Counting using Llama-2-7b-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.25 0.14 0.09 0.17
_sQ_ Detailed 0.18 0.11 0.07 0.12
Simplified 0.25 0.15 0.12 0.18
_sK_ Detailed 0.17 0.12 0.08 0.13
Simplified 1.16 0.50 0.16 0.58
_sV_ Detailed 0.79 0.40 0.12 0.42
Geometry Simplified 0.66 0.52 0.15 0.42
_sO_ Detailed 0.53 0.41 0.12 0.33
Simplified 0.04 0.01 0.01 0.02
_rQ_ Detailed 0.04 0.01 0.01 0.02
Simplified 0.02 0.01 0.01 0.01
_rK_ Detailed 0.03 0.01 0.01 0.01
Simplified 0.04 0.01 0.01 0.02
_rV_ Detailed 0.04 0.01 0.01 0.02
Simplified 0.02 0.00 0.01 0.01
_rO_ Detailed 0.01 0.00 0.01 0.01
##### Table 125: Statistical results for MATH-Geometry using Llama-2-7b-hf on correct responses.
-----
##### Figure 137: Visualization for MATH-Geometry using Llama-2-7b-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 0.86 0.77 1.00 0.88
_sQ_ Simplified 0.48 0.33 0.18 0.33
Detailed 0.17 0.12 0.06 0.11
None 1.16 0.95 1.77 1.28
_sK_ Simplified 0.45 0.33 0.28 0.35
Detailed 0.16 0.12 0.08 0.12
None 13.84 1.70 0.64 4.81
_sV_ Simplified 2.43 0.91 0.27 1.11
Detailed 0.75 0.36 0.09 0.38
None 4.38 1.26 0.42 1.89
AQuA _sO_ Simplified 1.37 0.78 0.21 0.73
Detailed 0.51 0.37 0.09 0.30
None 0.07 0.05 0.13 0.08
_rQ_ Simplified 0.03 0.01 0.01 0.02
Detailed 0.03 0.00 0.01 0.02
None 0.04 0.02 0.08 0.04
_rK_ Simplified 0.03 0.01 0.01 0.02
Detailed 0.03 0.01 0.01 0.01
None 0.07 0.02 0.03 0.04
_rV_ Simplified 0.06 0.01 0.01 0.02
Detailed 0.04 0.01 0.01 0.02
None 0.04 0.03 0.08 0.05
_rO_ Simplified 0.02 0.01 0.02 0.02
Detailed 0.02 0.00 0.01 0.01
##### Table 126: Statistical results for AQuA using Llama-2-7b-hf on correct responses.
#### E.1.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 144 and Table 132.
#### E.2 Pre-trained LLM on Irrelevant Responses
E.2.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 145, Table 133), MATH-Counting (Figure 146, Table 134), and MATH-Geometry (Figure 147, Table 135).
**Mean Absolute Difference (MAD)**

| Dataset | Curve | CoT | Early | Middle | Last | All |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 1.17 | 0.57 | 0.69 | 0.81 |
| | | Simplified | 0.29 | 0.19 | 0.10 | 0.19 |
| | | Detailed | 0.14 | 0.09 | 0.06 | 0.10 |
| | sK | None | 0.97 | 0.65 | 1.09 | 0.92 |
| | | Simplified | 0.28 | 0.18 | 0.14 | 0.20 |
| | | Detailed | 0.13 | 0.10 | 0.08 | 0.11 |
| | sV | None | 7.00 | 1.77 | 1.23 | 3.04 |
| | | Simplified | 1.38 | 0.59 | 0.16 | 0.67 |
| | | Detailed | 0.74 | 0.31 | 0.08 | 0.36 |
| | sO | None | 2.52 | 1.27 | 0.87 | 1.50 |
| | | Simplified | 0.66 | 0.54 | 0.13 | 0.42 |
| | | Detailed | 0.43 | 0.32 | 0.09 | 0.26 |
| | rQ | None | 0.03 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.03 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.04 | 0.01 | 0.02 | 0.02 |
| | rK | None | 0.04 | 0.02 | 0.05 | 0.04 |
| | | Simplified | 0.03 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.04 | 0.01 | 0.02 | 0.02 |
| | rV | None | 0.06 | 0.01 | 0.03 | 0.03 |
| | | Simplified | 0.05 | 0.01 | 0.01 | 0.02 |
| | | Detailed | 0.04 | 0.01 | 0.01 | 0.02 |
| | rO | None | 0.05 | 0.02 | 0.06 | 0.04 |
| | | Simplified | 0.03 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.00 | 0.01 | 0.01 |
##### Table 127: Statistical results for GSM8K using Llama- 2-7b-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 1.84 1.74 0.82 1.39
_sQ_ Simplified 0.34 0.17 0.13 0.22
Detailed 0.20 0.10 0.08 0.12
None 1.85 2.14 2.49 2.05
_sK_ Simplified 0.33 0.14 0.14 0.20
Detailed 0.19 0.10 0.07 0.12
None 15.66 3.58 3.69 7.06
_sV_ Simplified 2.19 0.52 0.31 0.94
Detailed 0.90 0.36 0.19 0.46
None 5.49 1.91 4.27 3.86
StrategyQA _sO_ Simplified 0.98 0.38 0.29 0.52
Detailed 0.58 0.39 0.19 0.37
None 0.04 0.08 0.08 0.07
_rQ_ Simplified 0.02 0.00 0.01 0.01
Detailed 0.03 0.01 0.01 0.01
None 0.02 0.07 0.06 0.05
_rK_ Simplified 0.03 0.00 0.01 0.01
Detailed 0.03 0.01 0.01 0.02
None 0.06 0.04 0.07 0.06
_rV_ Simplified 0.05 0.01 0.02 0.03
Detailed 0.04 0.01 0.01 0.02
None 0.05 0.04 0.07 0.05
_rO_ Simplified 0.03 0.01 0.01 0.02
Detailed 0.02 0.01 0.01 0.01
##### Table 128: Statistical results for StrategyQA using Llama-2-7b-hf on correct responses.
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 148, Table 136), GSM8K (Figure 149, Table 137), StrategyQA (Figure 150, Table 138), ECQA (Figure 151, Table 139), CREAK (Figure 152, Table 140), Sensemaking (Figure 153, Table 141).
#### E.2.2 Wiki Tasks

The visualizations and statistical results on Wiki tasks are shown in Figure 154 and Table 142.
-----
##### Figure 138: Visualization for AQuA using Llama-2-7b-hf on correct responses.
Figure 139: Visualization for GSM8K using Llama-2-7b-hf on correct responses.
-----
##### Figure 140: Visualization for StrategyQA using Llama-2-7b-hf on correct responses.
Figure 141: Visualization for ECQA using Llama-2-7b-hf on correct responses.
-----
##### Figure 142: Visualization for CREAK using Llama-2-7b-hf on correct responses.
Figure 143: Visualization for Sensemaking using Llama-2-7b-hf on correct responses.
-----
##### Figure 144: Visualization for Wiki tasks using Llama-2-7b-hf on correct responses.
-----
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|None|Simplified|Detailed|
|---|---|---|---|---|
|Sensemaking|sQ|0.42 0.44 0.55 0.48|0.54 0.34 0.30 0.40|0.22 0.12 0.07 0.13|
||sK|0.54 0.44 0.96 0.65|0.49 0.27 0.46 0.41|0.21 0.13 0.07 0.13|
||sV|5.18 0.90 0.50 1.99|2.87 1.01 0.63 1.39|0.99 0.45 0.17 0.51|
||sO|1.86 0.70 0.60 1.02|1.76 0.77 0.46 0.93|0.69 0.43 0.18 0.40|
||rQ|0.05 0.04 0.12 0.07|0.03 0.01 0.04 0.03|0.04 0.00 0.01 0.02|
||rK|0.04 0.02 0.07 0.04|0.03 0.01 0.03 0.02|0.03 0.01 0.01 0.02|
||rV|0.06 0.02 0.04 0.04|0.05 0.01 0.03 0.03|0.04 0.01 0.01 0.02|
||rO|0.03 0.02 0.06 0.04|0.03 0.01 0.03 0.02|0.02 0.00 0.01 0.01|
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|None|Simplified|Detailed|
|---|---|---|---|---|
|ECQA|sQ|1.03 0.70 0.84 0.86|0.66 0.31 0.20 0.39|0.27 0.14 0.11 0.17|
||sK|1.34 0.70 2.12 1.39|0.60 0.29 0.27 0.37|0.25 0.14 0.11 0.16|
||sV|11.00 1.60 1.88 4.40|3.86 1.04 0.49 1.64|1.18 0.51 0.26 0.62|
||sO|4.11 1.24 1.45 2.17|1.89 0.76 0.40 0.93|0.76 0.49 0.25 0.47|
||rQ|0.05 0.01 0.06 0.04|0.03 0.01 0.02 0.02|0.03 0.01 0.01 0.02|
||rK|0.03 0.02 0.03 0.03|0.03 0.01 0.02 0.02|0.03 0.01 0.02 0.02|
||rV|0.07 0.01 0.03 0.04|0.05 0.01 0.02 0.02|0.04 0.01 0.01 0.02|
||rO|0.03 0.02 0.07 0.04|0.04 0.01 0.02 0.02|0.02 0.01 0.01 0.01|

##### Table 129: Statistical results for ECQA using Llama-2-7b-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 1.45 1.02 1.87 1.48
_sQ_ Simplified 0.51 0.26 0.19 0.32
Detailed 0.26 0.13 0.09 0.15
None 2.07 1.27 3.70 2.37
_sK_ Simplified 0.45 0.23 0.25 0.31
Detailed 0.25 0.14 0.08 0.15
None 19.35 2.92 3.63 7.88
_sV_ Simplified 3.00 0.82 0.49 1.32
Detailed 1.20 0.47 0.20 0.59
None 6.51 2.33 3.32 3.97
CREAK _sO_ Simplified 1.48 0.57 0.40 0.76
Detailed 0.71 0.43 0.20 0.42
None 0.05 0.03 0.12 0.07
_rQ_ Simplified 0.03 0.01 0.01 0.01
Detailed 0.03 0.01 0.01 0.02
None 0.03 0.02 0.07 0.04
_rK_ Simplified 0.03 0.01 0.01 0.01
Detailed 0.03 0.01 0.01 0.02
None 0.07 0.02 0.05 0.05
_rV_ Simplified 0.05 0.01 0.02 0.03
Detailed 0.04 0.01 0.01 0.02
None 0.04 0.02 0.07 0.04
_rO_ Simplified 0.03 0.01 0.02 0.02
Detailed 0.02 0.01 0.01 0.01
##### Table 130: Statistical results for CREAK using Llama-2-7b-hf on correct responses.
##### Table 131: Statistical results for Sensemaking using Llama-2-7b-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Len 100 0.36 0.17 0.24 0.25
_sQ_ Len 500 0.27 0.13 0.19 0.19
Len 1000 0.23 0.12 0.16 0.16
Unpopular 0.36 0.21 0.29 0.30
Len 100 0.35 0.17 0.27 0.26
_sK_ Len 500 0.25 0.12 0.22 0.19
Len 1000 0.22 0.10 0.17 0.16
Unpopular 0.29 0.25 0.32 0.30
Len 100 2.19 0.49 0.29 0.91
_sV_ Len 500 1.35 0.42 0.29 0.64
Len 1000 1.05 0.40 0.28 0.55
Unpopular 3.83 0.44 0.45 1.42
Len 100 1.15 0.44 0.28 0.58
_sO_ Len 500 0.86 0.46 0.28 0.50
Wiki Len 1000 0.74 0.44 0.27 0.46
Unpopular 1.89 0.43 0.44 0.88
Len 100 0.02 0.01 0.01 0.01
_rQ_ Len 500 0.03 0.01 0.01 0.02
Len 1000 0.04 0.01 0.01 0.02
Unpopular 0.03 0.02 0.02 0.02
Len 100 0.03 0.02 0.01 0.02
_rK_ Len 500 0.02 0.01 0.01 0.01
Len 1000 0.02 0.01 0.01 0.01
Unpopular 0.04 0.02 0.01 0.02
Len 100 0.05 0.02 0.01 0.02
_rV_ Len 500 0.04 0.01 0.01 0.02
Len 1000 0.04 0.01 0.01 0.02
Unpopular 0.06 0.01 0.02 0.03
Len 100 0.03 0.01 0.01 0.02
_rO_ Len 500 0.02 0.01 0.01 0.01
Len 1000 0.02 0.00 0.01 0.01
Unpopular 0.02 0.02 0.02 0.02
##### Table 132: Statistical results for Wiki using Llama-2-7b-hf on correct responses.
-----
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|Simplified|Detailed|
|---|---|---|---|
|Algebra|sQ|0.38 0.24 0.14 0.26|0.23 0.14 0.09 0.16|
||sK|0.39 0.24 0.17 0.27|0.24 0.16 0.12 0.17|
||sV|1.83 0.72 0.27 0.89|1.01 0.49 0.16 0.52|
||sO|0.97 0.70 0.25 0.61|0.62 0.47 0.16 0.39|
||rQ|0.03 0.01 0.01 0.02|0.04 0.01 0.01 0.02|
||rK|0.02 0.01 0.01 0.02|0.03 0.01 0.01 0.01|
||rV|0.04 0.01 0.01 0.02|0.04 0.01 0.01 0.02|
||rO|0.02 0.00 0.01 0.01|0.02 0.00 0.01 0.01|
##### Table 133: Statistical results for MATH-Algebra using Llama-2-7b-hf on irrelevant responses.
Figure 145: Visualization for MATH-Algebra using Llama-2-7b-hf on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.32 0.20 0.14 0.23
_sQ_ Detailed 0.25 0.13 0.10 0.16
Simplified 0.33 0.20 0.16 0.23
_sK_ Detailed 0.26 0.14 0.12 0.17
Simplified 1.62 0.62 0.32 0.80
_sV_ Detailed 1.13 0.46 0.20 0.56
Counting Simplified 0.91 0.63 0.29 0.57
_sO_ Detailed 0.69 0.46 0.18 0.42
Simplified 0.03 0.01 0.00 0.01
_rQ_ Detailed 0.04 0.01 0.01 0.02
Simplified 0.02 0.01 0.01 0.01
_rK_ Detailed 0.03 0.01 0.01 0.01
Simplified 0.04 0.01 0.01 0.02
_rV_ Detailed 0.04 0.01 0.01 0.02
Simplified 0.02 0.00 0.01 0.01
_rO_ Detailed 0.02 0.00 0.01 0.01
##### Table 134: Statistical results for MATH-Counting using Llama-2-7b-hf on irrelevant responses.
##### Figure 146: Visualization for MATH-Counting using Llama-2-7b-hf on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.32 0.16 0.14 0.21
_sQ_ Detailed 0.29 0.11 0.14 0.18
Simplified 0.33 0.17 0.15 0.22
_sK_ Detailed 0.29 0.12 0.18 0.20
Simplified 1.50 0.57 0.25 0.73
_sV_ Detailed 1.18 0.44 0.24 0.57
Geometry Simplified 0.84 0.59 0.23 0.52
_sO_ Detailed 0.84 0.47 0.23 0.48
Simplified 0.04 0.01 0.01 0.02
_rQ_ Detailed 0.04 0.01 0.01 0.02
Simplified 0.02 0.01 0.01 0.01
_rK_ Detailed 0.02 0.01 0.01 0.01
Simplified 0.04 0.01 0.01 0.02
_rV_ Detailed 0.04 0.01 0.01 0.02
Simplified 0.02 0.00 0.01 0.01
_rO_ Detailed 0.02 0.00 0.01 0.01
##### Table 135: Statistical results for MATH-Geometry using Llama-2-7b-hf on irrelevant responses.
-----
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|None|Simplified|Detailed|
|---|---|---|---|---|
|GSM8K|sQ|1.29 0.60 0.80 0.90|0.41 0.28 0.17 0.29|0.26 0.15 0.12 0.18|
||sK|1.06 0.64 1.26 1.00|0.39 0.27 0.21 0.29|0.25 0.16 0.14 0.18|
||sV|7.23 1.79 1.27 3.13|2.24 0.71 0.38 1.01|1.35 0.47 0.23 0.63|
||sO|2.64 1.31 0.89 1.55|1.02 0.65 0.35 0.64|0.74 0.45 0.23 0.44|
||rQ|0.03 0.03 0.07 0.04|0.03 0.01 0.01 0.02|0.04 0.01 0.01 0.02|
||rK|0.04 0.01 0.06 0.04|0.03 0.01 0.01 0.02|0.03 0.01 0.01 0.02|
||rV|0.06 0.01 0.03 0.03|0.05 0.01 0.02 0.02|0.04 0.01 0.01 0.02|
||rO|0.05 0.02 0.06 0.04|0.03 0.01 0.01 0.02|0.02 0.00 0.01 0.01|
##### Figure 147: Visualization for MATH-Geometry using Llama-2-7b-hf on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 0.86 0.78 1.01 0.88
_sQ_ Simplified 0.54 0.35 0.23 0.38
Detailed 0.28 0.16 0.10 0.18
None 1.16 0.95 1.78 1.29
_sK_ Simplified 0.50 0.34 0.31 0.39
Detailed 0.29 0.18 0.13 0.19
None 13.76 1.71 0.64 4.79
_sV_ Simplified 2.93 0.89 0.43 1.28
Detailed 1.35 0.52 0.22 0.64
None 4.38 1.28 0.43 1.90
AQuA _sO_ Simplified 1.58 0.82 0.34 0.86
Detailed 0.79 0.48 0.21 0.47
None 0.07 0.05 0.13 0.08
_rQ_ Simplified 0.04 0.01 0.01 0.02
Detailed 0.04 0.01 0.01 0.02
None 0.04 0.02 0.08 0.04
_rK_ Simplified 0.03 0.01 0.01 0.02
Detailed 0.03 0.01 0.01 0.01
None 0.07 0.02 0.03 0.04
_rV_ Simplified 0.06 0.01 0.01 0.03
Detailed 0.04 0.01 0.01 0.02
None 0.04 0.03 0.08 0.05
_rO_ Simplified 0.02 0.01 0.02 0.02
Detailed 0.02 0.00 0.01 0.01
##### Table 136: Statistical results for AQuA using Llama-2-7b-hf on irrelevant responses.
##### Table 137: Statistical results for GSM8K using Llama-2-7b-hf on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 1.74 1.59 0.85 1.35
_sQ_ Simplified 0.52 0.25 0.17 0.31
Detailed 0.29 0.14 0.09 0.17
None 1.93 2.01 2.16 1.94
_sK_ Simplified 0.53 0.23 0.18 0.30
Detailed 0.29 0.14 0.08 0.16
None 16.91 3.74 3.63 7.44
_sV_ Simplified 3.33 0.65 0.52 1.35
Detailed 1.44 0.46 0.32 0.68
None 5.70 1.92 4.24 3.92
StrategyQA _sO_ Simplified 1.60 0.42 0.44 0.78
Detailed 0.91 0.39 0.33 0.51
None 0.05 0.08 0.08 0.07
_rQ_ Simplified 0.02 0.01 0.01 0.01
Detailed 0.03 0.01 0.01 0.01
None 0.02 0.06 0.05 0.04
_rK_ Simplified 0.03 0.01 0.01 0.02
Detailed 0.02 0.01 0.01 0.01
None 0.06 0.04 0.07 0.06
_rV_ Simplified 0.05 0.01 0.02 0.03
Detailed 0.04 0.01 0.01 0.02
None 0.05 0.04 0.07 0.05
_rO_ Simplified 0.03 0.01 0.01 0.02
Detailed 0.02 0.00 0.01 0.01
##### Table 138: Statistical results for StrategyQA using Llama-2-7b-hf on irrelevant responses.
-----
##### Figure 148: Visualization for AQuA using Llama-2-7b-hf on irrelevant responses.
Figure 149: Visualization for GSM8K using Llama-2-7b-hf on irrelevant responses.
-----
##### Figure 150: Visualization for StrategyQA using Llama-2-7b-hf on irrelevant responses.
Figure 151: Visualization for ECQA using Llama-2-7b-hf on irrelevant responses.
-----
##### Figure 152: Visualization for CREAK using Llama-2-7b-hf on irrelevant responses.
Figure 153: Visualization for Sensemaking using Llama-2-7b-hf on irrelevant responses.
-----
##### Figure 154: Visualization for Wiki tasks using Llama-2-7b-hf on irrelevant responses.
-----
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|None|Simplified|Detailed|
|---|---|---|---|---|
|Sensemaking|sQ|0.44 0.46 0.56 0.50|0.69 0.51 0.33 0.52|0.32 0.17 0.10 0.19|
||sK|0.56 0.45 0.97 0.67|0.66 0.55 0.48 0.56|0.34 0.20 0.09 0.20|
||sV|5.22 0.95 0.50 2.02|3.68 1.17 0.81 1.71|1.59 0.57 0.32 0.76|
||sO|1.94 0.72 0.60 1.05|2.12 0.82 0.58 1.11|0.99 0.50 0.32 0.57|
||rQ|0.05 0.03 0.12 0.07|0.04 0.01 0.04 0.03|0.03 0.01 0.02 0.02|
||rK|0.04 0.02 0.07 0.04|0.03 0.01 0.03 0.02|0.02 0.01 0.02 0.02|
||rV|0.06 0.02 0.04 0.04|0.05 0.01 0.03 0.03|0.04 0.01 0.01 0.02|
||rO|0.03 0.02 0.06 0.04|0.03 0.01 0.02 0.02|0.02 0.00 0.01 0.01|
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|None|Simplified|Detailed|
|---|---|---|---|---|
|ECQA|sQ|1.45 0.71 0.83 0.98|0.64 0.29 0.20 0.37|0.35 0.18 0.12 0.21|
||sK|1.58 0.70 1.97 1.39|0.57 0.26 0.26 0.34|0.34 0.18 0.11 0.20|
||sV|12.22 1.92 2.34 5.00|3.70 0.85 0.63 1.56|1.60 0.59 0.39 0.79|
||sO|6.02 1.47 1.81 2.95|1.80 0.59 0.54 0.91|0.98 0.49 0.38 0.58|
||rQ|0.05 0.01 0.06 0.04|0.03 0.01 0.01 0.01|0.03 0.01 0.01 0.02|
||rK|0.03 0.02 0.04 0.03|0.03 0.01 0.01 0.02|0.02 0.01 0.02 0.02|
||rV|0.06 0.01 0.03 0.03|0.05 0.01 0.02 0.02|0.04 0.01 0.01 0.02|
||rO|0.04 0.02 0.04 0.04|0.03 0.01 0.03 0.02|0.02 0.01 0.01 0.01|

##### Table 139: Statistical results for ECQA using Llama-2-7b-hf on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 1.54 1.12 1.87 1.54
_sQ_ Simplified 0.62 0.37 0.23 0.41
Detailed 0.34 0.15 0.11 0.20
None 2.14 1.39 3.62 2.42
_sK_ Simplified 0.57 0.36 0.30 0.40
Detailed 0.34 0.16 0.11 0.19
None 18.53 3.19 3.71 7.75
_sV_ Simplified 3.81 0.96 0.65 1.63
Detailed 1.53 0.50 0.33 0.72
None 6.50 2.43 3.36 4.02
CREAK _sO_ Simplified 2.01 0.61 0.54 0.99
Detailed 0.91 0.43 0.32 0.53
None 0.05 0.03 0.12 0.07
_rQ_ Simplified 0.03 0.01 0.01 0.01
Detailed 0.03 0.01 0.01 0.01
None 0.03 0.02 0.08 0.04
_rK_ Simplified 0.03 0.01 0.01 0.02
Detailed 0.02 0.01 0.01 0.02
None 0.07 0.02 0.05 0.04
_rV_ Simplified 0.05 0.01 0.02 0.03
Detailed 0.04 0.01 0.01 0.02
None 0.04 0.02 0.07 0.04
_rO_ Simplified 0.03 0.01 0.02 0.02
Detailed 0.02 0.01 0.01 0.01
##### Table 140: Statistical results for CREAK using Llama-2-7b-hf on irrelevant responses.
##### Table 141: Statistical results for Sensemaking using Llama-2-7b-hf on irrelevant responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Len 100 0.51 0.22 0.23 0.31
_sQ_ Len 500 0.31 0.14 0.18 0.21
Len 1000 0.29 0.12 0.18 0.19
Unpopular 0.54 0.25 0.25 0.35
Len 100 0.50 0.22 0.26 0.31
_sK_ Len 500 0.30 0.12 0.20 0.20
Len 1000 0.28 0.11 0.20 0.19
Unpopular 0.53 0.26 0.27 0.35
Len 100 3.20 0.64 0.37 1.26
_sV_ Len 500 2.01 0.47 0.37 0.86
Len 1000 1.72 0.45 0.37 0.77
Unpopular 5.84 0.80 0.47 2.13
Len 100 1.62 0.49 0.33 0.76
_sO_ Len 500 1.29 0.43 0.35 0.64
Wiki Len 1000 1.17 0.42 0.35 0.60
Unpopular 2.86 0.67 0.39 1.22
Len 100 0.02 0.01 0.01 0.01
_rQ_ Len 500 0.03 0.01 0.01 0.02
Len 1000 0.04 0.01 0.01 0.02
Unpopular 0.02 0.01 0.01 0.01
Len 100 0.03 0.02 0.01 0.02
_rK_ Len 500 0.02 0.01 0.01 0.01
Len 1000 0.02 0.01 0.01 0.01
Unpopular 0.03 0.02 0.01 0.02
Len 100 0.05 0.02 0.02 0.03
_rV_ Len 500 0.04 0.01 0.01 0.02
Len 1000 0.04 0.01 0.01 0.02
Unpopular 0.05 0.02 0.01 0.03
Len 100 0.03 0.01 0.01 0.02
_rO_ Len 500 0.02 0.01 0.01 0.01
Len 1000 0.02 0.00 0.00 0.01
Unpopular 0.03 0.01 0.01 0.01
##### Table 142: Statistical results for Wiki using Llama-2-7b-hf on irrelevant responses.
-----
#### E.3 Instructed LLM on Correct Responses
E.3.1 Reasoning Tasks
The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 155, Table 143), MATH-Counting (Figure 156, Table 144), MATH-Geometry (Figure 157, Table 145).
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.85 0.74 0.53 0.70
_sQ_ Detailed 0.45 0.38 0.34 0.38
Simplified 0.83 0.78 0.63 0.73
_sK_ Detailed 0.43 0.40 0.38 0.40
Simplified 4.11 1.93 0.60 2.10
_sV_ Detailed 1.98 1.04 0.33 1.07
Algebra Simplified 2.66 1.84 0.53 1.56
_sO_ Detailed 1.41 1.08 0.30 0.87
Simplified 0.03 0.02 0.04 0.03
_rQ_ Detailed 0.04 0.02 0.04 0.03
Simplified 0.02 0.02 0.04 0.03
_rK_ Detailed 0.02 0.02 0.05 0.03
Simplified 0.05 0.01 0.03 0.03
_rV_ Detailed 0.05 0.01 0.03 0.03
Simplified 0.02 0.01 0.02 0.01
_rO_ Detailed 0.02 0.00 0.02 0.01
##### Table 143: Statistical results for MATH-Algebra using Llama-2-7b-chat-hf on correct responses.
Figure 155: Visualization for MATH-Algebra using Llama-2-7b-chat-hf on correct responses.
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 158, Table 146), GSM8K (Figure 159, Table 147), StrategyQA (Figure 160, Table 148), ECQA (Figure 161, Table 149), CREAK (Figure 162, Table 150), Sensemaking (Figure 163, Table 151).
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|Simplified|Detailed|
|---|---|---|---|
|Counting|sQ|0.80 0.65 0.44 0.62|0.47 0.39 0.36 0.40|
||sK|0.78 0.72 0.53 0.65|0.45 0.40 0.43 0.42|
||sV|4.05 1.82 0.65 2.05|2.17 1.03 0.44 1.16|
||sO|2.80 1.83 0.51 1.58|1.62 1.13 0.33 0.95|
||rQ|0.03 0.02 0.04 0.03|0.04 0.02 0.04 0.03|
||rK|0.02 0.02 0.04 0.03|0.02 0.02 0.05 0.03|
||rV|0.05 0.01 0.03 0.03|0.05 0.01 0.03 0.03|
||rO|0.02 0.01 0.01 0.01|0.02 0.00 0.01 0.01|
##### Table 144: Statistical results for MATH-Counting using Llama-2-7b-chat-hf on correct responses.
Figure 156: Visualization for MATH-Counting using Llama-2-7b-chat-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Simplified 0.71 0.57 0.52 0.60
_sQ_ Detailed 0.48 0.38 0.39 0.41
Simplified 0.69 0.65 0.62 0.65
_sK_ Detailed 0.47 0.43 0.43 0.44
Simplified 3.51 1.58 0.59 1.78
_sV_ Detailed 2.23 1.08 0.43 1.18
Geometry Simplified 2.39 1.58 0.51 1.39
_sO_ Detailed 1.62 1.11 0.35 0.95
Simplified 0.04 0.02 0.04 0.03
_rQ_ Detailed 0.04 0.02 0.05 0.03
Simplified 0.02 0.02 0.04 0.03
_rK_ Detailed 0.02 0.02 0.05 0.03
Simplified 0.04 0.01 0.03 0.03
_rV_ Detailed 0.04 0.01 0.03 0.03
Simplified 0.02 0.00 0.01 0.01
_rO_ Detailed 0.02 0.00 0.02 0.01
##### Table 145: Statistical results for MATH-Geometry using Llama-2-7b-chat-hf on correct responses.
-----
##### Figure 157: Visualization for MATH-Geometry using Llama-2-7b-chat-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 9.84 5.39 4.01 6.13
_sQ_ Simplified 1.38 1.00 0.63 0.98
Detailed 0.47 0.39 0.40 0.41
None 11.77 7.22 7.84 8.70
_sK_ Simplified 1.37 1.17 0.77 1.07
Detailed 0.44 0.38 0.43 0.41
None 82.67 13.02 6.96 31.47
_sV_ Simplified 7.31 2.77 0.98 3.42
Detailed 2.17 1.00 0.45 1.15
None 32.51 8.05 4.15 14.13
AQuA _sO_ Simplified 4.50 2.62 0.72 2.42
Detailed 1.63 1.10 0.35 0.95
None 0.08 0.05 0.11 0.08
_rQ_ Simplified 0.03 0.02 0.04 0.03
Detailed 0.04 0.02 0.04 0.03
None 0.05 0.01 0.03 0.03
_rK_ Simplified 0.03 0.02 0.04 0.03
Detailed 0.02 0.02 0.05 0.03
None 0.09 0.03 0.04 0.05
_rV_ Simplified 0.06 0.01 0.03 0.03
Detailed 0.05 0.01 0.03 0.03
None 0.05 0.03 0.06 0.05
_rO_ Simplified 0.02 0.01 0.02 0.02
Detailed 0.02 0.01 0.01 0.01
##### Table 146: Statistical results for AQuA using Llama-2-7b-chat-hf on correct responses.
#### E.3.2 Wiki Tasks
The visualizations and statistical results on Wiki tasks are shown in Figure 164 and Table 152.
#### E.4 Instructed LLM on Irrelevant Responses

E.4.1 Reasoning Tasks

The visualizations and statistical results on MATH tasks: MATH-Algebra (Figure 165, Table 153), MATH-Counting (Figure 166, Table 154), MATH-Geometry (Figure 167, Table 155).
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|None|Simplified|Detailed|
|---|---|---|---|---|
|GSM8K|sQ|4.79 3.74 2.69 3.51|0.88 0.71 0.42 0.65|0.45 0.38 0.39 0.40|
||sK|5.31 5.10 3.77 4.42|0.86 0.85 0.53 0.72|0.41 0.36 0.43 0.40|
||sV|31.34 11.51 7.86 15.28|4.46 1.95 0.67 2.20|2.18 0.94 0.48 1.14|
||sO|19.29 9.66 3.91 10.13|2.95 1.79 0.39 1.58|1.65 1.03 0.32 0.93|
||rQ|0.03 0.04 0.07 0.04|0.03 0.02 0.04 0.03|0.04 0.02 0.04 0.03|
||rK|0.03 0.03 0.06 0.04|0.03 0.02 0.04 0.03|0.02 0.02 0.05 0.03|
||rV|0.07 0.03 0.05 0.04|0.06 0.01 0.03 0.03|0.05 0.01 0.03 0.03|
||rO|0.03 0.03 0.08 0.05|0.02 0.01 0.02 0.02|0.02 0.01 0.01 0.01|

##### Table 147: Statistical results for GSM8K using Llama-2-7b-chat-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 6.24 5.38 6.40 5.71
_sQ_ Simplified 0.91 0.56 0.68 0.70
Detailed 0.43 0.29 0.41 0.37
None 6.17 7.25 8.80 7.08
_sK_ Simplified 0.87 0.71 0.88 0.79
Detailed 0.40 0.30 0.53 0.41
None 47.27 17.66 17.60 25.66
_sV_ Simplified 5.43 1.60 1.03 2.49
Detailed 2.10 0.79 0.56 1.10
None 28.77 14.81 9.00 16.46
StrategyQA _sO_ Simplified 3.46 1.38 0.57 1.67
Detailed 1.58 0.90 0.40 0.89
None 0.03 0.05 0.09 0.06
_rQ_ Simplified 0.03 0.02 0.02 0.02
Detailed 0.04 0.02 0.04 0.03
None 0.03 0.01 0.04 0.03
_rK_ Simplified 0.03 0.02 0.03 0.03
Detailed 0.02 0.02 0.05 0.03
None 0.07 0.04 0.08 0.06
_rV_ Simplified 0.05 0.02 0.04 0.03
Detailed 0.04 0.01 0.04 0.03
None 0.04 0.05 0.10 0.06
_rO_ Simplified 0.03 0.01 0.01 0.02
Detailed 0.02 0.00 0.01 0.01
##### Table 148: Statistical results for StrategyQA using Llama-2-7b-chat-hf on correct responses.
The visualizations and statistical results on other reasoning tasks: AQuA (Figure 168, Table 156), GSM8K (Figure 169, Table 157), StrategyQA (Figure 170, Table 158), ECQA (Figure 171, Table 159), CREAK (Figure 172, Table 160), Sensemaking (Figure 173, Table 161).
#### E.4.2 Wiki Tasks

The visualizations and statistical results on Wiki tasks are shown in Figure 174 and Table 162.
-----
##### Figure 158: Visualization for AQuA using Llama-2-7b-chat-hf on correct responses.
Figure 159: Visualization for GSM8K using Llama-2-7b-chat-hf on correct responses.
-----
##### Figure 160: Visualization for StrategyQA using Llama-2-7b-chat-hf on correct responses.
Figure 161: Visualization for ECQA using Llama-2-7b-chat-hf on correct responses.
-----
##### Figure 162: Visualization for CREAK using Llama-2-7b-chat-hf on correct responses.
Figure 163: Visualization for Sensemaking using Llama-2-7b-chat-hf on correct responses.
-----
##### Figure 164: Visualization for Wiki tasks using Llama-2-7b-chat-hf on correct responses.
-----
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|None|Simplified|Detailed|
|---|---|---|---|---|
|Sensemaking|sQ|3.53 2.69 3.06 3.07|1.85 1.45 1.51 1.58|0.47 0.34 0.38 0.39|
||sK|3.85 3.34 4.61 3.93|1.94 1.81 2.14 1.93|0.45 0.35 0.51 0.43|
||sV|23.30 7.57 6.10 11.41|10.36 3.63 2.60 5.05|2.17 0.92 0.58 1.17|
||sO|14.40 6.56 3.24 7.64|6.65 3.25 1.42 3.51|1.71 1.01 0.38 0.96|
||rQ|0.03 0.03 0.08 0.05|0.03 0.02 0.03 0.03|0.04 0.02 0.04 0.03|
||rK|0.04 0.01 0.06 0.04|0.03 0.03 0.02 0.03|0.02 0.02 0.05 0.03|
||rV|0.05 0.03 0.06 0.05|0.05 0.02 0.05 0.04|0.04 0.01 0.04 0.03|
||rO|0.03 0.03 0.09 0.05|0.03 0.02 0.03 0.02|0.02 0.01 0.01 0.01|
**Mean Absolute Difference (MAD)**, reported as Early / Middle / Last / All:

|Dataset|Curve|None|Simplified|Detailed|
|---|---|---|---|---|
|ECQA|sQ|3.62 2.46 3.58 3.18|1.41 0.86 0.83 1.01|0.58 0.43 0.50 0.49|
||sK|3.91 3.97 4.79 4.15|1.42 1.04 1.24 1.19|0.54 0.41 0.69 0.55|
||sV|27.04 8.65 8.79 13.76|8.06 2.94 1.71 3.93|2.59 1.11 0.76 1.43|
||sO|15.08 6.91 4.56 8.33|5.14 2.53 1.03 2.67|1.92 1.21 0.49 1.13|
||rQ|0.04 0.04 0.12 0.06|0.03 0.02 0.03 0.03|0.04 0.02 0.05 0.03|
||rK|0.03 0.01 0.08 0.04|0.03 0.02 0.03 0.02|0.02 0.03 0.06 0.04|
||rV|0.07 0.03 0.09 0.06|0.05 0.02 0.05 0.04|0.04 0.01 0.04 0.03|
||rO|0.04 0.05 0.11 0.07|0.03 0.01 0.02 0.02|0.02 0.01 0.01 0.01|

##### Table 149: Statistical results for ECQA using Llama-2-7b-chat-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
None 7.65 5.59 6.68 6.42
_sQ_ Simplified 1.53 1.14 1.05 1.21
Detailed 0.53 0.38 0.44 0.44
None 8.34 7.09 9.21 8.02
_sK_ Simplified 1.57 1.45 1.53 1.48
Detailed 0.50 0.40 0.57 0.48
None 55.83 19.94 16.12 28.78
_sV_ Simplified 9.31 3.19 2.07 4.45
Detailed 2.58 0.98 0.63 1.33
None 33.10 15.36 8.03 18.01
CREAK _sO_ Simplified 5.72 2.62 1.18 2.97
Detailed 1.86 1.02 0.42 1.03
None 0.03 0.05 0.11 0.07
_rQ_ Simplified 0.03 0.03 0.03 0.03
Detailed 0.04 0.02 0.04 0.03
None 0.03 0.02 0.07 0.04
_rK_ Simplified 0.03 0.03 0.03 0.03
Detailed 0.02 0.02 0.05 0.03
None 0.06 0.04 0.07 0.05
_rV_ Simplified 0.05 0.02 0.05 0.04
Detailed 0.04 0.01 0.04 0.03
None 0.04 0.05 0.12 0.07
_rO_ Simplified 0.03 0.01 0.02 0.02
Detailed 0.02 0.00 0.01 0.01
##### Table 150: Statistical results for CREAK using Llama-2-7b-chat-hf on correct responses.
##### Table 151: Statistical results for Sensemaking using Llama-2-7b-chat-hf on correct responses.
**Mean Absolute Difference (MAD)**
**Dataset** **Curve** **Cot**
Early Middle Last All
Len 100 0.67 0.28 0.58 0.50
_sQ_ Len 500 0.48 0.23 0.43 0.38
Len 1000 0.41 0.20 0.36 0.32
Unpopular 0.96 0.78 0.77 0.83
Len 100 0.65 0.35 0.67 0.54
_sK_ Len 500 0.45 0.23 0.51 0.40
Len 1000 0.38 0.20 0.43 0.34
Unpopular 0.97 0.92 0.87 0.90
Len 100 3.75 0.81 0.47 1.56
_sV_ Len 500 2.33 0.73 0.40 1.09
Len 1000 1.85 0.65 0.38 0.91
Unpopular 7.65 1.50 0.66 2.95
Len 100 2.29 0.87 0.43 1.12
_sO_ Len 500 1.64 0.76 0.38 0.88
Wiki Len 1000 1.37 0.70 0.35 0.77
Unpopular 3.88 1.49 0.61 1.93
Len 100 0.03 0.01 0.02 0.02
_rQ_ Len 500 0.03 0.01 0.02 0.02
Len 1000 0.04 0.01 0.03 0.02
Unpopular 0.03 0.03 0.02 0.02
Len 100 0.03 0.01 0.02 0.02
_rK_ Len 500 0.02 0.01 0.02 0.02
Len 1000 0.02 0.02 0.03 0.02
Unpopular 0.04 0.02 0.02 0.02
Len 100 0.05 0.02 0.02 0.03
_rV_ Len 500 0.04 0.01 0.02 0.02
Len 1000 0.04 0.01 0.02 0.02
Unpopular 0.05 0.02 0.02 0.03
Len 100 0.02 0.01 0.02 0.02
_rO_ Len 500 0.02 0.01 0.01 0.01
Len 1000 0.01 0.01 0.01 0.01
Unpopular 0.02 0.02 0.01 0.02
##### Table 152: Statistical results for Wiki using Llama-2-7b-chat-hf on correct responses.
-----
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Algebra | sQ | Simplified | 0.88 | 0.77 | 0.62 | 0.75 |
| | | Detailed | 0.52 | 0.43 | 0.37 | 0.43 |
| | sK | Simplified | 0.86 | 0.91 | 0.78 | 0.83 |
| | | Detailed | 0.50 | 0.46 | 0.43 | 0.46 |
| | sV | Simplified | 4.54 | 1.94 | 0.72 | 2.26 |
| | | Detailed | 2.39 | 1.17 | 0.41 | 1.26 |
| | sO | Simplified | 2.73 | 1.88 | 0.59 | 1.62 |
| | | Detailed | 1.51 | 1.15 | 0.36 | 0.94 |
| | rQ | Simplified | 0.03 | 0.03 | 0.04 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.04 | 0.03 |
| | rK | Simplified | 0.02 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.05 | 0.03 |
| | rV | Simplified | 0.04 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.01 | 0.03 | 0.03 |
| | rO | Simplified | 0.02 | 0.01 | 0.02 | 0.01 |
| | | Detailed | 0.02 | 0.00 | 0.02 | 0.01 |
##### Table 153: Statistical results for MATH-Algebra using Llama-2-7b-chat-hf on irrelevant responses.
Figure 165: Visualization for MATH-Algebra using Llama-2-7b-chat-hf on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Counting | sQ | Simplified | 0.82 | 0.64 | 0.50 | 0.64 |
| | | Detailed | 0.51 | 0.37 | 0.34 | 0.40 |
| | sK | Simplified | 0.78 | 0.76 | 0.63 | 0.70 |
| | | Detailed | 0.48 | 0.40 | 0.43 | 0.42 |
| | sV | Simplified | 4.36 | 1.82 | 0.79 | 2.17 |
| | | Detailed | 2.40 | 1.08 | 0.48 | 1.24 |
| | sO | Simplified | 2.79 | 1.77 | 0.61 | 1.60 |
| | | Detailed | 1.64 | 1.10 | 0.39 | 0.97 |
| | rQ | Simplified | 0.04 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.04 | 0.03 |
| | rK | Simplified | 0.02 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.04 | 0.03 |
| | rV | Simplified | 0.05 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.01 | 0.02 | 0.02 |
| | rO | Simplified | 0.02 | 0.00 | 0.01 | 0.01 |
| | | Detailed | 0.02 | 0.00 | 0.01 | 0.01 |
##### Table 154: Statistical results for MATH-Counting using Llama-2-7b-chat-hf on irrelevant responses.
##### Figure 166: Visualization for MATH-Counting using Llama-2-7b-chat-hf on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Geometry | sQ | Simplified | 0.73 | 0.54 | 0.61 | 0.63 |
| | | Detailed | 0.54 | 0.33 | 0.41 | 0.42 |
| | sK | Simplified | 0.70 | 0.64 | 0.77 | 0.70 |
| | | Detailed | 0.52 | 0.40 | 0.52 | 0.47 |
| | sV | Simplified | 3.80 | 1.55 | 0.69 | 1.88 |
| | | Detailed | 2.48 | 1.06 | 0.55 | 1.26 |
| | sO | Simplified | 2.50 | 1.57 | 0.61 | 1.46 |
| | | Detailed | 1.75 | 1.11 | 0.48 | 1.04 |
| | rQ | Simplified | 0.04 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.04 | 0.02 | 0.04 | 0.03 |
| | rK | Simplified | 0.02 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.04 | 0.03 |
| | rV | Simplified | 0.04 | 0.01 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.01 | 0.02 | 0.02 |
| | rO | Simplified | 0.02 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.02 | 0.00 | 0.01 | 0.01 |
##### Table 155: Statistical results for MATH-Geometry using Llama-2-7b-chat-hf on irrelevant responses.
-----
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| GSM8K | sQ | None | 4.90 | 3.83 | 2.67 | 3.56 |
| | | Simplified | 1.02 | 0.86 | 0.53 | 0.78 |
| | | Detailed | 0.59 | 0.48 | 0.45 | 0.49 |
| | sK | None | 5.42 | 5.16 | 3.73 | 4.45 |
| | | Simplified | 0.97 | 1.11 | 0.69 | 0.88 |
| | | Detailed | 0.55 | 0.55 | 0.58 | 0.55 |
| | sV | None | 31.91 | 11.68 | 7.97 | 15.55 |
| | | Simplified | 5.40 | 1.99 | 1.00 | 2.58 |
| | | Detailed | 2.86 | 1.09 | 0.66 | 1.45 |
| | sO | None | 19.67 | 9.77 | 3.96 | 10.29 |
| | | Simplified | 3.29 | 1.88 | 0.70 | 1.81 |
| | | Detailed | 2.05 | 1.13 | 0.47 | 1.13 |
| | rQ | None | 0.03 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.02 | 0.05 | 0.03 |
| | rK | None | 0.03 | 0.03 | 0.05 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.02 | 0.03 | 0.05 | 0.03 |
| | rV | None | 0.07 | 0.03 | 0.05 | 0.04 |
| | | Simplified | 0.05 | 0.01 | 0.03 | 0.03 |
| | | Detailed | 0.05 | 0.01 | 0.03 | 0.03 |
| | rO | None | 0.03 | 0.03 | 0.07 | 0.04 |
| | | Simplified | 0.02 | 0.01 | 0.02 | 0.01 |
| | | Detailed | 0.02 | 0.00 | 0.01 | 0.01 |
##### Figure 167: Visualization for MATH-Geometry using Llama-2-7b-chat-hf on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| AQuA | sQ | None | 9.67 | 5.34 | 3.97 | 6.05 |
| | | Simplified | 1.46 | 1.10 | 0.62 | 1.03 |
| | | Detailed | 0.66 | 0.53 | 0.47 | 0.54 |
| | sK | None | 11.48 | 7.26 | 7.76 | 8.60 |
| | | Simplified | 1.40 | 1.44 | 0.79 | 1.16 |
| | | Detailed | 0.62 | 0.61 | 0.54 | 0.57 |
| | sV | None | 81.42 | 12.90 | 6.94 | 31.05 |
| | | Simplified | 7.57 | 2.92 | 1.19 | 3.57 |
| | | Detailed | 3.17 | 1.31 | 0.65 | 1.60 |
| | sO | None | 32.04 | 8.00 | 4.11 | 13.96 |
| | | Simplified | 4.53 | 2.70 | 0.83 | 2.48 |
| | | Detailed | 2.11 | 1.30 | 0.51 | 1.22 |
| | rQ | None | 0.07 | 0.05 | 0.11 | 0.08 |
| | | Simplified | 0.04 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.02 | 0.05 | 0.03 |
| | rK | None | 0.05 | 0.01 | 0.03 | 0.03 |
| | | Simplified | 0.03 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.02 | 0.02 | 0.05 | 0.03 |
| | rV | None | 0.09 | 0.03 | 0.04 | 0.05 |
| | | Simplified | 0.06 | 0.02 | 0.02 | 0.03 |
| | | Detailed | 0.05 | 0.01 | 0.03 | 0.03 |
| | rO | None | 0.05 | 0.03 | 0.06 | 0.05 |
| | | Simplified | 0.02 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.00 | 0.01 | 0.01 |
##### Table 156: Statistical results for AQuA using Llama-2- 7b-chat-hf on irrelevant responses.
##### Table 157: Statistical results for GSM8K using Llama- 2-7b-chat-hf on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| StrategyQA | sQ | None | 6.22 | 5.35 | 6.41 | 5.71 |
| | | Simplified | 1.05 | 0.59 | 0.69 | 0.76 |
| | | Detailed | 0.50 | 0.33 | 0.38 | 0.39 |
| | sK | None | 6.13 | 7.26 | 8.85 | 7.09 |
| | | Simplified | 0.95 | 0.69 | 0.88 | 0.81 |
| | | Detailed | 0.47 | 0.37 | 0.52 | 0.45 |
| | sV | None | 46.97 | 17.60 | 17.59 | 25.55 |
| | | Simplified | 6.61 | 1.55 | 1.37 | 2.90 |
| | | Detailed | 2.58 | 0.85 | 0.66 | 1.28 |
| | sO | None | 28.58 | 14.76 | 9.00 | 16.38 |
| | | Simplified | 3.79 | 1.32 | 0.80 | 1.84 |
| | | Detailed | 1.88 | 0.80 | 0.51 | 0.99 |
| | rQ | None | 0.03 | 0.05 | 0.09 | 0.06 |
| | | Simplified | 0.03 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.04 | 0.01 | 0.03 | 0.03 |
| | rK | None | 0.03 | 0.01 | 0.04 | 0.03 |
| | | Simplified | 0.03 | 0.01 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.03 | 0.02 |
| | rV | None | 0.07 | 0.04 | 0.08 | 0.06 |
| | | Simplified | 0.05 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.01 | 0.03 | 0.03 |
| | rO | None | 0.04 | 0.05 | 0.10 | 0.06 |
| | | Simplified | 0.03 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
##### Table 158: Statistical results for StrategyQA using Llama-2-7b-chat-hf on irrelevant responses.
-----
##### Figure 168: Visualization for AQuA using Llama-2-7b-chat-hf on irrelevant responses.
Figure 169: Visualization for GSM8K using Llama-2-7b-chat-hf on irrelevant responses.
-----
##### Figure 170: Visualization for StrategyQA using Llama-2-7b-chat-hf on irrelevant responses.
Figure 171: Visualization for ECQA using Llama-2-7b-chat-hf on irrelevant responses.
-----
##### Figure 172: Visualization for CREAK using Llama-2-7b-chat-hf on irrelevant responses.
Figure 173: Visualization for Sensemaking using Llama-2-7b-chat-hf on irrelevant responses.
-----
##### Figure 174: Visualization for Wiki tasks using Llama-2-7b-chat-hf on irrelevant responses.
-----
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Sensemaking | sQ | None | 3.98 | 3.08 | 3.24 | 3.39 |
| | | Simplified | 1.69 | 1.21 | 1.31 | 1.39 |
| | | Detailed | 0.61 | 0.46 | 0.43 | 0.48 |
| | sK | None | 4.30 | 3.92 | 4.94 | 4.38 |
| | | Simplified | 1.78 | 1.67 | 1.86 | 1.73 |
| | | Detailed | 0.59 | 0.55 | 0.59 | 0.55 |
| | sV | None | 25.58 | 8.46 | 6.55 | 12.49 |
| | | Simplified | 10.10 | 3.08 | 2.73 | 4.82 |
| | | Detailed | 2.82 | 1.15 | 0.77 | 1.48 |
| | sO | None | 15.93 | 7.37 | 3.43 | 8.40 |
| | | Simplified | 6.08 | 2.75 | 1.54 | 3.23 |
| | | Detailed | 2.00 | 1.10 | 0.56 | 1.14 |
| | rQ | None | 0.03 | 0.02 | 0.07 | 0.04 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.04 | 0.03 |
| | rK | None | 0.04 | 0.01 | 0.05 | 0.03 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.04 | 0.03 |
| | rV | None | 0.05 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.05 | 0.02 | 0.05 | 0.04 |
| | | Detailed | 0.04 | 0.01 | 0.03 | 0.03 |
| | rO | None | 0.03 | 0.03 | 0.09 | 0.05 |
| | | Simplified | 0.03 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| ECQA | sQ | None | 4.99 | 3.02 | 3.68 | 3.78 |
| | | Simplified | 1.45 | 0.88 | 0.83 | 1.03 |
| | | Detailed | 0.63 | 0.44 | 0.44 | 0.48 |
| | sK | None | 5.30 | 4.70 | 5.35 | 4.95 |
| | | Simplified | 1.42 | 1.24 | 1.26 | 1.25 |
| | | Detailed | 0.60 | 0.51 | 0.64 | 0.56 |
| | sV | None | 35.81 | 10.43 | 10.68 | 17.50 |
| | | Simplified | 8.51 | 2.51 | 1.99 | 3.95 |
| | | Detailed | 2.86 | 1.17 | 0.89 | 1.54 |
| | sO | None | 19.70 | 8.54 | 5.41 | 10.47 |
| | | Simplified | 5.10 | 2.19 | 1.18 | 2.61 |
| | | Detailed | 2.00 | 1.15 | 0.62 | 1.17 |
| | rQ | None | 0.04 | 0.02 | 0.06 | 0.04 |
| | | Simplified | 0.04 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.03 | 0.02 | 0.04 | 0.03 |
| | rK | None | 0.02 | 0.01 | 0.05 | 0.03 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.02 | 0.03 | 0.04 | 0.03 |
| | rV | None | 0.06 | 0.03 | 0.07 | 0.05 |
| | | Simplified | 0.05 | 0.02 | 0.04 | 0.04 |
| | | Detailed | 0.04 | 0.01 | 0.03 | 0.03 |
| | rO | None | 0.03 | 0.03 | 0.06 | 0.04 |
| | | Simplified | 0.02 | 0.01 | 0.01 | 0.01 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
##### Table 159: Statistical results for ECQA using Llama-2- 7b-chat-hf on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| CREAK | sQ | None | 7.78 | 5.74 | 6.80 | 6.54 |
| | | Simplified | 1.60 | 1.16 | 1.03 | 1.23 |
| | | Detailed | 0.61 | 0.43 | 0.48 | 0.49 |
| | sK | None | 8.44 | 7.41 | 9.30 | 8.17 |
| | | Simplified | 1.61 | 1.52 | 1.55 | 1.50 |
| | | Detailed | 0.59 | 0.48 | 0.67 | 0.56 |
| | sV | None | 56.30 | 20.20 | 16.14 | 28.98 |
| | | Simplified | 10.10 | 2.93 | 2.23 | 4.61 |
| | | Detailed | 2.95 | 1.02 | 0.82 | 1.49 |
| | sO | None | 33.37 | 15.48 | 8.03 | 18.13 |
| | | Simplified | 5.86 | 2.53 | 1.24 | 3.00 |
| | | Detailed | 2.02 | 1.01 | 0.58 | 1.13 |
| | rQ | None | 0.03 | 0.05 | 0.12 | 0.07 |
| | | Simplified | 0.03 | 0.02 | 0.03 | 0.03 |
| | | Detailed | 0.04 | 0.02 | 0.04 | 0.03 |
| | rK | None | 0.03 | 0.02 | 0.07 | 0.04 |
| | | Simplified | 0.02 | 0.02 | 0.03 | 0.02 |
| | | Detailed | 0.02 | 0.02 | 0.04 | 0.03 |
| | rV | None | 0.06 | 0.04 | 0.07 | 0.05 |
| | | Simplified | 0.05 | 0.02 | 0.04 | 0.03 |
| | | Detailed | 0.04 | 0.01 | 0.03 | 0.03 |
| | rO | None | 0.04 | 0.05 | 0.12 | 0.07 |
| | | Simplified | 0.03 | 0.01 | 0.02 | 0.02 |
| | | Detailed | 0.02 | 0.01 | 0.01 | 0.01 |
##### Table 160: Statistical results for CREAK using Llama- 2-7b-chat-hf on irrelevant responses.
##### Table 161: Statistical results for Sensemaking using Llama-2-7b-chat-hf on irrelevant responses.
| Dataset | Curve | CoT | MAD (Early) | MAD (Middle) | MAD (Last) | MAD (All) |
|---|---|---|---|---|---|---|
| Wiki | sQ | Len 100 | 0.87 | 0.50 | 0.61 | 0.64 |
| | | Len 500 | 0.58 | 0.30 | 0.51 | 0.44 |
| | | Len 1000 | 0.48 | 0.25 | 0.46 | 0.38 |
| | | Unpopular | 1.00 | 0.59 | 0.66 | 0.74 |
| | sK | Len 100 | 0.84 | 0.51 | 0.74 | 0.67 |
| | | Len 500 | 0.53 | 0.30 | 0.64 | 0.48 |
| | | Len 1000 | 0.45 | 0.25 | 0.59 | 0.42 |
| | | Unpopular | 0.98 | 0.60 | 0.77 | 0.76 |
| | sV | Len 100 | 5.33 | 1.19 | 0.55 | 2.14 |
| | | Len 500 | 3.13 | 0.87 | 0.51 | 1.38 |
| | | Len 1000 | 2.55 | 0.75 | 0.49 | 1.16 |
| | | Unpopular | 9.83 | 1.51 | 0.79 | 3.59 |
| | sO | Len 100 | 2.93 | 0.98 | 0.45 | 1.36 |
| | | Len 500 | 2.08 | 0.76 | 0.44 | 1.01 |
| | | Len 1000 | 1.81 | 0.69 | 0.43 | 0.91 |
| | | Unpopular | 5.34 | 1.37 | 0.69 | 2.28 |
| | rQ | Len 100 | 0.03 | 0.01 | 0.02 | 0.02 |
| | | Len 500 | 0.03 | 0.01 | 0.03 | 0.02 |
| | | Len 1000 | 0.04 | 0.01 | 0.03 | 0.02 |
| | | Unpopular | 0.03 | 0.01 | 0.02 | 0.02 |
| | rK | Len 100 | 0.03 | 0.01 | 0.02 | 0.02 |
| | | Len 500 | 0.02 | 0.01 | 0.03 | 0.02 |
| | | Len 1000 | 0.02 | 0.01 | 0.03 | 0.02 |
| | | Unpopular | 0.03 | 0.01 | 0.02 | 0.02 |
| | rV | Len 100 | 0.05 | 0.02 | 0.02 | 0.03 |
| | | Len 500 | 0.04 | 0.01 | 0.02 | 0.02 |
| | | Len 1000 | 0.04 | 0.01 | 0.02 | 0.02 |
| | | Unpopular | 0.06 | 0.02 | 0.02 | 0.03 |
| | rO | Len 100 | 0.02 | 0.01 | 0.01 | 0.01 |
| | | Len 500 | 0.02 | 0.01 | 0.00 | 0.01 |
| | | Len 1000 | 0.02 | 0.01 | 0.00 | 0.01 |
| | | Unpopular | 0.02 | 0.01 | 0.01 | 0.01 |
##### Table 162: Statistical results for Wiki using Llama-2- 7b-chat-hf on irrelevant responses.
-----
Authors: Yanhong Li, Tianyi Zhou, Ming Li. Published 2024-10-31. arXiv: https://arxiv.org/abs/2410.23743 | Semantic Scholar: https://www.semanticscholar.org/paper/4bda83ec074f38d4a23ef6dacbfe488e76813ea5
What Kind of Pretraining Data Do Large Language Models Rely on When Doing Reasoning? | The capabilities and limitations of Large Language Models (LLMs) have been sketched out in great detail in recent years, providing an intriguing yet conflicting picture. On the one hand, LLMs demonstrate a general ability to solve problems. On the other hand, they show surprising reasoning gaps when compared to humans, casting doubt on the robustness of their generalisation strategies. The sheer volume of data used in the design of LLMs has precluded us from applying the method traditionally used to measure generalisation; train-test set separation. In this work, we study what kind of generalisation strategies LLMs employ when performing reasoning tasks by investigating the pretraining data they rely on. For two models of different sizes (7B and 35B) and 2.5B of their pretraining tokens, we identify what documents impact three simple mathematical reasoning tasks and contrast this to the data that are influential for answering factual questions. We find that, while the models rely on mostly distinct sets of data for each factual question, documents often have a similar influence on different reasoning questions with the same task, indicating the presence of procedural knowledge. We further find that the answers to the factual questions often show up in the most influential data. However, for the reasoning questions the answers usually do not show up as highly influential, nor do the answers to the intermediate reasoning steps. When we characterise the top portion of the ranking for the reasoning questions qualitatively, we find that the influential documents often contain procedural knowledge, like demonstrating how to obtain the solution using formulae or code. Our findings indicate that the generalisation strategy the model uses when doing reasoning is unlike retrieval, but more like a strategy using many documents doing a similar form of reasoning. | null | # WHAT KIND OF PRETRAINING DATA DO LLMS RELY
ON WHEN DOING REASONING?
**Anonymous authors**
Paper under double-blind review
ABSTRACT
The capabilities and limitations of Large Language Models (LLMs) have been
sketched out in great detail in recent years, providing an intriguing yet conflicting
picture. On the one hand, LLMs demonstrate a general ability to solve problems.
On the other hand, they show surprising reasoning gaps when compared to humans, casting doubt on the robustness of their generalisation strategies. The sheer
volume of data used in the design of LLMs has precluded us from applying the
method traditionally used to measure generalisation; train-test set separation. In
this work, we study what kind of generalisation strategies LLMs employ when
performing reasoning tasks by investigating the pretraining data they rely on. For
two models of different sizes (7B and 35B) and 2.5B of their pretraining tokens,
we identify what documents impact three simple mathematical reasoning tasks
and contrast this to the data that are influential for answering factual questions.
We find that, while the models rely on mostly distinct sets of data for each factual
question, documents often have a similar influence on different reasoning questions with the same task, indicating the presence of procedural knowledge. We
further find that the answers to the factual questions often show up in the most
influential data. However, for the reasoning questions the answers usually do not
show up as highly influential, nor do the answers to the intermediate reasoning
steps. When we characterise the top portion of the ranking for the reasoning questions qualitatively, we find that the influential documents often contain procedural
knowledge, like demonstrating how to obtain the solution using formulae or code.
Our findings indicate that the generalisation strategy the model uses when doing
reasoning is unlike retrieval, but more like a strategy using many documents doing
a similar form of reasoning.
1 INTRODUCTION
Current advancements in artificial intelligence are characterised by the increasing scale of datasets,
computational power, and model size (Kaplan et al., 2020; Hoffmann et al., 2022). While one of the
manifestations of this approach, Large Language Models (LLMs), is rapidly saturating benchmarks
measuring reasoning capabilities (e.g. Cobbe et al., 2021; Hendrycks et al., 2021), the debate over
whether they exhibit ‘genuine understanding’ is ongoing (as reviewed by Mitchell & Krakauer,
2023). The well-documented robust and versatile reasoning abilities (e.g. Webb et al., 2023; 2024;
McLeish et al., 2024) sharply contrast with the line of work highlighting the brittleness of LLM
reasoning (Razeghi et al., 2022; McCoy et al., 2023; Ullman, 2023; Wu et al., 2024; Mahowald
et al., 2024). A finding common to these works is that LLM reasoning depends on the frequency of
similar problems in the training data.
A key reason for why benchmark saturation cannot be taken at face value is the issue of data contamination; benchmark data often appear in the pretraining set. Where we typically measure generalisation in machine learning by separating the test data from the training data, the trillions of tokens
used in the design of current state-of-the-art models cannot reasonably be separated from benchmarks anymore. Recent works have documented the extent of the contamination issue (Brown et al.,
2020; Touvron et al., 2023; Gunasekar et al., 2023; Yang et al., 2023; Deng et al., 2024), showing
that many common benchmarks have a high percentage of contaminated data. Additionally, Yang
et al. (2023) show that even rephrased benchmark data that elude N-gram-based detection methods can impact performance, further complicating the issue.
However, it is unclear how and when state-of-the-art LLMs rely on contaminated data to perform reasoning.

Figure 1: The pipeline for creating rankings of the most influential pretraining documents for a question-completion pair (query) using influence functions. The documents at the top of the ranking influence the likelihood of the completion positively, and the bottom negatively. We create rankings for a set of 40 reasoning and 40 factual queries over 5 million pretraining documents (2.5B tokens) for two models of different sizes (Cohere’s Command R series, 7B and 35B).
This raises the question: “what kind of generalisation strategy do LLMs use when reasoning?” In
this work, we take a complementary approach to most interpretability research by focusing on the
pretraining data used by language models to generalise, rather than interpreting the model weights
themselves. We investigate which data influence the model’s produced reasoning traces and how
those data relate to the specific problems being addressed. Are models simply ‘retrieving’ answers
from previously seen pretraining data and reassembling them, or are they employing a more robust
strategy for generalisation? We use a technique from robust statistics adapted to large-scale Transformers (Grosse et al., 2023) to compute the influence of pretraining documents on the likelihood of prompt-completion pairs under a trained model (see Figure 1). We reason as follows: in the
of prompt-completions pairs under a trained model (see Figure 1). We reason as follows; in the
extreme case, a language model answering reasoning questions may rely heavily on retrieval from
parametric knowledge influenced by a limited set of documents within its pretraining data. In this
scenario, a few key documents containing the information to be retrieved (i.e. the reasoning traces
and the answer to the question) contribute significantly to the model’s output, while many other documents play a minimal role, primarily supporting aspects such as general coherence and formatting.
Conversely, at the other end of the spectrum, the model may draw from a broad range of documents
that are more abstractly related to the question, with each contributing a relatively small amount
to the final output. We propose that robust reasoning likely occurs somewhere between these two
extremes, where the model utilises a wider range of pretraining data than in pure retrieval tasks,
but with each document exerting less influence individually compared to those containing directly
retrievable information.
We investigate the pretraining data influential for a set of factual and reasoning questions (specifically, two-step arithmetic, calculating slopes, and solving linear equations) for two LLMs of different sizes (7B and 35B). Our findings are as follows:
1. **Our experiments indicate that documents that are influential for reasoning contain procedural knowledge**: for reasoning questions with the same underlying task, there is a significant correlation between the influence scores of documents for different queries, indicating that the same document often contributes similarly to many questions that require different arithmetic but the same procedure.
2. **The influence of individual documents is on average much lower for reasoning questions than for factual questions**: we look at the influence per nat of query information for
influential documents, and find that usually it is much lower for reasoning questions than
for factual questions, and the overall magnitude is less volatile, indicating that the models
most likely generalise from more documents for reasoning than for factual retrieval.
3. **For the factual questions, the answer often shows up as highly influential, whereas for reasoning questions it does not**: we look at the top 500 influential documents for
each query, and find the answer to factual questions relatively often, and almost never for
reasoning questions.
4. **We find evidence for code being important for reasoning**: for both models, code data is
heavily overrepresented w.r.t. the training distribution for the top portions of the positively
and negatively influential rankings.
Our findings indicate a generalisation strategy for reasoning that is unlike retrieval from parametric
knowledge, and more like learning procedural knowledge from many different documents concerning a similar form of reasoning. However, there are important limitations to our approach, most
notably that we do not calculate influence on the entire training set, which is intractable. This means
there can be other interpretations of our results; e.g. perhaps the answer to the reasoning questions is
simply much less frequent and would show up as highly influential when we look at the entire pretraining set. Or, relatedly, perhaps the magnitude of influence for reasoning questions is on average
much lower because the highly influential documents are in the wider pretraining set, but not in our
sample. We investigate alternative hypotheses such as these in our experiments, and present in our
findings the most likely explanation of the results given the evidence.
The subfield with the aim of understanding how Large Language Models generalise is growing
rapidly. This question can be approached in different ways, and many recent works interpret weights
of smaller models on synthetic tasks to explain particular phenomena that we observe in language
models at scale such as grokking (Wang et al., 2024), in-context learning (Olsson et al., 2022; Singh
et al., 2024), or superposition (Elhage et al., 2022; Bricken et al., 2023). Scaling interpretability
methods to modern-sized LLMs is challenging for many reasons, of which one is computational
tractability. Nonetheless, there are a few works that apply techniques from interpretability to language models at scale. Templeton et al. (2024) use sparse autoencoders to extract interpretable features from Claude 3 Sonnet, and demonstrate how to use these features to control model outputs.
Grosse et al. (2023) adapt EK-FAC influence functions (George et al., 2018) to large-scale Transformers, and use them to understand what kind of pretraining data influence completions of models
up to 50B parameters. The authors show, among many other things, that larger models rely on pretraining data that is more abstractly related to the completion than smaller models. In this work,
we build on the results in Grosse et al. (2023), leaning heavily on their efforts to make influence
functions tractable at scale, but focus instead on understanding reasoning specifically. However,
EK-FAC influence functions are designed to estimate what happens to model weights and continuous differentiable functions thereof such as the likelihood of completions. It is currently unclear
whether that translates to influence on the accuracy of completions. We verify this empirically and
motivate our main experiments by showing on two reasoning datasets for a 7B model that indeed
EK-FAC influence functions estimate effect on accuracy of completions.
2 COMPUTING THE INFLUENCE OF A DOCUMENT ON A COMPLETION
**Background on influence functions.** Given a pretrained model $\theta^u$ that parametrises a distribution over next tokens conditioned on a prompt, $p_{\theta^u}(y_c \mid y_p)$ (where $y_c = \{y_1, \ldots, y_m\}$ is a completion, $y_p = \{y_1, \ldots, y_n\}$ a prompt, and $u$ indicates the parameters are not necessarily trained to convergence), we are interested in finding data from the pretraining set $\mathcal{D} = \{x_i\}_{i=1}^{N}$ that influence the completion. Put differently, we want to know which examples in the pretraining set ‘caused’ a completion. To this end, we use EK-FAC influence functions for large-scale transformers as proposed by Grosse et al. (2023). The parameters $\theta^u$ are typically found by performing a gradient-based iterative algorithm on an objective function and stopping based on some criterion. We want to know the influence of a training document $x_j \in \mathcal{D}$ on the parameters $\theta^u$ (which can be reformulated to influence on any continuous differentiable function of $\theta^u$ using the chain rule). We can calculate influence exactly by removing $x_j$ from the original training set, re-training the model, and comparing the resulting set of parameters (or a function thereof) to the originally trained model. This is intractable for any interesting number of documents and parameters.
Influence functions estimate this counterfactual by taking a Taylor expansion of the response function[1]:

$$\theta^\star(\epsilon) = \arg\min_{\theta \in \mathbb{R}^D} \frac{1}{N} \sum_{i \neq j} \mathcal{L}(x_i, \theta) + \epsilon \, \mathcal{L}(x_j, \theta),$$

where $\mathcal{L}(\cdot)$ is a loss function, like the cross-entropy. The first-order Taylor approximation around $\epsilon = 0$ of the response function is used to reason about how the optimal parameters change if you change $\epsilon$, which changes the weight of the document we want to know the influence of. Using the implicit function theorem, influence can then be defined as follows:

$$\mathcal{I}_{\theta^\star}(x) = \left.\frac{d\theta^\star}{d\epsilon}\right|_{\epsilon=0} = -\mathbf{H}^{-1} \nabla_\theta \mathcal{L}(x, \theta^\star)$$

where $\mathbf{H} = \nabla^2_\theta \mathcal{J}(\theta^u, \mathcal{D})$ is the Hessian of the pretraining objective. Using the chain rule, we estimate the influence of a training document $x = \{x_1, \ldots, x_k\}$ on the completion given a prompt by approximating the following:

$$\mathcal{I}_f(x) = -\nabla_\theta f(\theta^u)^T \, \mathbf{H}^{-1} \nabla_\theta \mathcal{L}(x, \theta^u) \qquad (1)$$
Since we are investigating models with billions of parameters $D$, the above Hessian is intractable, and we estimate it using EK-FAC estimation. For a detailed derivation of this estimation, the reader is referred to the derivations in Sections 2 and 3 of Grosse et al. (2023). We will mention here that EK-FAC involves estimating two expectations, $\mathbb{E}_{p_\theta}[\Delta\theta \Delta\theta^T]$ and $\mathbb{E}_{p_\theta}[A A^T]$, where $A$ denotes the activations of the model using the MLP parameters in $\theta^u$. To make this estimation tractable we make a number of simplifying assumptions across all our estimations, like independence between layers (Grosse et al., 2023).
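To make the shape of this estimator concrete, the following is a minimal sketch of Equation 1 on a toy logistic-regression model, where the Hessian is small enough to form and invert exactly instead of via EK-FAC. It is our own illustration, not the implementation used for the models studied here; the data, damping constant, and query are all made up.

```python
# Minimal sketch of Equation 1: influence(x) = -grad_f(theta)^T H^{-1} grad_L(x, theta),
# with f the query log-likelihood and H the (damped) Hessian of the training loss.
import numpy as np

rng = np.random.default_rng(0)

def loss_grad_hess(theta, X, y):
    """Cross-entropy gradient and damped Hessian for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (p - y) / len(y)
    hess = (X.T * (p * (1 - p))) @ X / len(y) + 1e-3 * np.eye(X.shape[1])
    return grad, hess

# Toy "pretraining" data; each row plays the role of one document.
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) + rng.normal(size=200) > 0).astype(float)

theta = np.zeros(5)
for _ in range(25):                       # rough training with Newton steps
    g, H = loss_grad_hess(theta, X, y)
    theta -= np.linalg.solve(H, g)

# One held-out example stands in for the query; f = log p(completion | prompt).
xq, yq = rng.normal(size=5), 1.0
pq = 1.0 / (1.0 + np.exp(-xq @ theta))
grad_f = -xq * (pq - yq)                  # gradient of the query log-likelihood

_, H = loss_grad_hess(theta, X, y)
scores = []
for i in range(len(X)):                   # influence of each "document" on the query
    g_doc, _ = loss_grad_hess(theta, X[i:i + 1], y[i:i + 1])
    scores.append(-grad_f @ np.linalg.solve(H, g_doc))

ranking = np.argsort(scores)[::-1]        # most- to least-influential documents
print(ranking[:10])
```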
**Adapting EK-FAC influence functions to our problem.** In reality, we are not interested in the
influence on the trained parameters, but in the influence on the accuracy of a trained language model
when answering questions. However, we can only calculate the influence on a continuous differentiable function, and to the best of our knowledge, no work has shown that influence functions also
estimate effect on the underlying accuracy of text produced by next-word prediction. As a proxy
for accuracy, we take as a continuous differentiable function the cross-entropy loss function. In Appendix A.1 we show that the influence calculated in this way surfaces documents that have a causal
effect on the accuracy of a 7B model fine-tuned to do reasoning and reading comprehension tasks.
Namely, if we remove documents from the fine-tuning data according to their influence and re-train
the model, the accuracy drops significantly more than if we take out the same amount of documents
randomly, or the same amount of documents using gradient similarity.
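The counterfactual check just described can be summarised with a small sketch. The code below is a toy stand-in (scikit-learn logistic regression on synthetic data, and a simple per-example score instead of EK-FAC influence); it only illustrates the remove-and-retrain comparison, not the models, data, or scoring actually used in Appendix A.1.

```python
# Toy version of the ablation check: remove the k highest-scoring training examples,
# retrain, and compare the accuracy drop against removing k random examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X @ rng.normal(size=10) + 0.5 * rng.normal(size=500) > 0).astype(int)
X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:], y[400:]

def accuracy(train_idx):
    clf = LogisticRegression(max_iter=1000).fit(X_train[train_idx], y_train[train_idx])
    return clf.score(X_test, y_test)

# Stand-in per-example score (|p - y| * ||x||, the loss-gradient norm under a model
# trained on everything); the real study ranks documents with EK-FAC influence instead.
base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p = base.predict_proba(X_train)[:, 1]
scores = np.abs(p - y_train) * np.linalg.norm(X_train, axis=1)

k = 80
all_idx = np.arange(len(X_train))
keep_after_top_removal = np.setdiff1d(all_idx, np.argsort(scores)[-k:])
keep_after_rand_removal = np.setdiff1d(all_idx, rng.choice(all_idx, size=k, replace=False))

print("full data       :", accuracy(all_idx))
print("remove top-k    :", accuracy(keep_after_top_removal))    # expected to drop more
print("remove random-k :", accuracy(keep_after_rand_removal))
```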
Since it is only reasonably possible to loop over the pretraining data sample once, we need to store more than a single query gradient in memory, and each query gradient has the same memory complexity as the model itself; to make this feasible, Grosse et al. (2023) use singular-value decomposition (SVD) to compress the query gradients. Instead of exact SVD, we use approximate SVD with a probabilistic algorithm (Halko et al., 2011), which significantly speeds up the computation of the query gradients.
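As an illustration of the kind of probabilistic algorithm meant here, the following is a small numpy sketch of the randomised range-finder SVD of Halko et al. (2011). It is not the code used in this work, and the matrix below is only a stand-in for a query gradient reshaped to two dimensions.

```python
# Approximate top-`rank` SVD via a random range finder (Halko et al., 2011).
import numpy as np

def randomized_svd(G, rank, n_oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(size=(G.shape[1], rank + n_oversample))  # random test matrix
    Q, _ = np.linalg.qr(G @ omega)                              # orthonormal basis for range(G)
    B = Q.T @ G                                                 # small projected matrix
    Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], S[:rank], Vt[:rank]

grad = np.random.default_rng(1).normal(size=(256, 128))         # stand-in query gradient
U, S, Vt = randomized_svd(grad, rank=16)
low_rank = (U * S) @ Vt                                         # compressed approximation
print("relative error:", np.linalg.norm(grad - low_rank) / np.linalg.norm(grad))
```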
Using this procedure to approximate Equation 1, we can get scores for documents from the pretraining data $\mathcal{D}$ that represent the influence they have on a query completion $y_c$ given a prompt $y_p$. Given the counterfactual question approximated by the response function, an influence score of 1 implies the log-probability of the sequence $y_c$ is increased by 1 (Grosse et al., 2023). To compare influence scores across queries with different completions (and token lengths), we normalise the scores for each query by the information content of its completion $y_c$, measured in nats. The information content of a query is defined as $I(y_c) = -\log\big(p_{\theta^u}(y_c \mid y_p)\big)$. The influence scores induce a ranking over documents from most- to least-influential (Figure 1), where a score can be interpreted as the increase in log-probability per nat of query information.
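For concreteness, the normalisation amounts to the following few lines; the token log-probabilities and raw scores below are invented for illustration.

```python
# Influence per nat: divide raw influence scores by the information content of the completion.
import numpy as np

completion_token_logprobs = np.array([-0.8, -1.2, -0.3, -2.1, -0.5])   # log p(y_t | y_p, y_<t)
information_nats = -completion_token_logprobs.sum()                    # I(y_c) = -log p(y_c | y_p)

raw_influence_scores = np.array([3.2, 0.7, -0.4, 1.1])                 # one score per document
influence_per_nat = raw_influence_scores / information_nats
print(information_nats, influence_per_nat)
```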
3 EXPERIMENTAL SETUP
The goal of this work is to investigate the generalisation strategy LLMs employ when doing reasoning. We do this by looking at the training data that is influential for model completions involving
reasoning. We contrast this to completions that require retrieving facts from parametric knowledge.
In the extreme case, the model is simply retrieving the answer to the reasoning question as well as
[1] The true response function to derive influence functions for unconverged parameters is the Proximal Bregman function, but this step is left out here. The reader is referred to a derivation in Grosse et al. (2023).
Table 1: Example from the reasoning set
that involves simple two-step arithmetic. The
completion is by the 7B model.
**Reasoning Query Set**
_Prompt_
Calculate the answer: (7 - 4) * 7
Think step-by-step.
_Completion_
First, let’s evaluate the expression
inside the parentheses:
7 - 4 = 3
Now, let’s multiply the result by 7:
3 * 7 = 21
Therefore, the answer to the expression is 21.
Table 2: Example from the factual set that requires retrieving the right answer. The completion is by the 7B model.
**Factual Query Set**
_Prompt_
What is the tallest mountain in the world
and how tall is it?
_Completion_
The tallest mountain in the world is Mount
Everest, which is located in the Himalayas.
It is 29,029 feet tall.
the correct reasoning traces. We can find evidence for or against this qualitatively and quantitatively.
Qualitatively, we can inspect the most influential data and see if the answer to the question is there.
If not, we can further characterise how the documents are related to the question. Quantitatively,
if the model is simply ‘retrieving’ the answer from its weights, the distribution of influence over
training documents for reasoning questions should be similar to the distribution of influence over
documents for factual questions. However, for more robust reasoning, we expect the model to rely
on a relatively larger set of documents, and each document to have less influence on the completion,
meaning the model is generalising from more different documents. Intuitively, if the model is generalising, the completion is not just influenced by a few similar examples, but by many abstractly
related documents.
**Query set. To test these hypotheses, we collect a query set with different types of questions, of**
which 40 are reasoning questions and 40 factual questions. Note that it is only tractable to loop over
the pretraining sample once, so we need to be able to store all query gradients in memory and cannot
go beyond about 80 questions. For the reasoning questions, we identify two types of mathematical
reasoning each model can do robustly with zero-shot chain-of-thought (Wei et al., 2022). We do this
by testing the models on larger sets of 100 questions for each type of reasoning, and selecting tasks
where it gets at least 80% correct. This surfaces simple two-step arithmetic for the 7B model (Table
1), calculating the slope between two numbers for both models, and solving for x in linear equations
for the 35B model (see Appendix A.3 for prompt-completion pairs of both these reasoning types).
We ensure no query ever requires outputting a fraction. To make the results between 7B and 35B
more comparable, we use the same slope questions for both models. For the 40 factual questions,
we make sure the model gets half right and half wrong, allowing us to identify failures of retrieving
facts from parametric knowledge, and also ensure 16 of 40 overlap between models. We calculate
influence over the full completion, which includes the chain-of-thought in the reasoning case.
**Documents set. We want to compare the influence of pretraining data on reasoning by differently**
sized models (7B and 35B), so we select two models that are trained on the same data. The EK-FAC
estimation of the Hessian only needs to be done once per model, but the other terms in Equation
1 require a forward- and backward-pass through the model per document-query pair. This means
that obtaining a ranking over pretraining data for a single query has a computational complexity similar to pretraining itself. To overcome this issue, we sample a set of documents from the pretraining data that covers multiple examples from each batch seen during pretraining, giving a total of 5 million documents (approximately 2.5B tokens) distributed similarly to the training distribution. We
batch queries and obtain the influence scores in parallel. Each document contains 512 tokens[2].
**EK-FAC estimation. To estimate the Hessian for the 7B and 35B models (the expectations from**
Section 2), we randomly sample 100,000 documents equally spread-out through pre-training for
both models. Details on how exactly we approximate the Hessian are in Appendix A.2.
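As a rough picture of what this estimation involves, the sketch below accumulates the two Kronecker factors behind (E)K-FAC for a single toy linear layer. The activations and gradients are synthetic stand-ins; the actual procedure is applied per layer of the real models and additionally re-fits the eigenvalues of the resulting approximation (hence "eigenvalue-corrected" K-FAC).

```python
# Accumulate K-FAC factors for one layer: A = E[a a^T] (inputs), S = E[g g^T] (pre-activation grads).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_docs = 8, 4, 1000

A = np.zeros((d_in, d_in))
S = np.zeros((d_out, d_out))
for _ in range(n_docs):
    a = rng.normal(size=d_in)      # layer input for one sampled pretraining document
    g = rng.normal(size=d_out)     # stand-in for dLoss/d(pre-activation) from backprop
    A += np.outer(a, a) / n_docs
    S += np.outer(g, g) / n_docs

# K-FAC approximates this layer's curvature block as the Kronecker product A ⊗ S.
H_block = np.kron(A, S)
print(H_block.shape)               # (d_in * d_out, d_in * d_out)
```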
[2] We choose 512 tokens instead of the same context length as seen during training because qualitatively interpreting documents of more than 512 tokens is difficult and usually they span many different topics.
**Models.** We look at two models of different sizes, 7B and 35B, which are base and supervised
fine-tuned versions of Cohere’s Command R series[3]. We estimate the second order information
and calculate document gradients using the base models, and generate completions and calculate
the query gradients using the models fine-tuned with supervised instruction-tuning. The reason for
choosing this setup is that the fine-tuned models are much better at instruction following. This means
we are assuming the EK-FAC for the fine-tuning phase is the identity (Bae et al., 2024), and we are
focusing only on the influence of the pretraining data and ignoring the fine-tuning data.
4 EXPERIMENTS AND RESULTS
4.1 QUALITATIVE ANALYSIS
We perform three types of qualitative analyses on the highest ranked portions for each query; we
search for the answer, we characterise the documents’ relation to the reasoning queries, and we
investigate what source datasets they are from (e.g. code, arXiv, Wikipedia, etc.).
**_Finding 1: The answer to the factual questions shows up relatively often in the top influential documents for the factual questions, and almost never for the reasoning questions._**
To find the answer to the questions in the queries in the top documents manually, we construct keywords for each query that should be in the document if the answer is there. For example, for the factual query in Table 2, the keywords are “tallest”, “highest”, “Mount Everest”, “29,029”, “8,848”, “8,849” (these last two are chosen because they represent the height in meters that are both considered correct). For the reasoning queries, we construct many more keywords per query, but some examples for the example in Table 1 are 7 − 4, 3, 21, 3 ∗ 7, as well as replacing the operations with words like ‘minus’ and ‘times’, and different ways of representing the content in this query. For details on which keywords we use for each query, see Appendix A.4. We determine the occurrence of each of these keywords independently in the top 100 documents for each query (meaning even if just the keyword ‘7’ is present it would be a hit), resulting in many false positives. We manually look over the hits to find the answer. On top of that, we craft a prompt for Command R+ (a more capable 100B model) to find the answer in a query-document pair, and use it to find the answer in the top 500 documents for each query (the prompt is given in Appendix A.5). Then, we manually look over the hits and keep track of documents that have the answer to a query. We do this after our manual search, and verify that Command R+ finds all, and more, of the answers we have identified ourselves. We look for the full answer in a single document. For the reasoning queries, we also count partial answers in separate documents if they combine to the full answer. For example, if one document contains 7 − 4 = 3, and another 3 ∗ 7 = 21, we consider that an answer to the question. In the process, we keep track of other simple arithmetic occurring in documents, which we will use below to comment on the likelihood of the answer to the reasoning being present in the broader set of 5 million documents.

Figure 2: We search for the answer in the top 500 documents for each query, and find it relatively frequently for the factual questions. For the reasoning questions, we find the answer twice for the 7B model, and never for the 35B. Both those times, the answer is spread out in the form of reasoning traces over two documents.
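The keyword screening step can be pictured with a short sketch. The query identifiers, keywords, and documents below are invented, and the actual pipeline runs over the top 100 (manual) and top 500 (Command R+-assisted) documents per query.

```python
# Flag top-ranked documents that contain any answer keyword for a query;
# hits are then checked by hand (or by a stronger model prompted to verify).
answer_keywords = {
    "reasoning_query_example": ["7 - 4", "7 minus 4", "3 * 7", "3 times 7", "21"],
    "factual_query_example": ["tallest", "highest", "Mount Everest", "29,029", "8,848", "8,849"],
}
top_documents = {
    "reasoning_query_example": ["... so 7 - 4 = 3, and then 3 * 7 = 21 ...", "unrelated text"],
    "factual_query_example": ["Mount Everest is the highest peak above sea level ..."],
}

def keyword_hits(query_id):
    hits = []
    for rank, doc in enumerate(top_documents[query_id]):
        matched = [kw for kw in answer_keywords[query_id] if kw.lower() in doc.lower()]
        if matched:
            hits.append((rank, matched))
    return hits

for query_id in answer_keywords:
    print(query_id, keyword_hits(query_id))
```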
The results are shown in Figure 2. For the 7B model, we find the answer in the top 500 documents
for 55% of the queries, whereas for the reasoning documents it happens for 7.4% of the queries.
For the 35B model, the answer to the factual queries shows up in the top influential documents 30%
of the time, and never for the reasoning set. We expect the answer shows up less frequently for
the 35B model simply because the factual questions are much more ‘niche’. For example, one of
the questions the model gets correct is “In which year did the Beinecke Library open?”. Moreover,
in certain cases, the answer shows up multiple times in the top 500 documents. If we count all
these separately, as opposed to a binary ‘yes’ or ‘no’ per query on which the results in Figure 2
[3] [https://cohere.com/command](https://cohere.com/command)
are based, answers to questions show up 30 times for the factual questions in the 7B rankings,
and twice for the reasoning questions. For the 35B, the same result is 15 times for the factual
questions, and never for the reasoning questions. Interestingly, the answer to the factual questions
often shows up in different languages, like Spanish, French, or Portuguese. We give two examples
in Appendix A.8. One explanation for the answer to the reasoning questions not showing up, is
that they are simply not present in the larger set of 5 million documents. This could be the case
for the slope and linear equation questions, which also means they are relatively infrequent in the
larger pretraining set, but for the arithmetic questions this seems highly unlikely. These questions all
contain numbers below 10, meaning the answer is already there if one document contains 7 − 4 = 3
and another 3 ∗ 7 = 21. By keeping track of the simple arithmetic we encounter in our manual
qualitative analysis, we identify 4 documents that contain the answers to the part between brackets
for arithmetic queries that do not show up in the top 500 documents for the relevant queries, and
expect that in the broader set of 5 million documents there will be many more. Nonetheless, there
is a possibility that this is not the case, which we discuss further in the limitations section below in
relation to the quantitative findings.
**_Finding 2: We find that influential documents for the reasoning queries are often doing a similar form of step-by-step reasoning, e.g. also arithmetic. Further, we find that the influential documents often implement the solution to the reasoning question in code or general math._**
For the slope queries (of which we have 20 which are the same for both models), many different
documents surface as highly influential that show how to calculate the slope between two points in
code or math. For the 7B model, documents that present procedural knowledge on how to calculate
the slope in either code or math show up in the top 100 documents for 16/20 queries, and for the
35B model they show up for all queries. All together, we manually find 11 unique documents that
implement the slope in code in the top 100 documents, and 16 that present equations for calculating
the slope. The 7B model relies on 18 of these documents for its completions (meaning 18 different
ones appear in the top 100 documents for all queries), and the 35B on 9. An example of a highly
influential document implementing the solution in JavaScript and in math:

Code Snippet

```javascript
function eqOfLine(x1, y1, x2, y2) {
  if (x1 === x2) {
    // Handle a vertical line
    return `x = ${x1}`;
  } else {
    // Calculate the slope
    const m = (y2 - y1) / (x2 - x1);
    // Use point-slope form
    const b = y1 - m * x1;
    // Return y = mx + b
    return `y = ${m}x + ${b}`;
  }
}
```

Math Snippet

If a straight line passing through the points $P(x_1, y_1), Q(x_2, y_2)$ is making an angle $\theta$ with the positive X-axis, then the slope of the straight line is:

(A) $\frac{y_2 + y_1}{x_2 + x_1}$
(B) $\theta$
(C) $\frac{y_2 - y_1}{x_2 - x_1}$
(D) $\sin\theta$

**Solution:** Correct answer: (C)
We prompt Command R+ to further characterise the top 500 documents for each query by choosing
from a set of provided keywords, and find that often the documents are doing similar arithmetic on
other numbers (e.g. much larger or smaller), doing similar arithmetic on similar numbers (for the
slope questions), or similar algebraic operations on similar numbers (for solving linear equations).
We present the detailed results and prompt for this analysis in Appendix A.8.
**_Finding 3: For factual queries, the most frequent positively influential data sources include Wikipedia and trivia, while for reasoning questions, key sources consist of math, StackExchange, ArXiv, and code._**
We look at the type of source datasets that represent the most influential documents. Specifically, we
count the source datasets of the top k documents with k ∈ {50, 500, 5000, 50000, 500000} (recall
we have 5 million documents, and about half of them are usually positively influential), and compare
the count to the pretraining distribution. We present the details of the results in Appendix A.8, but
mention here that code data is highly influential for reasoning. StackExchange as a data source has
ten times more influential data in the top portions of the rankings than expected if the influential data
was randomly sampled from the pretraining distribution. Other code sources are twice as influential
as expected when drawing randomly from the pretraining distribution for k = 50 up to k = 50000.
In Appendix A.8.1, we confirm that code is influential for reasoning.
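As an illustration of this source-level comparison, the over-representation of each source among the top-k documents relative to the pretraining distribution could be computed roughly as follows (a minimal sketch; the `source` field name is a hypothetical stand-in for however the source dataset is recorded):

```python
from collections import Counter

def source_overrepresentation(ranked_docs, pretraining_counts, k):
    """Compare source-dataset frequencies in the top-k influential documents
    to the overall pretraining distribution.

    ranked_docs: documents sorted from most to least influential, each a dict
                 with a 'source' field (hypothetical field name).
    pretraining_counts: Counter mapping source -> number of documents in the
                        full pretraining sample.
    """
    top_counts = Counter(doc["source"] for doc in ranked_docs[:k])
    total = sum(pretraining_counts.values())
    ratios = {}
    for source, count in top_counts.items():
        expected = k * pretraining_counts[source] / total
        if expected > 0:
            ratios[source] = count / expected  # >1 means over-represented in the top-k
    return ratios
```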
Figure 3: Top row (left and middle): the ranked influence scores per query nat for each query, shown separately in log-log space. We observe that the results seem to follow power laws (linear in log-log space), that everything is shifted up for the 35B model (right), that the scores for the reasoning documents are generally lower for the 7B model, and that for the 35B model there is less variance in the magnitude of influence for reasoning queries than for factual queries. Bottom row (left and middle): the total influence per nat of query completion information for different portions of the positive ranking over documents, left for the 7B model and right for the 35B. We observe that the total influence per nat is usually lower for reasoning questions than for factual questions, and that the influence per document varies considerably more for factual questions than for reasoning questions, especially for the 35B model. Right column: the correlations between the influences of all documents for queries of a specific type. We observe a correlation of influence only for queries of the same reasoning type.
4.2 QUANTITATIVE ANALYSIS
**Power law shape. Similarly to what Grosse et al. (2023) report, we observe that the top parts of**
the rankings over documents follow a power law characterised by a linear relation between rank and
influence per nat in log-log space (shown in the top row of Figure 3). We can fit linear functions
to the rankings in log-log space, and analyse the slopes to comment on the sparsity of the rankings
(i.e. how many documents do models rely on for a completion). Specifically, we perform linear
regression on the log-log top 500 rankings of each query, and report the slopes in Table 3. We
find that, counter to our expectation, the slopes for the reasoning questions by the 35B are steeper
than for the factual questions. For the 7B, the slopes for the reasoning questions the model gets
right are on average also a bit steeper than for the factual questions, but the effect goes away when
comparing slopes for all factual vs. reasoning queries. After qualitatively inspecting the queries
for the 35B model with the steepest slope, we believe an explanation for this result may be ‘noise’
in the influence scores. For example, the query with the steepest slope (α = −0.45) has as the
most influential document a document that is seemingly entirely unrelated to the query. Namely, the
query asks the question “What is the slope of the line passing through the points (41, 23) and (18, 92)? Think step-by-step.”, and the top influential document is a snippet about lunar eclipses and when and where they can be viewed, which does not have high N-gram overlap with the query either.
This is the only query for which we observe an unrelated top 1 document, but for the 35B model we
qualitatively observed seemingly irrelevant documents in the rankings more often (in the 7B we did
not observe this). This connects to a finding from literature that for large models influence functions
sometimes surface documents with high gradient norms that are unrelated to the query (Barshan
et al., 2020; Grosse et al., 2023; Choe et al., 2024). As Grosse et al. (2023) note, it is currently
unclear whether this is true noise, or whether these are genuinely influential for the completions.
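The slope estimation described above amounts to ordinary least squares in log-log space; a minimal sketch, assuming the per-document influence scores for a single query are available as a NumPy array, is:

```python
import numpy as np

def powerlaw_slope(influences, top_k=500):
    """Slope of a linear fit to the top-k positive influence scores in
    log-log space (log rank vs. log influence per nat)."""
    top = np.sort(np.asarray(influences))[::-1][:top_k]
    top = top[top > 0]                          # only positive scores are ranked
    ranks = np.arange(1, len(top) + 1)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(top), deg=1)
    return slope
```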
**_Finding 4: When reasoning, the model on average relies on each individual document less than_**
**_when answering factual questions, indicating it is generalising from a larger set of documents._**
**_The effect is more pronounced for the larger model._**
To characterise the full rankings beyond the top 500 documents for factual questions and reasoning
questions quantitatively, we show the total influence for different percentiles of the positive parts
of the rankings in the bottom row of Figure 3. The results depict the total amount of influence
contained in the top-k percentile of the positively ranked documents; e.g. the 20th percentile
contains 20% of the positive documents for a query, and the amount of total influence shown is
the sum of all document influences of that part of the ranking. We observe two things for both
models. Firstly, the amount of influence for factual questions for the entire ranking is significantly
higher than for reasoning questions. Secondly, there is significantly more variation in the influence
of documents at the same rank across different factual queries. The first finding indicates that,
on average, the models rely on each single document within our set less for reasoning questions
than for factual questions. This could either mean that these documents are relatively uninfluential
for the reasoning queries compared to the entire pretraining data, and a search over more data
would surface more highly influential data. Alternatively, it could mean that the model relies
on individual documents less for reasoning than for factual question answering. The qualitative
analysis points to the second explanation, and we will further discuss this question in the discussion.
The second finding indicates that for the factual questions the model relies on more ‘specific’
and infrequent documents; it is more up to chance whether highly influential documents for
different factual questions are part of the pretraining sample or not. If we compare the result
between models, we find that these two findings are much more pronounced for the 35B model.
This could again be explained by two things; either the 35B model learns more from a single
relevant document for factual questions than the 7B model (indicating higher data efficiency),
or the fact that the factual questions for the 35B are much more ‘niche’ explains this variation.
We dive into this question in Appendix A.9, and find that the effect remains even if we only
look at factual queries that are the same for both model sizes, which points to the first explanation. Finally, we note that when doing a similar characterisation of the most negatively influential
documents (i.e. the negative portion of the ranking), we find very similar results (see Appendix A.9).
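For reference, the percentile curves described above can be computed in a few lines; this is a sketch under the assumption that `influences` holds the per-document scores for a single query:

```python
import numpy as np

def influence_by_percentile(influences, percentiles=(20, 40, 60, 80, 100)):
    """Total influence contained in the top-p percent of the positively
    influential documents for a single query."""
    scores = np.asarray(influences)
    positive = np.sort(scores[scores > 0])[::-1]   # positive scores, descending
    return {p: positive[: int(len(positive) * p / 100)].sum() for p in percentiles}
```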
**_Finding 5: There is a significant positive correlation between the influence scores of documents_**
**_for queries with the same underlying reasoning task, indicating that these documents are relevant_**
**_for questions requiring the same procedure, but different operations._**
If models are relying on documents that contain procedural knowledge that is applicable to any
query with the same task (e.g. queries that require finding the slope between two points for many
different points), we would expect there to be a significant correlation in the influence scores for
these queries. We calculate the Pearson’s R correlation between all 5 million documents for all
queries, and take the mean for different sets of the queries: factual questions, two-step arithmetic
questions (20 queries for the 7B), slope questions (20 queries for the 7B and 35B), and solving linear
equations (20 queries for the 35B). The result is presented in Figure 3. We find a strongly significant
(p-values all below 4e-8) positive correlation between queries of the same reasoning type, and a
strongly significant absence of correlation (p-values all around 4e-3) for the factual queries or other
combinations (reasoning queries of different types, for example). This result indicates that for the
same type of reasoning, many documents have a similar influence. Given that the different types of
reasoning queries require applying the same procedure to different numbers, the positive correlation
indicates that the influence scores for reasoning queries pick up on procedural knowledge.
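A sketch of this correlation analysis, assuming the influence scores are stored as a matrix with one row per query and one column per document, might look as follows:

```python
import numpy as np
from scipy.stats import pearsonr

def mean_pairwise_correlation(influence_matrix, query_indices):
    """Mean Pearson's R between the per-document influence vectors of the
    given queries (one row per query, one column per document)."""
    rs = []
    for i, a in enumerate(query_indices):
        for b in query_indices[i + 1:]:
            r, _p = pearsonr(influence_matrix[a], influence_matrix[b])
            rs.append(r)
    return float(np.mean(rs))
```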
Table 3: Slopes of the fitted functions to the top 500 documents in the influence rankings in log-log
space, separated by query set and whether the model gets the question right or wrong.
| | 7B (Correct) | 7B (Incorrect) | 35B (Correct) | 35B (Incorrect) |
| --- | --- | --- | --- | --- |
| Reasoning (α) | −0.36 ± 0.03⋆ | −0.33 ± 0.02 | −0.36 ± 0.04⋆⋆ | −0.38 ± 0.04⋆ |
| Factual (α) | −0.34 ± 0.03 | −0.34 ± 0.04 | −0.32 ± 0.05 | −0.34 ± 0.04 |
5 DISCUSSION, LIMITATIONS, AND FUTURE WORK
In this work, we investigate what kind of generalisation strategy two LLMs (with 7B and 35B
parameters respectively) employ when doing reasoning, and contrast it to the strategy used for a task
that requires retrieving factual parametric knowledge. By creating rankings for 80 such questions
over 5 million pre-training documents from most- to least-influential, we make several qualitative
observations. Firstly, the answer to the factual questions often shows up as part of the 500 most
influential documents. By contrast, for the reasoning questions the answer only shows up twice.
When characterising the most influential documents for reasoning, we find that they often contain
procedural knowledge to get to the solution, or similar mathematical operations.
Recall that the influence of a document on a query can be interpreted as the increase in log-probability per nat of query information. In other words, an influence score of 1 indicates that
the document increases the log-likelihood of the query completion by 1 per nat of information in
the query. The distribution of influence over documents tells us something about the type of generalisation strategy the model is likely using; the more documents that collectively contribute to each
nat of query information across the pretraining data (i.e. the more spread out the total influence),
the more documents the model is relying on to produce the completion. Intuitively, the most logical
hypothesis is to expect a steeper power law for factual questions than for reasoning questions. Our
results show weak evidence for the opposite, at least at the part of the ranking that follows a power
law (the top 500 documents). In the context of our broader findings, we explain this result by noise
surfaced by influence functions, which connects to findings from several previous studies (Barshan
et al., 2020; Grosse et al., 2023; Choe et al., 2024), but future work must elucidate whether these irrelevant highly influential documents are actually noise, or do in fact strongly influence completions.
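For concreteness, the per-nat normalisation of influence scores used throughout can be sketched as follows (this assumes access to the token-level log-probabilities of the query completion; it is an illustration of the definition, not a description of our exact pipeline):

```python
import numpy as np

def influence_per_nat(raw_influence, completion_token_logprobs):
    """Normalise a raw influence score (increase in completion log-probability)
    by the information content of the completion in nats, i.e. its total
    negative log-likelihood under the model."""
    completion_nats = -np.sum(completion_token_logprobs)
    return raw_influence / completion_nats
```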
When comparing the positive rankings for the reasoning questions beyond the top 500 to those for
the factual questions, we observe several things; a lower total magnitude of influence of documents
on reasoning traces on average, and much less variance in the total amount of influence present.
This indicates that for a random set of documents sampled i.i.d. from the pretraining distribution, whether
documents have a strong influence on factual questions is more volatile than for reasoning questions,
which indicates reliance on more specialised and infrequent knowledge for the former. Further, we
find a significant correlation of all influence scores for different questions that require similar types
of reasoning, indicating that many documents impact different reasoning questions similarly. Depending on whether the results look similar for the entire pretraining set, this could mean that the
distribution of influence is indeed more spread out for reasoning questions
than for factual questions. If the opposite is true, and there are in fact documents in the broader
pretraining set that are more highly influential for reasoning than the ones we surface, that would
result in the opposite conclusion, namely that LLMs rely on very sparse and infrequent documents
for reasoning. If this is the case, it means we are effectively looking at a set of relatively uninfluential
documents for reasoning. We would argue that this is the more unlikely explanation for two reasons:
(1) the qualitative analysis shows that the influential data for the reasoning questions is intuitively
highly relevant, and (2) the correlation of influence scores for the different reasoning tasks is highly
significant. If the surfaced documents are relatively uninfluential compared to other pretraining data,
we would expect less regular patterns of influence for individual documents than we observe. Given
the complex learning dynamics underlying LLMs, it is difficult to say much conclusively about the
distribution of influence of individual queries one should expect for robust generalisation, however,
taken together holistically we argue the results indicate a generalisation strategy that is unlike retrieval from parametric knowledge. Regardless, the nature of interpretability research such as the
work presented in this manuscript is that all we can do is provide evidence, and not proof. We
discuss limitations of influence functions more broadly in Appendix A.7.
This work spurs interesting avenues for future work, many of which can be explored with the experiments done here. Firstly, what about code data makes it influential for reasoning? What kind of
code is positively influential, and what kind negatively? Another interesting avenue for future work
would be exploring using our findings for pretraining data selection. We find that many documents
are similarly influential for different questions requiring the same type of step-by-step reasoning.
When we look at such documents that influence correct reasoning traces, what characterises them,
and what can we learn about the kind of data that produces more robust reasoning? Further, we find
evidence for crosslingual retrieval of parametric knowledge that we did not explore systematically.
With this work, we do not claim to say contamination is not an issue, or that LLM reasoning is not
brittle and reliant on pretraining statistics. All we showed is that in principle it seems to be possible
for LLMs to produce reasoning traces using a generalisation strategy that combines information
from many abstractly related documents, as opposed to doing a form of retrieval. This is not to say
that there are no cases of LLM reasoning where the model is in fact doing retrieval, on the contrary,
models can be overfit to contaminated data if it appears often enough in the training data. Another
interesting avenue for future work would be identifying an overfit form of reasoning in LLMs and
determining whether the distribution of influence over rankings looks more like the factual questions.
REPRODUCIBILITY STATEMENT
Although this work is based on proprietary models and pretraining data, we make the following
efforts for reproducibility. For one of the models we use (the 35B model), the final-stage model
(further trained after SFT) is publicly available on HuggingFace[4]. We provide all queries, completions, and keywords in the supplemental material. Additionally, we verify that the influence scores
generated with our internal codebase correlate with a Pearson’s R of more than 0.99 with a public
implementation of EK-FAC influence functions (see Appendix A.2). Further, we provide details on
hyperparameters for every experiment we have done at the relevant sections, as well as the prompts
used to find answers to the reasoning questions and characterise the relationship between the query-document pairs (Appendix A.5 and A.6 respectively).
REFERENCES
Viraat Aryabumi, Yixuan Su, Raymond Ma, Adrien Morisot, Ivan Zhang, Acyr Locatelli, Marzieh
Fadaee, Ahmet Üstün, and Sara Hooker. To code, or not to code? exploring impact of code in
[pre-training, 2024. URL https://arxiv.org/abs/2408.10914.](https://arxiv.org/abs/2408.10914)
Juhan Bae, Wu Lin, Jonathan Lorraine, and Roger Grosse. Training data attribution via approximate
[unrolled differentiation, 2024. URL https://arxiv.org/abs/2405.12186.](https://arxiv.org/abs/2405.12186)
Elnaz Barshan, Marc-Etienne Brunet, and Gintare Karolina Dziugaite. Relatif: Identifying explanatory training samples via relative influence. In Silvia Chiappa and Roberto Calandra (eds.),
_Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics,_
volume 108 of Proceedings of Machine Learning Research, pp. 1899–1909. PMLR, 26–28 Aug
[2020. URL https://proceedings.mlr.press/v108/barshan20a.html.](https://proceedings.mlr.press/v108/barshan20a.html)
Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu,
Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex
Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter,
Tom Henighan, and Christopher Olah. Towards monosemanticity: Decomposing language
models with dictionary learning. _Transformer Circuits Thread, 2023._ https://transformercircuits.pub/2023/monosemantic-features/index.html.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh,
Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz
Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In
H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neu_ral Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc.,_
2020. [URL https://proceedings.neurips.cc/paper_files/paper/2020/](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
[file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
Sang Keun Choe, Hwijeen Ahn, Juhan Bae, Kewen Zhao, Minsoo Kang, Youngseog Chung, Adithya
Pratapa, Willie Neiswanger, Emma Strubell, Teruko Mitamura, Jeff Schneider, Eduard Hovy,
Roger Grosse, and Eric Xing. What is your data worth to gpt? llm-scale data valuation with
[influence functions, 2024. URL https://arxiv.org/abs/2405.13954.](https://arxiv.org/abs/2405.13954)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168,
2021.
[Dask Development Team. Dask: Library for dynamic task scheduling, 2016. URL http://](http://dask.pydata.org)
[dask.pydata.org.](http://dask.pydata.org)
[4] https://huggingface.co/CohereForAI/c4ai-command-r-v01
Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, and Arman Cohan. Benchmark
probing: Investigating data leakage in large language models. In NeurIPS 2023 Workshop
_on Backdoors in Deep Learning - The Good, the Bad, and the Ugly, 2024._ [URL https:](https://openreview.net/forum?id=a34bgvner1)
[//openreview.net/forum?id=a34bgvner1.](https://openreview.net/forum?id=a34bgvner1)
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner.
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In
_Proc. of NAACL, 2019._
Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna
Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse,
Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah.
Toy models of superposition. _Transformer Circuits Thread_, 2022. https://transformer-circuits.pub/2022/toy_model/index.html.
Thomas George, César Laurent, Xavier Bouthillier, Nicolas Ballas, and Pascal Vincent. Fast
approximate natural gradient descent in a kronecker factored eigenbasis. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.),
_Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.,_
2018. URL [https://proceedings.neurips.cc/paper_files/paper/2018/](https://proceedings.neurips.cc/paper_files/paper/2018/file/48000647b315f6f00f913caa757a70b3-Paper.pdf)
[file/48000647b315f6f00f913caa757a70b3-Paper.pdf.](https://proceedings.neurips.cc/paper_files/paper/2018/file/48000647b315f6f00f913caa757a70b3-Paper.pdf)
Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit
Steiner, Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger, Kamilė Lukošiūtė, Karina Nguyen,
Nicholas Joseph, Sam McCandlish, Jared Kaplan, and Samuel R. Bowman. Studying large language
[model generalization with influence functions, 2023. URL https://arxiv.org/abs/2308.](https://arxiv.org/abs/2308.03296)
[03296.](https://arxiv.org/abs/2308.03296)
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio C´esar Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah,
Harkirat Singh Behl, Xin Wang, S´ebastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee,
[and Yuanzhi Li. Textbooks are all you need, 2023. URL https://arxiv.org/abs/2306.](https://arxiv.org/abs/2306.11644)
[11644.](https://arxiv.org/abs/2306.11644)
N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[doi: 10.1137/090771806. URL https://doi.org/10.1137/090771806.](https://doi.org/10.1137/090771806)
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob
Steinhardt. Measuring massive multitask language understanding. Proceedings of the International
_Conference on Learning Representations (ICLR), 2021._
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza
Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Thomas
Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karén Simonyan, Erich Elsen, Oriol Vinyals, Jack Rae, and Laurent Sifre. An empirical analysis of compute-optimal large language model training. In
S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances
_in Neural Information Processing Systems, volume 35, pp. 30016–30030. Curran Associates,_
[Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/](https://proceedings.neurips.cc/paper_files/paper/2022/file/c1e2faff6f588870935f114ebe04a3e5-Paper-Conference.pdf)
[file/c1e2faff6f588870935f114ebe04a3e5-Paper-Conference.pdf.](https://proceedings.neurips.cc/paper_files/paper/2022/file/c1e2faff6f588870935f114ebe04a3e5-Paper-Conference.pdf)
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott
Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020.
[URL https://arxiv.org/abs/2001.08361.](https://arxiv.org/abs/2001.08361)
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International
_Conference on Learning Representations (ICLR), San Diega, CA, USA, 2015._
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. RACE: Large-scale ReAding
comprehension dataset from examinations. In Martha Palmer, Rebecca Hwa, and Sebastian Riedel
(eds.), Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,
pp. 785–794, Copenhagen, Denmark, September 2017. Association for Computational Linguistics.
[doi: 10.18653/v1/D17-1082. URL https://aclanthology.org/D17-1082.](https://aclanthology.org/D17-1082)
Kyle Mahowald, Anna Ivanova, Idan Blank, Nancy Kanwisher, Joshua Tenenbaum, and Evelina Fedorenko. Dissociating language and thought in large language models. Trends in Cognitive Sciences,
28, 03 2024. doi: 10.1016/j.tics.2024.01.011.
R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, and Thomas L. Griffiths. Embers of
autoregression: Understanding large language models through the problem they are trained to solve,
[2023. URL https://arxiv.org/abs/2309.13638.](https://arxiv.org/abs/2309.13638)
Sean McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R. Bartoldson, Bhavya
Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, and Tom Goldstein. Transformers
[can do arithmetic with the right embeddings, 2024. URL https://arxiv.org/abs/2405.](https://arxiv.org/abs/2405.17399)
[17399.](https://arxiv.org/abs/2405.17399)
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models, 2016.
Melanie Mitchell and David C. Krakauer. The debate over understanding in ai’s large language models.
_Proceedings of the National Academy of Sciences, 120(13):e2215907120, 2023. doi: 10.1073/pnas._
[2215907120. URL https://www.pnas.org/doi/abs/10.1073/pnas.2215907120.](https://www.pnas.org/doi/abs/10.1073/pnas.2215907120)
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan,
Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane
Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish,
and Chris Olah. In-context learning and induction heads. _Transformer Circuits Thread, 2022._
https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html.
Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. Estimating training data influence by tracing gradient descent. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin
(eds.), Advances in Neural Information Processing Systems, volume 33, pp. 19920–19930. Cur[ran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/](https://proceedings.neurips.cc/paper_files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-Paper.pdf)
[paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-Paper.pdf.](https://proceedings.neurips.cc/paper_files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-Paper.pdf)
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. Impact of pretraining
term frequencies on few-shot numerical reasoning. In Yoav Goldberg, Zornitsa Kozareva, and Yue
Zhang (eds.), Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 840–
854, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguis[tics. doi: 10.18653/v1/2022.findings-emnlp.59. URL https://aclanthology.org/2022.](https://aclanthology.org/2022.findings-emnlp.59)
[findings-emnlp.59.](https://aclanthology.org/2022.findings-emnlp.59)
Aaditya K Singh, Ted Moskovitz, Felix Hill, Stephanie C.Y. Chan, and Andrew M Saxe. What needs
to go right for an induction head? a mechanistic study of in-context learning circuits and their
[formation. In Forty-first International Conference on Machine Learning, 2024. URL https:](https://openreview.net/forum?id=O8rrXl71D5)
[//openreview.net/forum?id=O8rrXl71D5.](https://openreview.net/forum?id=O8rrXl71D5)
Adly Templeton, Tom Conerly, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam
Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, Hoagy Cunningham, Nicholas L Turner,
Callum McDougall, Monte MacDiarmid, C. Daniel Freeman, Theodore R. Sumers, Edward Rees,
Joshua Batson, Adam Jermyn, Shan Carter, Chris Olah, and Tom Henighan. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. Transformer Circuits Thread, 2024.
[URL https://transformer-circuits.pub/2024/scaling-monosemanticity/](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html)
[index.html.](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu,
Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee,
Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra,
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh
Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey
Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023. URL
[https://arxiv.org/abs/2307.09288.](https://arxiv.org/abs/2307.09288)
Tomer Ullman. Large language models fail on trivial alterations to theory-of-mind tasks, 2023. URL
[https://arxiv.org/abs/2302.08399.](https://arxiv.org/abs/2302.08399)
Boshi Wang, Xiang Yue, Yu Su, and Huan Sun. Grokked transformers are implicit reasoners: A
[mechanistic journey to the edge of generalization, 2024. URL https://arxiv.org/abs/](https://arxiv.org/abs/2405.15071)
[2405.15071.](https://arxiv.org/abs/2405.15071)
Taylor Webb, Keith Holyoak, and Hongjing Lu. Emergent analogical reasoning in large language
models. Nature Human Behaviour, 7:1–16, 07 2023. doi: 10.1038/s41562-023-01659-w.
Taylor Webb, Keith J. Holyoak, and Hongjing Lu. Evidence from counterfactual tasks supports emer[gent analogical reasoning in large language models, 2024. URL https://arxiv.org/abs/](https://arxiv.org/abs/2404.13070)
[2404.13070.](https://arxiv.org/abs/2404.13070)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V
Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In
Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neu_[ral Information Processing Systems, 2022. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=_VjQlMeSB_J)_
[_VjQlMeSB_J.](https://openreview.net/forum?id=_VjQlMeSB_J)
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim,
Jacob Andreas, and Yoon Kim. Reasoning or reciting? exploring the capabilities and limitations of
language models through counterfactual tasks. In Kevin Duh, Helena Gomez, and Steven Bethard
(eds.), Proceedings of the 2024 Conference of the North American Chapter of the Association for
_Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 1819–_
1862, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/
[v1/2024.naacl-long.102. URL https://aclanthology.org/2024.naacl-long.102.](https://aclanthology.org/2024.naacl-long.102)
Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E. Gonzalez, and Ion Stoica. Rethinking
[benchmark and contamination for language models with rephrased samples, 2023. URL https:](https://arxiv.org/abs/2311.04850)
[//arxiv.org/abs/2311.04850.](https://arxiv.org/abs/2311.04850)
A APPENDIX
In Appendix A.1 we discuss the counterfactual re-training experiments that motivate our use of EK-FAC influence functions for estimating the effect of pretraining data on the accuracy of downstream
behaviour. We describe in more detail how we use influence functions at scale in Appendix A.2,
documenting how we estimate the Hessian, how we store many query gradients in memory (each
having the same memory complexity as the entire model), and how we sample from the pretraining
distribution. Then, in Appendix A.3, we show examples of the reasoning sets that we did not show
examples for in the main body of this manuscript. In Appendix A.4 we discuss how we create
keywords for each query in order to find the answer in the top documents, and in the sections directly
after that, Appendix A.5 and A.6, we give the prompts we used to allow Command R+ to search
for answers in the top 500 documents for each query, as well as characterise their relationship.
Finally, in Appendix A.7 we discuss limitations specific to influence functions, and in Appendix
A.8 and A.9 we provide all additional results referenced in the main body of text (qualitative and
quantitative respectively).
A.1 COUNTERFACTUAL RE-TRAINING EXPERIMENTS WITH INFLUENCE FUNCTIONS
We use EK-FAC influence functions to approximate the counterfactual question: which documents
from pretraining have a causal effect on the accuracy of completions of a trained model.
Table 4: Perplexity on Wikitext-2 after removing the k most positively influential documents (according to each method) and re-training.

| k → | 50 | 100 | 150 | 200 | 250 | 300 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 22.09 ± 0.02 | 22.12 ± 0.02 | 22.10 ± 0.02 | 22.20 ± 0.06 | 22.19 ± 0.05 | 22.15 ± 0.05 |
| TracIn | 22.16 ± 0.02⋆⋆ | 22.22 ± 0.02⋆⋆ | 22.25 ± 0.01⋆⋆ | 22.35 ± 0.03⋆⋆ | 22.42 ± 0.01⋆⋆ | 22.45 ± 0.02⋆⋆ |
| IF (ours) | 22.49 ± 0.02⋆⋆ | 22.66 ± 0.02⋆⋆ | 22.73 ± 0.02⋆⋆ | 22.88 ± 0.01⋆⋆ | 22.97 ± 0.02⋆⋆ | 23.05 ± 0.05⋆⋆ |

Table 5: Perplexity on Wikitext-2 after removing the k most negatively influential documents (according to each method) and re-training.

| k → | 50 | 100 | 150 | 200 | 250 | 300 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 27.40 ± 0.08 | 26.24 ± 0.10 | 25.62 ± 0.15 | 25.22 ± 0.10 | 25.04 ± 0.12 | 24.85 ± 0.10 |
| TracIn | 26.73 ± 0.04⋆⋆ | 25.48 ± 0.05⋆⋆ | 24.86 ± 0.02⋆⋆ | 24.36 ± 0.04⋆⋆ | 24.16 ± 0.05⋆⋆ | 23.94 ± 0.03⋆⋆ |
| IF (ours) | 25.96 ± 0.04⋆⋆ | 24.78 ± 0.05⋆⋆ | 23.95 ± 0.03⋆⋆ | 23.52 ± 0.03⋆⋆ | 23.46 ± 0.03⋆⋆ | 23.32 ± 0.04⋆⋆ |
In this section, we aim to motivate two aspects of this choice. First, influence functions are designed to estimate the effect on continuous, differentiable functions like the log-likelihood, and not on the accuracy, so we verify that the documents they surface also affect downstream accuracy. Secondly, we motivate the need for estimating the second-order information of the pretraining objective using EK-FAC, which is very computationally expensive. We present four different experiments in this section, which show that the influence of documents as determined by influence functions indeed estimates the effect on downstream task accuracy, as well as the benefits of estimating second-order information over simply using first-order gradient information.
The pipeline for each of these experiments is similar; we take a pretrained model, we fine-tune it on
some dataset, and evaluate it on 50 validation examples with a metric (perplexity or accuracy). We
then use the fine-tuned weights to calculate the influence of the documents in the dataset used for
fine-tuning on the set of 50 validation questions with two methods: EK-FAC influence functions and
TracIn (Pruthi et al., 2020). Subsequently, we use those two methods to remove the k most positively
influential documents from the fine-tuning dataset, as well as randomly selecting k documents as a
baseline, and fine-tune the original pretrained model five times on each new fine-tuning dataset
created (for different values for k). We then calculate the perplexity or accuracy on the validation
questions used to calculate the influence, and see how it changed. The more it changed, the more
the documents indeed influence the relevant metric (i.e. perplexity or accuracy). Note that for n
different values for k, this requires fine-tuning 3 × 5 × n models: five times for each of the three
methods of removing documents from the training set.
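A minimal sketch of this removal-and-retraining loop is given below; `finetune` and `evaluate` are hypothetical placeholders passed in for the actual training and evaluation code, and details such as training-data ordering are omitted:

```python
import random

def counterfactual_retraining(train_set, val_set, influence_scores, ks,
                              finetune, evaluate, n_seeds=5, seed=0):
    """For each k, remove the k most positively influential examples (or k random
    ones as a baseline), re-finetune from the pretrained checkpoint n_seeds times,
    and record the validation metric for each run."""
    rng = random.Random(seed)
    ranked = sorted(range(len(train_set)),
                    key=lambda i: influence_scores[i], reverse=True)
    results = {}
    for k in ks:
        removals = {"influence": set(ranked[:k]),
                    "random": set(rng.sample(range(len(train_set)), k))}
        for method, removed in removals.items():
            kept = [ex for i, ex in enumerate(train_set) if i not in removed]
            results[(method, k)] = [evaluate(finetune(kept, seed=s), val_set)
                                    for s in range(n_seeds)]
    return results
```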
We start by motivating the use of EK-FAC influence functions over simple similarity information
between document and query gradients. In our setup, where we only have access to the final checkpoint of pretraining, a dot-product between the query and document gradient effectively boils down
to a method for estimating influence of documents on queries called TracIn (Pruthi et al., 2020).
With access to multiple checkpoints, TracIn uses gradient information from all of them, accounting for the learning rate used at that point in training. However, we only use the final checkpoint
and hence taking into account the learning rate only changes scores by a constant. We take GPT-2 small (124M) from HuggingFace[5], and fine-tune it for three epochs with next-word prediction on
Wikitext-2 (Merity et al., 2016). We use the Adam optimizer (Kingma & Ba, 2015) with default parameters (β1 = 0.9, β2 = 0.999, ε = 1e-8, additive weight decay 0.01). The results can be found in Figure
4 and Table 4, showing that removing documents using EK-FAC influence functions has a significantly larger effect on downstream perplexity for all values of k. We do the exact same experiment
but instead remove the most negatively influential documents, and see that instead the perplexity
decreases significantly more for EK-FAC influence functions (Figure 4 and Table 5).
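For clarity, the single-checkpoint form of TracIn used here reduces to a dot product between flattened gradients; a minimal sketch, assuming per-example gradients are available as vectors, is:

```python
import numpy as np

def tracin_scores(query_grad, document_grads):
    """Single-checkpoint TracIn: the score of each training document is the dot
    product of its flattened gradient with the query gradient; with only one
    checkpoint, the learning rate rescales all scores by the same constant."""
    return np.array([np.dot(query_grad, g) for g in document_grads])
```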
Next, we turn to motivating the use of EK-FAC influence functions in estimating the effect of documents on downstream accuracy of model generations. To this end, we look at two different datasets:
DROP (Dua et al., 2019) and RACE (Lai et al., 2017). DROP is a reading comprehension dataset requiring different skills like subtraction, addition, coreference resolution, counting, and other skills.
The model needs to generate an answer that often consists of one or a few words. We allow the
fine-tuned models to generate answers to the questions freely, and evaluate based on exact match. In
this experiment, we use a 7B model. We randomly select a subset of 8000 examples for fine-tuning,
[5] https://huggingface.co/
Figure 4: (a) Counterfactual retraining experiments on Wikitext-2. We finetuned GPT-2 (124M) on Wikitext-2 and used three different methods to remove training examples from the training set: randomly, TracIn, and Influence Functions (IF). For each number of samples removed we finetune the base model five times with different training data ordering; the variance over these runs is represented by the error bars. Each point on the plot is the average perplexity achieved by the five models after fine-tuning on the modified dataset. We find that influence functions can find examples that impact the perplexity significantly more than baselines. (b) The same experiment as in (a), but removing the most negatively influential examples instead (i.e. retaining the most positively influential ones).
Table 6: Counterfactual re-training accuracies on DROP (free generation of answers). We use three different methods (random, TracIn, influence functions) to remove k datapoints, and re-train a model on the resulting dataset. Each number is the mean over five re-training runs with different data ordering. ⋆ indicates significantly lower than random with a p-value below 0.1 and ⋆⋆ with a p-value below 0.05. The underlined means are the lowest.

| k → | 500 | 1000 | 1500 | 2000 |
| --- | --- | --- | --- | --- |
| Random | 0.61 ± 0.05 | 0.60 ± 0.03 | 0.56 ± 0.05 | 0.57 ± 0.06 |
| TracIn | 0.55 ± 0.03⋆ | 0.49 ± 0.02⋆⋆ | 0.44 ± 0.04⋆⋆ | 0.43 ± 0.06⋆⋆ |
| IF (ours) | 0.51 ± 0.03⋆⋆ | 0.50 ± 0.04⋆⋆ | 0.40 ± 0.05⋆⋆ | 0.38 ± 0.05⋆⋆ |
and use the procedure described above to perform counterfactual experiments. We use the Adam optimizer again, with the same hyperparameters as for the above experiment (β1 = 0.9, β2 = 0.999, ε = 1e-8, additive weight decay 0.01), but only train for one epoch. The results can be found in the left panel of Figure 5 as well as in Table 6. We find that EK-FAC influence functions are successful in
selecting data points that impact downstream accuracy, much more so than randomly removing the
same amount of training data. For most k (all but k = 1000), EK-FAC influence functions also have
a significantly stronger effect on accuracy than TracIn, but the difference is less large. We apply the
exact same procedure to the RACE dataset, except now we keep 10k examples (empirically found
to lead to the least overfitting when fine-tuning). Further, RACE is a multiple-choice dataset, so we
allow the model to generate a single token indicating the choice, and calculate the accuracy. The
results can be seen in Figure 5 and Table 7. Again, the finding is similar; EK-FAC influence functions surface documents that have a stronger effect on accuracy than TracIn for all but one value
of k, and for all values of k than randomly removing documents. There is a large variance in the
results for all methods though, which we attribute to the fact that the model sometimes seems to
overfit to the fine-tuning data. Further, the reason why the difference between TracIn and EK-FAC
influence functions is much larger in the perplexity experiments than in the accuracy experiments
could be attributed to the fact that we only fine-tune for one epoch in the accuracy experiments (as
more cause overfitting). EK-FAC influence functions differ from TracIn in that they estimate second
order information, which becomes more important with more training steps. An interesting avenue
for future work is to do counterfactual re-training experiments like these on a subset of pretraining
data for a 7B model, but this is incredibly computationally expensive.
(a) Counterfactual retraining experiments on reading comprehension questions. We finetuned Cohere Command 2 (7B) on a subset of the DROP training set (8k examples) and used three different methods to remove training examples from the training set: randomly, TracIn, and Influence Functions (IF). For each number of samples removed we finetune the base model five times with different training data ordering; the variance over these runs is represented by the error bars. Each point in the plot is the average accuracy achieved by the five models after fine-tuning on the modified dataset. We find that influence functions can find examples that impact the accuracy significantly more than baselines, although only slightly more than TracIn.

(b) Counterfactual retraining experiments on multiple-choice reasoning data. We finetuned Cohere Command 2 (7B) on a subset of the RACE training set (10k examples) and used three different methods to remove training examples from the training set: randomly, TracIn, and Influence Functions (IF). For each number of samples removed we finetune the base model five times with different training data ordering; the variance over these runs is represented by the error bars. Each point in the plot is the average accuracy achieved by the five models after fine-tuning on the modified dataset. We find that influence functions can find examples that impact the accuracy significantly more than baselines, although there is some variance in the results.

Figure 5: Side-by-side comparison of counterfactual retraining experiments on the reading comprehension benchmark DROP (a) and the multiple-choice reasoning dataset RACE (b).
Table 7: Counterfactual re-training accuracies on RACE (multiple-choice). We use three different
methods (random, TracIn, influence functions) to remove k datapoints, and re-train a model on the
resulting dataset. Each number is the mean over five re-training runs with different data ordering.
_⋆_ indicates significantly lower than random with a p-value below 0.1 and ⋆⋆ with a p-value below
0.05. The underlined means are the lowest.
| k → | 1000 | 1500 | 2000 | 2500 |
| --- | --- | --- | --- | --- |
| Random | 0.85 ± 0.04 | 0.83 ± 0.03 | 0.82 ± 0.04 | 0.81 ± 0.04 |
| TracIn | 0.84 ± 0.01 | 0.78 ± 0.03⋆⋆ | 0.80 ± 0.03 | 0.79 ± 0.04 |
| IF (ours) | 0.80 ± 0.04⋆ | 0.76 ± 0.05⋆⋆ | 0.74 ± 0.04⋆⋆ | 0.74 ± 0.05⋆ |
A.2 EK-FAC INFLUENCE FUNCTIONS
The code we use for EK-FAC influence functions at scale is a part of a larger internal infrastructure,
and hence cannot be released publicly. However, we base our code on the public GitHub repository
[https://github.com/pomonam/kronfluence. The way we implement estimation of the](https://github.com/pomonam/kronfluence)
Hessian is the same as in that codebase, except for a few changes to make it tractable, which we
discuss in more detail below. Further, we compare the results produced by our implementation with
the results using the public implementation. We do this by fine-tuning GPT-2 (124M) on Wikitext-2
using internal infrastructure, and calculating influence scores with both code bases. We find that
the results correlate very strongly (with a Pearson’s R of more than 0.99). Below we provide more
details of the design choices and hyperparameters used in our implementation.
**Query-batching and approximation.** As mentioned in the main text, we approximate query gradients using approximate SVD (Halko et al., 2011). We use the default parameters for this algorithm,
which can be found in the Dask documentation (Dask Development Team, 2016).
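As an illustration of this kind of low-rank compression of query gradients (the paper relies on Dask's implementation with its default parameters; the sketch below uses scikit-learn's `randomized_svd` and an arbitrary rank purely for illustration):

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

def compress_query_gradient(grad_matrix, rank=64):
    """Low-rank approximation of a (reshaped) query gradient, so that many query
    gradients can be kept in memory at once; the rank here is illustrative."""
    U, S, Vt = randomized_svd(grad_matrix, n_components=rank)
    return U * S, Vt   # store two thin factors; (U * S) @ Vt approximates grad_matrix
```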
**Sampling from the Pretraining Data.** It is intractable to calculate influence for the entire pretraining data, so we sample a set of 5 million documents. To this end, we loop over the training data as
seen by the models in order, and randomly sample 6 examples from each batch. This ensures that
the pretraining sample we use is both similarly distributed in terms of what kind of data the model
sees, as well as when it has seen it during pretraining.
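A minimal sketch of this sampling scheme, assuming the pretraining data is available as an ordered sequence of batches:

```python
import random

def sample_pretraining_documents(batches_in_training_order, per_batch=6, seed=0):
    """Walk over the pretraining batches in the order the model saw them and
    randomly sample a fixed number of documents from each batch, so the sample
    matches both the content and the timing of the pretraining data."""
    rng = random.Random(seed)
    sample = []
    for batch in batches_in_training_order:
        sample.extend(rng.sample(batch, per_batch))
    return sample
```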
**Estimating EK-FAC.** To estimate the EK-FAC matrices, we sample 100000 documents from pretraining in the same manner as described above. We use the same samples to estimate the EK-FAC
for the 7B as for the 35B. For both models, we use a damping factor of 0.1 (see Grosse et al. (2023)
for details on what the damping factor is). Further, part of estimating the EK-FAC is an eigendecomposition on the EK-FAC matrices. We use the same approximation as empirically motivated
in Grosse et al. (2023), namely a block-diagonal approximation. For the 7B, we use 2 blocks, and
for the 35B, we use 4. The block-diagonal approximation is not part of the public codebase, but
simply amounts to dividing the matrices in n blocks (where n is 2 and 4 in our case), zero-ing out
the remaining entries, and taking the eigendecomposition of each block individually. After, these
blocks are patched back together again into the original size matrix, which will be further processed
as in the public codebase.
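The block-diagonal approximation amounts to eigendecomposing each diagonal block separately and patching the results back together; a minimal sketch (assuming a symmetric matrix whose size is divisible by the number of blocks) is:

```python
import numpy as np

def blockwise_eigendecomposition(matrix, n_blocks):
    """Block-diagonal approximation: split a symmetric matrix into n_blocks
    diagonal blocks, eigendecompose each block separately, and patch the results
    back into full-size arrays of the original shape."""
    size = matrix.shape[0]
    block = size // n_blocks              # assumes size is divisible by n_blocks
    eigvals = np.zeros(size)
    eigvecs = np.zeros_like(matrix, dtype=float)
    for i in range(n_blocks):
        s = slice(i * block, (i + 1) * block)
        vals, vecs = np.linalg.eigh(matrix[s, s])
        eigvals[s] = vals
        eigvecs[s, s] = vecs
    return eigvals, eigvecs
```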
Table 8: Example from the reasoning set that involves calculating the slope of a line going through
two points. We have 20 examples like these on different points. The completion is by the 7B model.
**Reasoning Query Set (slopes)**
_Prompt_
What is the slope of the line passing through the points (93, 28) and (74, 47)?
Think step-by-step.
_Completion_
To find the slope of the line passing through the points (93, 28) and (74, 47), we can use the following steps:
1. Identify the coordinates of the two points:
- Point A: (93, 28)
- Point B: (74, 47)
2. Calculate the difference in x-coordinates between the two points:
- Difference in x-coordinates: x2 - x1 = 74 - 93 = -19
3. Calculate the difference in y-coordinates between the two points:
- Difference in y-coordinates: y2 - y1 = 47 - 28 = 19
4. Determine the ratio of the difference in y-coordinates to the difference in x-coordinates:
- Ratio: 19 / -19 = 1 / -1
5. Multiply the ratio by -1 to reverse the direction of the line:
- Slope: -1 / 1 = -1
Therefore, the slope of the line passing through the points (93, 28) and (74, 47) is -1.
Table 9: Example from the reasoning set that requires solving for x in a linear equation. We have
twenty other examples like these on different numbers, only for the 35B model. The completion is
by the 35B model.
**Reasoning Query Set (linear)**
_Prompt_
Solve for x: 5x + 21 = 91
Think step-by-step.
_Completion_
Here are the steps to solve for x:
1. Subtract 21 from both sides of the equation:
5x + 21 - 21 = 91 - 21
5x = 70
2. Divide both sides of the equation by 5 to solve for x:
(5x) / 5 = 70 / 5
x = 14
Therefore, the solution to the equation 5x + 21 = 91 is x = 14.
A.3 QUERY SETS
In this section, we show an example of the other two types of reasoning present in the reasoning
query sets, in Tables 8 and 9.
A.4 QUERY KEYWORDS FOR FINDING THE ANSWER
We release all keywords used to search for the answers for both the 7B and 35B in the supplementary
material to this manuscript, and describe here the procedure for generating keywords, as well as
giving some examples.
For the factual questions, we generated keywords manually for each query, and these can be found
in the supplemental material. For example, for the question “What is the world’s smallest mammal
_by body length?” (answer: bumblebee bat), we have the following keywords: bumblebee bat;_
bumblebee; bumble; bee; bat; smallest mammal; body length; mammal; smallest; small. This
results in many false positives, e.g. if only the word ‘small’ occurs, all of which we check manually for
the answer.
Based on the type of reasoning question, we programmatically create keywords for each question. For
example, for the question in Table 9, the keywords are:
[’14’, ’x = 14’, ’5x + 21’, ’91’, ’5x + 21 = 91’, ’21’, ’5’,
’91 - 21’, ’91 - 21 = 70’, ’(91 - 21) / 5’, ’70 / 5’,
’70 / 5 = 14’, ’70’, ’x=14’, ’5x+21’, ’5x+21=91’, ’91-21’,
’91-21=70’, ’(91-21)/5’, ’70/5’, ’70/5=14’,
’(91 - 21) divided by 5’, ’(91-21) divided by 5’,
’(91 minus 21) divided by 5’, ’(91 min 21) divided by 5’,
’70 divided by 5’, ’70 divided by 5 = 14’,
’70 divided by 5 is 14’, ’70 / 5 is 14’, ’70/5 is 14’,
’91 - 21 is 70’, ’91-21 is 70’, ’91 minus 21 is 70’,
’91 min 21 is 70’, ’70 divided by 5 equals 14’,
’70 / 5 equals 14’, ’70/5 equals 14’, ’91 - 21 equals 70’,
’91-21 equals 70’, ’91 minus 21 equals 70’, ’91 min 21 equals 70’,
’5x plus 21’, ’5x plus 21 = 91’, ’5x plus 21 is 91’, ’5x + 21 is 91’,
’91 minus 21’, ’91 min 21’, ’91 minus 21 = 70’, ’91 min 21 = 70’,
’(91 minus 21) / 5’, ’(91 min 21) / 5’]
Note that, because the individual numbers ‘14’, ‘5’, ‘91’, and ‘70’ are part of the keywords, each
document that contains one of these numbers becomes a hit, and we go over all hits manually.
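A minimal sketch of how such keyword variants could be generated programmatically for a linear-equation query of the form ax + b = c (the function below is illustrative; the exact keyword lists used are provided in the supplemental material):

```python
def linear_equation_keywords(a, b, c):
    """Surface-form variants of the solution steps for a*x + b = c."""
    diff = c - b
    x = diff // a
    keywords = {str(x), str(a), str(b), str(c), str(diff),
                f"{a}x + {b}", f"{a}x + {b} = {c}", f"x = {x}", f"x={x}"}
    for minus in ("-", " - ", " minus ", " min "):
        keywords.add(f"{c}{minus}{b}")
        keywords.add(f"{c}{minus}{b} = {diff}")
    for div in ("/", " / ", " divided by "):
        keywords.add(f"{diff}{div}{a}")
        keywords.add(f"{diff}{div}{a} = {x}")
    return sorted(keywords)
```

For the query in Table 9, `linear_equation_keywords(5, 21, 91)` yields variants such as ‘91 - 21 = 70’ and ‘70 / 5 = 14’, similar to the list above.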
A.5 PROMPTS GIVEN TO COMMAND R+ FOR FINDING THE ANSWER
We use multiple prompts for each different type of reasoning question to allow Command R+ to
find the answer in the top 500 influential documents; prompts to find the answer to the intermediate
reasoning steps, and a prompt for finding the answer to the full question. We provide an example of
each below.
Preamble:
_You are a brilliant AI assistant that is excellent at arithmetic designed to help users_
_with data analysis. You will be given an arithmetic query and a document, and_
_your task is to determine whether the answer to the question is in the document._
Prompt for the final answer to a two-step arithmetic question
Question: 6 * 15
Answer: 90
What also counts as an answer:
- The calculation is written out in words, or part of a story.
- The order of operations are changed. E.g. 15 * 6 = 90.
- Different symbol used for the multiplier sign. E.g. x or times.
- The calculation is part of another larger calculation. E.g. (6 * 15) * 9 = 90 * 9 or (6 * 15)/12 = 90/12.
- Different formatting. E.g. (6) * (15) = (90).
- The calculation is a part of an algebraic formulation. E.g. 6X * 15X = 90X.
What does not count as an answer:
- Other numbers are being multiplied. E.g. 7 * 15.
- Numbers are taken to the other side of the equals sign. E.g. 6 = 90/15.
Document:
_<document >_
Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in
which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different
ways in which such an answer might be given.
Prompt for a partial answer to a slope question
Question: -43 / 1
Answer: -43 / 1 = -43
What also counts as an answer:
- The calculation is written out in words, or part of a story.
- The signs on the LHS are flipped. E.g. 43 / -1 = -43.
- Different symbol used for the division sign. E.g. divided by.
- The calculation is part of another larger calculation. E.g. (-43 / 1) * 9 = -43 * 9 or (-43 / 1)/12 = -43/12.
- Different formatting. E.g. (-43) / (1) = (-43).
- The calculation is a part of an algebraic formulation. E.g. -43X / 1 = -43X.
What does not count as an answer:
- Other numbers are being divided. E.g. -42 / 1.
- Numbers are taken to the other side of the equals sign. E.g. -43 = -43 * 1.
Document:
_<document >_
Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in
which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different
ways in which such an answer might be given.
Prompt for a partial answer to a linear question
Question: 92 - 22
Answer: 70
What also counts as an answer:
- The calculation is written out in words, or part of a story.
- The calculation is written in terms of a difference or change. E.g. the difference (or change) between 92 and 22 is 70.
- The order of operations are changed. E.g. -22 + 92 = 70.
- Different representation used for the minus sign. E.g. ‘subtracted from’.
- The calculation is part of another larger calculation. E.g. (92 - 22) * 9 = 70 * 9 or (92 - 22)/12 = 70/12.
- Different formatting. E.g. (92) - (22) = (70).
- The calculation is a part of an algebraic formulation. E.g. 92X - 22X = 70X.
What does not count as an answer:
- Other numbers are being subtracted. E.g. 93 - 22.
- Numbers are taken to the other side of the equals sign. E.g. 92 = 70 + 22.
Document:
_<document >_
Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in
which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different
ways in which such an answer might be given.
Prompt for the full answer to a linear question
Question: 1x + 22 = 92
Answer: 70
What also counts as an answer:
- The calculation is written out in words, or part of a story.
- The calculation is written in terms of a ratio. E.g. the ratio between 70 and 1 is 70.
- Different representation used for the plus sign or the equals sign. E.g. ’added to’ and ’equals’.
- A different variable than X is used. E.g. ’t’: 1t + 22 = 92’.
- The calculation is part of another larger calculation. E.g. (1x + 22 = 92) * 9 = 70 * 9 or (1x + 22 = 92)/12 = 70/12.
- The solution is written out in steps below each other. E.g.:
1x + 22 = 92
1x = 70
x = 70.
- The calculation is a part of an algebraic formulation. E.g.:
5 * (1x + 22) = 5 * 92
5 * x = 5 * 70.
What does not count as an answer:
- Other numbers are being used. E.g. 2x + 22 = 92.
Document:
_<document >_
Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate where the answer is by copying the part of the document in
which the answer occurs, ending with an explanation of why that passage contains the answer. Think step-by-step and carefully consider all the different
ways in which such an answer might be given.
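A minimal sketch of how such a prompt can be assembled for each (question, document) pair and how the yes/no verdict can be read off the reply. Here `query_model` is a hypothetical stand-in for whatever Command R+ client is used, and the template is abbreviated from the full prompts shown above:

```python
PREAMBLE = (
    "You are a brilliant AI assistant that is excellent at arithmetic designed to help users "
    "with data analysis. You will be given an arithmetic query and a document, and your task "
    "is to determine whether the answer to the question is in the document."
)

TEMPLATE = """Question: {question}
Answer: {answer}
Document:
{document}
Is the answer given in the document? Answer with yes or no. If you answer with yes, indicate
where the answer is by copying the part of the document in which the answer occurs."""


def document_contains_answer(question, answer, document, query_model):
    """Ask the model whether `document` contains the answer and parse the verdict."""
    prompt = PREAMBLE + "\n\n" + TEMPLATE.format(question=question, answer=answer, document=document)
    reply = query_model(prompt)  # hypothetical LLM call returning a string
    verdict = reply.strip().lower().startswith("yes")
    return verdict, reply


# Example with a dummy model that always answers "No.".
verdict, _ = document_contains_answer("6 * 15", "90", "Some unrelated text.", lambda p: "No.")
print(verdict)  # False
```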
A.6 PROMPTS GIVEN TO COMMAND R+ FOR CHARACTERISING THE RELATIONSHIP
BETWEEN THE QUERY AND THE DOCUMENT
We pair each query with its top 500 most influential documents, and prompt Command
R+ to characterise the relationship. For all types of reasoning, we use the same preamble:
_You are a brilliant AI assistant that is excellent at arithmetic designed to help_
_users with data analysis. You will be given an arithmetic query and a document,_
_and your task is to characterise the document by choosing keywords from a given_
_set that best describe how the document relates to the question._
For each type of reasoning, we craft a prompt that allows Command R+ to choose multiple keywords
for each query-document pair in the top 500 documents. We provide each below.
Prompt for arithmetic questions
Start of Query:
_<query>_
End of Query
Start of Document
_<document>_
End of Document
How is the document related to the query?
Choose from the following keywords:
Similar arithmetic operations on similar numbers (e.g. the numbers are similar in magnitude or the numbers are the same)
Similar arithmetic operations (on other types of numbers, e.g. much larger or smaller)
Reasoning traces (multiple reasoning steps are explicitly given in the document explaining how one gets to an answer)
Other types of maths
Code that contains arithmetic
Code that concerns other types of math
Code that concerns no math/arithmetic
Text about math/arithmetic (no other relation to the query than that the text is about math, text does not perform math/arithmetic)
Superficial similarities (there is no real relation, but loosely related topics occur, like the text contains words related to other parts of math, like algebra)
Similar formatting (question/answer pair about other topics than math)
Similar formatting (other)
Other (pick own keyword)
Explain your answer for each keyword by quoting from the query and document and describing why they are similar. Keep in mind that the document
might be in another language than English. If you pick any of the code keywords, add the programming languages in brackets (e.g. ‘Code that contains
arithmetic (Python, LaTeX)’). If the relation between the query and the document is not described by any of the given keywords, choose ‘other’ and
pick your own keyword that describes the document. Otherwise, if the query is not related to the document, state ‘no relation’ and describe why. Give
your answer in the form of a semicolon-separated list of keywords, and add an explanation below separated by newlines (e.g. ‘keyword 1; keyword 2; keyword 3 (Python)
[explanation]’).
Prompt for slope questions
Start of Query:
_<query>_
End of Query
Start of Document
_<document>_
End of Document
How is the document related to the query?
Choose from the following keywords:
Similar arithmetic operations on similar numbers (e.g. the numbers are similar in magnitude or the numbers are the same)
Similar arithmetic operations (on other types of numbers, e.g. much larger or smaller)
Reasoning traces (multiple reasoning steps are explicitly given in the document explaining how one gets to an answer)
Other types of maths
Code that contains arithmetic
Code that calculates the slope between two numbers
Math that calculates the slope between two numbers
Code that calculates the slope of an equation
Math that calculates the slope of an equation
Code that concerns other types of math
Code that concerns no math/arithmetic
Text about math/arithmetic (no other relation to the query than that the text is about math, text does not perform math/arithmetic)
Superficial similarities (there is no real relation, but loosely related topics occur, like the text contains words related to other parts of math, like algebra)
Similar formatting (question/answer pair about other topics than math)
Similar formatting (other)
Other (pick own keyword)
Explain your answer for each keyword by quoting from the query and document and describing why they are similar. Keep in mind that the document
might be in another language than English. If you pick any of the code keywords, add the programming languages in brackets (e.g. ‘Code that contains
arithmetic (Python, LaTeX)’). If the relation between the query and the document is not described by any of the given keywords, choose ‘other’ and pick
your own keyword that describes the document. Otherwise, if the query is not related to the document, state ‘no relation’ and describe why. Give your
answer in the form of a semicolon-separated list of keywords, and add an explanation below separated by newlines (e.g. ‘keyword 1; keyword 2; keyword
3 (Python) [explanation]’).
Prompt for linear questions
Start of Query:
_<query>_
End of Query
Start of Document
_<document>_
End of Document
How is the document related to the query?
Choose from the following keywords:
Code that solves a linear equation for a variable (of the form ax + b = c or ax - b = c)
Code that solves a linear equation with multiple variables for one or both variables (e.g. ax + by = c)
Code that solves a linear equation of another form than ax + b = c or ax - b = c
Math that solves a linear equation for a variable (of the form ax + b = c or ax - b = c)
Math that solves an equation with multiple variables for one or both variables (e.g. ax + by = c)
Math that contains linear equations of another form than ax + b = c or ax - b = c
Math that contains linear equations but they are not solved (of the form ax + b = c or ax - b = c)
Math that contains linear equations but they are not solved (of another form than ax + b = c or ax - b = c)
Similar algebraic operations on similar numbers (e.g. the numbers are similar in magnitude or the numbers are the same)
Similar algebraic operations (on other types of numbers, e.g. much larger or smaller)
Other forms of algebra
Arithmetic operations
Other types of maths
Code that contains arithmetic
Code that concerns other types of math
Code that concerns no math/algebra
Text about math/algebra (no other relation to the query than that the text is about math, text does not perform math/algebra)
Reasoning traces (multiple reasoning steps are explicitly given in the document explaining how one gets to an answer)
Superficial similarities (there is no real relation, but loosely related topics occur, like the text contains words related to other parts of math, like arithmetic)
Similar formatting (question/answer pair about other topics than math)
Similar formatting (other)
Other (pick own keyword)
Explain your answer for each keyword by quoting from the query and document and describing why they are similar. Keep in mind that the document
might be in another language than English. If you pick any of the code keywords, add the programming languages in brackets (e.g. ‘Code that contains
arithmetic (Python, LaTeX)’) If the relation between the query and the document is not described by any of the given keywords, choose ‘other’ and pick
your own keyword that describes the document. Otherwise, if the query is not related to the document, state ‘no relation’ and describe why. Give your
answer in the form of a semicolon-separated list of keywords, and add an explanation below separated by newlines (e.g. ‘keyword 1; keyword 2; keyword
3 (Python) [explanation]’). If you pick a keyword about solving a linear equation, add the linear equation in the explanation.
A.7 FURTHER DISCUSSION OF LIMITATIONS
More broadly, our work suffers from the same limitations as any work that uses EK-FAC influence functions: we make many approximations to estimate the counterfactual and only take into account
MLP parameters. The latter decision is because EK-FAC influence functions are not properly defined for attention parameters (Grosse et al., 2023). We list the assumptions and approximations
here:
- First-order Taylor approximation to the PBRF
- Assume different layers of MLPs are independent, making the Gauss-Newton Hessian
block-diagonal
- Assume activations are independent of pre-activation pseudo-gradients
- Estimate the approximation to the Fisher Information Matrix (equivalently, the Gauss-Newton Hessian) by sampling from the empirical data distribution / model output distribution, because it is an expectation over that distribution
- Block-diagonal approximation of the eigenvector matrices within each layer
- Low-rank approximation of query gradients
All these approximations are verified and justified in Grosse et al. (2023), and the reader is referred
there for a more in-depth analysis.
Our empirical results, which show that influence functions nonetheless surface documents that are
causally related to accuracy, should alleviate some of these concerns, but not all.
A.8 ADDITIONAL RESULTS FOR THE QUALITATIVE ANALYSIS
**_Additional finding: The answer to the factual question sometimes shows up in non-English languages._**
Interestingly, we observe some crosslingual transfer for the factual questions. For example, for the
question about the tallest mountain in the world (Table 2), the answer shows up in Portuguese:
_A americana Samantha Larson, de 19 anos, se tornou nesta sexta-feira a mulher estrangeira mais jovem a conquistar o Monte Everest, segundo nota oficial divulgada pelo Ministério de Turismo do Nepal. A montanha, de 8.848m, é a mais alta do mundo e se encontra na fronteira entre o Nepal e Tibet._
Which translates to:
_American Samantha Larson, 19, became the youngest foreign woman to conquer Mount_
_Everest on Friday, according to an official statement released by Nepal’s Ministry of_
_Tourism. The 8,848m (29,848ft) mountain is the highest in the world and is located on_
_the border between Nepal and Tibet._
We observe more cases of crosslingual transfer for other questions; for example, for the question “What is the
_capital of Belgium?” the answer shows up in French and Spanish. We show the French document_
here:
_Le Premier ministre belge Yves Leterme a assuré ce mercredi qu'il resterait en place et mènerait à bien la réforme institutionnelle entre les régions, malgré les profondes divisions entre Flamands et Wallons qui menacent l'unité du pays._
_..._
_Les francophones redoutent pour leur part une réduction des budgets accordés à la Wallonie, région la plus pauvre du pays, et à la capitale bilingue, Bruxelles. Ils estiment également que les régions se sont vu transférer depuis les années 1980 assez de compétences fédérales, et soupçonnent les néerlandophones de chercher à faire sécession de la Belgique afin de pouvoir déclarer l'indépendance de la Flandre._
Which translates to:
_Belgian Prime Minister Yves Leterme assured on Wednesday that he would stay in office_
_and carry out the institutional reform between the regions, despite the deep divisions be-_
_tween Flemish and Walloons that threaten the unity of the country._
_..._
_The French speakers, for their part, fear a reduction in the budgets granted to Wallonia,_
_the poorest region of the country, and to the bilingual capital, Brussels. They also believe_
_that the regions have been transferred enough federal powers since the 1980s, and suspect_
_that the Dutch-speaking countries are seeking to secede from Belgium in order to be able_
_to declare the independence of Flanders._
Note that both these quotes are snippets from otherwise larger documents. We did not translate all
documents and hence only found cases of crosslingual transfer if there happened to be keyword
overlap.
**Finding 2: why documents are influential for reasoning. We prompt Command R+ to characterise**
the relationship between the top 500 documents and each query (see prompts in Appendix A.6). We
add ‘reasoning traces’ as a potential keyword in the prompt, but after inspecting the results we find
the model uses that keyword for almost any document, and we remove those results. We report the
raw counts of each keyword occurring in the tables below.
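The counts in the tables below can be obtained by splitting each model reply on semicolons and tallying the keywords. Below is a minimal sketch, assuming the replies are stored as plain strings with the keyword list on the first line, as requested in the prompts:

```python
from collections import Counter

def tally_keywords(replies, drop=("reasoning traces",)):
    """Count how often each keyword is assigned across all query-document replies."""
    counts = Counter()
    for reply in replies:
        keyword_line = reply.strip().splitlines()[0]           # keywords come before the explanation
        for raw in keyword_line.split(";"):
            keyword = raw.split("(")[0].strip().lower()        # drop '(Python, ...)' language notes
            if keyword and keyword not in drop:
                counts[keyword] += 1
    return counts

replies = [
    "Code that contains arithmetic (Python); Similar formatting\n[explanation ...]",
    "Other types of maths; Reasoning traces\n[explanation ...]",
]
print(tally_keywords(replies).most_common())
```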
| **Arithmetic (7B)** | **Count** |
|---|---|
| Other types of maths | 5765 |
| Similar arithmetic operations on other numbers (e.g. much larger/smaller) | 4691 |
| Code that contains arithmetic | 4038 |
| Text about math/arithmetic | 3202 |
| Code that concerns other types of math | 2554 |
| Similar arithmetic operations on similar numbers | 2246 |
| Similar formatting | 2223 |
| Superficial similarities | 1391 |
| Code that concerns no math/arithmetic | 277 |

Table 10: Arithmetic (7B) Data
| **Slopes (7B)** | **Count** |
|---|---|
| Other types of maths | 10787 |
| Similar arithmetic operations on similar numbers | 7312 |
| Code that contains arithmetic | 5035 |
| Similar formatting | 4675 |
| Text that explains in words how to calculate the slope of an equation | 3911 |
| Code that concerns other types of math | 3577 |
| Text about math/arithmetic | 3323 |
| Text that explains in words how to calculate the slope between two numbers | 2959 |
| Math that calculates the slope of an equation | 2921 |
| Math that calculates the slope between two numbers | 2490 |
| Superficial similarities | 2222 |
| Text that mentions the slope but does not explain how to calculate it | 1677 |
| Code that calculates the slope between two numbers | 1633 |
| Code that calculates the slope of an equation | 1110 |
| Code that concerns no math/arithmetic | 263 |
| Other | 15 |

Table 11: Slopes (7B) Data
| **Slopes (35B)** | **Count** |
|---|---|
| Other types of maths | 11104 |
| Similar arithmetic operations on similar numbers | 8340 |
| Code that contains arithmetic | 4617 |
| Similar formatting | 4141 |
| Text that explains in words how to calculate the slope of an equation | 3869 |
| Text about math/arithmetic | 3845 |
| Math that calculates the slope of an equation | 3745 |
| Math that calculates the slope between two numbers | 3533 |
| Code that concerns other types of math | 3192 |
| Text that explains in words how to calculate the slope between two numbers | 2747 |
| Superficial similarities | 2291 |
| Text that mentions the slope but does not explain how to calculate it | 1936 |
| Code that calculates the slope between two numbers | 1150 |
| Code that calculates the slope of an equation | 865 |
| Code that concerns no math/arithmetic | 121 |
| Other | 12 |
| Similar arithmetic operations on other numbers (e.g. much larger/smaller) | 1 |

Table 12: Slopes (35B) Data
| **Linear (35B)** | **Count** |
|---|---|
| Math that contains linear equations but they are not solved | 13434 |
| Similar algebraic operations on similar numbers | 10717 |
| Similar formatting | 5533 |
| Math that solves a linear equation for a variable | 2415 |
| Other forms of algebra | 2234 |
| Arithmetic operations | 2057 |
| Code that contains arithmetic | 1417 |
| Other types of maths | 1390 |
| Text about math/algebra | 1146 |
| Code that solves a linear equation of another form than ax + b = c or ax - b = c | 1109 |
| Superficial similarities | 1105 |
| Code that concerns other types of math | 949 |
| Code that concerns no math/algebra | 560 |
| Code that solves a linear equation for a variable | 475 |
| Math that solves an equation with multiple variables for one or both variables | 172 |
| Math that contains linear equations of another form than ax + b = c or ax - b = c | 156 |
| Code that solves a linear equation with multiple variables for one or both variables | 110 |
| Other | 1 |

Table 13: Linear (35B) Data
**Finding 3: the source datasets that represent the most influential documents.**
For each source dataset, we report the multiplier w.r.t. the training distribution. This means that if
the top k documents are randomly sampled from pretraining, the multipliers will be one, whereas
if they are above or below one, that source dataset is either over- or underrepresented in the most
influential documents. The full results are presented in Figure 6, and we discuss the most interesting
deviations from the pretraining distribution here. For the factual questions, the most overrepresented
source datasets for both the 7B and 35B are Math & Trivia (multipliers of 27 and 16 at k = 50,
respectively) and Wikipedia (multipliers of 5 and 6, respectively). For the reasoning questions, the
most overrepresented datasets are StackExchange and Math & Trivia (with multipliers of 50 and 24
for the 7B, and 62 and 21 for the 35B). Interestingly, for both the 7B and the 35B, code data is
important for the influential documents. Besides StackExchange, for the medium-influential portion
of the rankings (between k = 5000 and k = 50000), more code data becomes influential (with
multipliers around 2, compared to 0.5 for the factual questions at that same part of the ranking).
This is conventional wisdom among practitioners (most LLM designers use some percentage of
code data in pretraining now, e.g. Touvron et al. (2023)), and recent work has empirically found
code to be important for reasoning performance (Aryabumi et al., 2024). However, the question of
why code data is important for reasoning is still open. Below, in Appendix A.8.1, we further confirm
that code is important for reasoning.
Figure 6: We compare the number of documents from a certain source dataset that show up in the
top portions of the rankings to the number you would expect to show up if you randomly sample
from the pretraining distribution (indicated by ‘Training distribution’ in the figure). The top two
plots are for the 7B, and the bottom for the 35B. We find that data from Wikipedia and Math &
Trivia are important for the factual questions for both models; for the reasoning questions, Math &
Trivia, StackExchange, Code, and ArXiv data are important. In all cases, the multipliers tend to the
training distribution for higher k.
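A minimal sketch of the multiplier computation reported in Figure 6, assuming we know the source dataset of each document in the ranking and the overall pretraining mixture proportions (all names below are illustrative):

```python
from collections import Counter

def source_multipliers(ranked_sources, pretraining_fractions, k):
    """Multiplier of each source among the top-k documents relative to the pretraining mix.

    ranked_sources: list of source-dataset names, ordered from most to least influential.
    pretraining_fractions: dict mapping source name to its fraction of the pretraining data.
    """
    counts = Counter(ranked_sources[:k])
    return {
        source: (counts[source] / k) / fraction
        for source, fraction in pretraining_fractions.items()
        if fraction > 0
    }

# A multiplier of 1 means the source appears as often as random sampling would predict.
ranking = ["Wikipedia", "Math & Trivia", "Code", "Wikipedia", "Math & Trivia"] * 10
mix = {"Wikipedia": 0.05, "Math & Trivia": 0.01, "Code": 0.15}
print(source_multipliers(ranking, mix, k=50))
```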
A.8.1 CONTENT ANALYSIS OF RELEVANT DOCUMENTS
We provide further insights into the characteristics of influential documents on reasoning queries. To
do so, we compute capability categories of the n = 500 most frequently occurring documents among
the k = 5000 most (top) or least (bottom) influential documents for the reasoning queries (for the
7B model), and compare these to a randomly sampled set of 500 documents (we repeat the sampling
process three times and provide mean and standard deviation scores on the detected capabilities).
Results are shown in Figure 7. We can see that the “code” category represents the vast majority of
most and least influential documents, whereas for the random subsets the fraction of code-related
documents is relatively small. This provides further evidence that code-related documents strongly
influence model performance on reasoning tasks.
[Figure 7 bar chart: “[7B] Category Distribution Across Different Datasets”; legend: n=500 (random), k=5000 (bottom), k=5000 (top); x-axis: category, y-axis: number of documents (0-400).]
Figure 7: Comparison of capability categories identified for the most and least influential documents
for the reasoning queries, as well as for a random subset of sampled documents. We repeat the
random sampling three times and report mean scores with standard deviations indicated.
Figure 8: The total influence per nat of query completion information for different portions of the
_positive ranking over documents, left for the 7B model, right for the 35B. In this case, we only plot_
queries that are present in the query sets for both models. This means the prompt is the same, but
the completion may be different. The pattern is very similar to the pattern observed for the top of the
ranking.
A.9 ADDITIONAL RESULTS FOR THE QUANTITATIVE ANALYSIS
In the main paper, we find that the influence of documents at the same rank for factual questions is
much more volatile than for reasoning questions. We mention that one explanation for this might be
that the factual queries for the 35B model are much more niche, and therefore the relevant documents
much less frequent. To test this hypothesis, we plot the same results for only the overlapping
queries (those that are part of both query sets for the 7B and 35B) in Figure 8. We find that the
variance is still much larger for the 35B model than for the 7B model, indicating that the influence
of influential documents for the factual questions by the 35B can be much larger than for the 7B
model. Further, in Figure 9 we show that the results look similar for the negative portions of the
ranking (where we flip the influence scores from negative to positive).
Figure 9: The total influence per nat of query completion information for different portions of the
_negative ranking over documents, left for the 7B model, right for the 35B. In this case, the kth_
percentile contains the top k% of most negatively influential documents. The pattern is very similar
to the pattern observed for the top of the ranking.
| [] | 2024-10-04T00:00:00 | ICLR 2025 Submission | false | 0 | 0 | null | https://openreview.net/forum?id=1hQKHHUsMx | null | null |
What Makes In-context Learning Effective for Mathematical Reasoning: A Theoretical Analysis | Owing to the capability of in-context learning, large language models (LLMs) have shown impressive performance across diverse mathematical reasoning benchmarks. However, we find that few-shot demonstrations can sometimes bring negative performance and their effectiveness on LLMs' reasoning abilities remains unreliable. To this end, in this paper, we aim to theoretically analyze the impact of in-context demonstrations on LLMs' reasoning performance. We prove that the reasoning efficacy (measured by empirical prediction loss) can be bounded by a LLM-oriented semantic similarity and an inference stability of demonstrations, which is general for both one-shot and few-shot scenarios. Based on this finding, we propose a straightforward, generalizable, and low-complexity demonstration selection method named LMS3. It can adaptively facilitate to select the most pertinent samples for different LLMs and includes a novel demonstration rejection mechanism to automatically filter out samples that are unsuitable for few-shot learning. Through experiments on three representative benchmarks, two LLM backbones, and multiple few-shot settings, we verify that our LMS3 has superiority and achieves consistent improvements on all datasets, which existing methods have been unable to accomplish. | It is proved that the reasoning efficacy can be bounded by a LLM-oriented semantic similarity and an inference stability of demonstrations, which is general for both one-shot and few-shot scenarios and proposed LMS3, a straightforward, generalizable, and low-complexity demonstration selection method. | ## What Makes In-context Learning Effective for Mathematical Reasoning: A Theoretical Analysis
### Jiayu Liu[1,2], Zhenya Huang[1,2], Chaokun Wang[1,2], Xunpeng Huang[3], Chengxiang Zhai[4], Enhong Chen[1,2]
1University of Science and Technology of China
2State Key Laboratory of Cognitive Intelligence
3Hong Kong University of Science and Technology
4University of Illinois at Urbana-Champaign
**Abstract**
Owing to the capability of in-context learning, large language
models (LLMs) have shown impressive performance across
diverse mathematical reasoning benchmarks. However, we
find that few-shot demonstrations can sometimes bring negative performance and their effectiveness on LLMs’ reasoning
abilities remains unreliable. To this end, in this paper, we aim
to theoretically analyze the impact of in-context demonstrations on LLMs’ reasoning performance. We prove that the
reasoning efficacy (measured by empirical prediction loss)
can be bounded by an LLM-oriented semantic similarity and
an inference stability of demonstrations, which is general for
both one-shot and few-shot scenarios. Based on this finding, we propose a straightforward, generalizable, and low-complexity demonstration selection method named LMS3.
It can adaptively facilitate the selection of the most pertinent samples for different LLMs and includes a novel demonstration
rejection mechanism to automatically filter out samples that
are unsuitable for few-shot learning. Through experiments on
three representative benchmarks, two LLM backbones, and
multiple few-shot settings, we verify that our LMS3 has superiority and achieves consistent improvements on all datasets,
which existing methods have been unable to accomplish.
### 1 Introduction
Mathematical reasoning is a critical task and serves as
a milestone in assessing the progress of artificial intelligence (Zhang, Wang et al. 2020; Liu et al. 2023). Currently,
many large language models (LLMs) have exhibited strong
performance across various mathematical reasoning benchmarks (Hendrycks et al. 2021; Cobbe et al. 2021). A key
capability of these LLMs is in-context learning (ICL) (Dong
et al. 2022), which enables them to learn from a few examples to implement specific logical structures (Wei et al.
2022) or utilize code (Chen et al. 2023) to improve reasoning accuracy. Based on this ability, they can adeptly address
a wide variety of problems across different types and difficulty levels, ranging from elementary word problems to college-level algebra (Brown et al. 2020; Achiam et al. 2023).
However, it remains an unresolved issue whether in-context learning truly enhances LLMs’ mathematical reasoning abilities. To show this phenomenon, in Figure 1,
Figure 1: Problem-solving accuracy of the zero-shot and one-shot settings. The hatched areas indicate that, in the one-shot setting, the model answers incorrectly a proportion ∆ of
the problems that are answered correctly in the zero-shot setting.
we present the zero-shot and one-shot results of Llama3-8B (Meta 2024), ChatGPT, and GPT-4 (Achiam et al. 2023)
on two representative benchmarks MATH (Hendrycks et al.
2021) and GSM8K (Cobbe et al. 2021). Surprisingly, we
find that 1) When given an example, their problem-solving
accuracy does not consistently improve, and sometimes even
declines (e.g., ChatGPT on MATH dataset). 2) Further analysis reveals that in the one-shot setting, these LLMs even fail
in 1.83%-34.96% of problems (marked with white hatching)
that they have correctly solved in the zero-shot scenario.
This raises an important research question: Is in-context
_learning always effective for mathematical reasoning, and_
_under what conditions does it work?_
To address this issue, existing literature primarily analyzes the impact of demonstrations from an empirical perspective. For instance, researchers have revealed several important factors, including the similarity to test samples (Liu
et al. 2022), the diversity (Gao and Das 2024), complexity (An et al. 2023) and perplexity (Sorensen et al. 2022)
of demonstrations, as well as the Inference LLM used (Peng
et al. 2024; Ye et al. 2023). Regarding the theoretical foundations of ICL, existing work has tried to explain the introduction of demonstrations as a form of meta-gradient optimization (Dai et al. 2023), kernel regression (Han et al.
2023), and token reinforcement (Yan et al. 2024). However,
these studies 1) have not provided precise quantification of
the impact of demonstrations on LLMs’ reasoning performance, nor 2) have they offered theoretical conclusions on
when demonstrations are beneficial.
To this end, in this paper, we first theoretically analyze
the impact of a demonstration on the reasoning performance
in one-shot scenario. Our theory indicates that a sufficient
condition for one-shot to outperform zero-shot is that 1)
_the demonstration and test sample have similar semantics_
_encoded by the inference LLM and that 2) the inference_
_LLM exhibits sufficient stability in reasoning the answer of_
_the demonstration itself. The former goes beyond traditional_
methods that rely solely on the semantic similarity between
demonstrations and test samples, pointing out the critical
role of the inference LLM’s encoding capacity in its parameters, while also being generalizable to these methods. The
latter introduces the concept and measurement of inference
_stability of demonstrations_ for the first time. It should be emphasized that our theory is general and we further extend it
to the k-shot scenario.
Based on our theory, we propose a simple yet effective demonstration selection method, named LMS3, to balance the LLM-oriented Semantic Similarity and inference
**_Stability of demonstrations, allowing for the automatic se-_**
lection of approximately optimal samples tailored to different LLMs. Additionally, to ensure that the sufficient condition of our theories is essentially satisfied, we introduce
an innovative demonstration rejection mechanism that can
adaptively identify when few-shot learning should not be
used, which is the first attempt in the field. Our method has
strong theoretical advantages, generalization ability, and low
complexity. Experiments on three benchmarks demonstrate
its consistent improvements in both one-shot and few-shot
scenarios. The contributions of this paper are as follows:
- We theoretically quantify the effect of demonstrations on
ICL reasoning performance under one/few-shot settings.
We prove that it can be bounded by LLM-oriented _semantic similarity_ and _inference stability_ of demonstrations.
- We propose a novel demonstration selection method,
LMS3, which can generalize to various existing methods
and offers better scalability and complexity.
- We validate our method on three mathematical benchmarks using multiple LLMs as backbones, demonstrating improvements in problem-solving accuracy, generalization ability, and interpretability.
### 2 Related Work
**Mathematical Reasoning. Mathematical reasoning is a crit-**
ical benchmark for assessing the level of artificial intelligence (Zhang, Wang et al. 2020; Liu et al. 2023). Early
work in this area mainly focused on rule-based, template-based, and statistical machine learning methods for simple
math word problems (Feigenbaum, Feldman et al. 1963;
Fletcher 1985). With the development of large language
models (LLMs), current mathematical reasoning work primarily falls into two categories. The first category improves
the mathematical reasoning capabilities of general LLMs
through techniques such as prompt engineering. On one
hand, they endow LLMs with chain-like (Kojima et al.
2022), tree-like (Yao et al. 2024), or graph-like (Besta et al.
2024) reasoning processes, or require LLMs to generate
code (Chen et al. 2023; Gao et al. 2023) to address potential
numerical computation errors. On the other hand, they also
involve providing the model with certain examples in the
prompts through retrieval-augmented generation (Wei et al.
2022; Asai et al. 2024), allowing the model to solve problems based on similar approaches using its contextual learning abilities. The second category is to fine-tune a specific
mathematical LLM using mathematical problem-solving
data (Lewkowycz et al. 2022; Yue et al. 2024). This type
of work addresses both the diversity of mathematical problems (e.g., range from elementary to university-level difficulties (Hendrycks et al. 2021), cover various types (Trinh
et al. 2024), rephrase original corpus (Yu et al. 2024)) and
the problem-solving process itself (e.g., supervise the training with the reasoning steps, rather than relying solely on
the final answers (Lightman et al. 2023; Luo et al. 2023a)).
**In-context Learning. In-context Learning (ICL) focuses**
on making LLMs learn and reason based on existing examples (Dong et al. 2022). Its advantage lies in the adaptability and flexibility for different tasks and scenarios. However, the selection of examples remains a central challenge,
for which current research has developed supervised and unsupervised methods. This paper focuses on unsupervised methods, which can be grouped into three main
categories. The first and currently most prominent method
is called Similar-ICL (Liu et al. 2022; Luo et al. 2023b;
Zhang et al. 2023; Fu et al. 2022), which aims to find examples with closest semantic representations to the test sample. The semantic representation approaches include TFIDF, BM25 (Robertson, Zaragoza et al. 2009), T5 encoding (Raffel et al. 2020), BGE-M3 (Chen et al. 2024), OpenAI embedding, etc. The second line of methods calculate
the impact of each demonstration on the test sample (Peng
et al. 2024). Impact calculation approaches include influence
function (Van, Wu et al. 2024; Chang and Jia 2023), mutual information (Sorensen et al. 2022), perplexity (Gonen
et al. 2023), code-length (Wu et al. 2023), etc. The third category uses the feedback from LLMs to dynamically select
demonstrations (Nguyen and Wong 2023; Qin et al. 2023).
Regarding the underlying mechanisms of ICL, most existing research explored the impact of empirical factors such
as the number of examples, gold labels, diversity, and types
of LLMs from an experimental perspective (Pan et al. 2023;
Peng et al. 2024; Min et al. 2022). Some theoretical explorations explain ICL from perspectives including metagradient updates (Dai et al. 2023), kernel regression (Han
et al. 2023), and token reinforcement (Yan et al. 2024). In
comparison, to the best of our knowledge, we are the first to
theoretically quantify the impact of demonstrations on reasoning performance and identify when they are effective.
### 3 Theoretical Analysis
**Notations.** In the in-context learning (ICL) setup, we have a demonstration pool $\mathcal{D}$ and a test set $\mathcal{D}_{test}$, which contain $M$ and $N$ mathematical problems, respectively. The $k$-shot in-context learning is formulated as appending $k$ demonstrations $\{(X_1, y_1), (X_2, y_2), ..., (X_k, y_k)\} \subseteq \mathcal{D}$ to the test data $X_{test} \in \mathcal{D}_{test}$ in the prompt to reason the solution

$$\hat{y}_{test} \overset{def}{=} \mathrm{LLM}((X_1, y_1), (X_2, y_2), ..., (X_k, y_k), X_{test}), \quad (1)$$
where $X_i, X_{test}$ represent the problem context and $y_i$ represents the labeled solution. The prediction loss on $X_{test}$ is denoted as $L(X_{test}, y_{test})$. In the following, we omit the symbol $y$ and use $X$ to denote each demonstration for brevity.

To evaluate the influence of a demonstration $X$ on inferring the answer of $X_{test}$, we use $h, h_{test} \in \mathbb{R}^d$ to denote the representations of problems $X$ and $X_{test}$. Then, the Transformer attention mechanism in the ICL setting is denoted as:

$$F_{ICL}(h_{test}) = \mathrm{Attn}(V, K, Q, h_{test}) = W_V[h, h_{test}] \cdot \mathrm{softmax}\left(\frac{(W_K[h, h_{test}])^T \cdot W_Q h_{test}}{\sqrt{d}}\right), \quad (2)$$

where $W_Q, W_K, W_V$ are the projection matrices for computing the attention queries, keys, and values, respectively. Without loss of generality, we omit $W_Q$ in $F_{ICL}(h_{test})$ because we can redefine $W_K = W_K^T \cdot W_Q$. As a result, we only keep $W_K \in \mathbb{R}^{d \times d}$, $W_V \in \mathbb{R}^{d' \times d}$ in our setting, where $d'$ is the output dimension of layer $F_{ICL}$. Following Dai et al. (2023), we approximate the attention with linear attention by removing the softmax function:

$$F_{ICL}(h_{test}) \approx W_V[h, h_{test}] \cdot \frac{(W_K[h, h_{test}])^T \cdot h_{test}}{\sqrt{d}} = \frac{W_V h_{test} \cdot (W_K h_{test})^T}{\sqrt{d}} \cdot h_{test} + \frac{W_V h \cdot (W_K h)^T}{\sqrt{d}} \cdot h_{test}. \quad (3)$$

**Analogy to Linear Optimization.** We start our analysis of Eq. (3) by considering a linear function $\mathcal{F}(z) \overset{def}{=} W \cdot z$, $W \in \mathbb{R}^{d' \times d}$, $z \in \mathbb{R}^d$. Specifically, given $\mathcal{F}(z)$ with initialized parameters $W_0$, assume we have a training sample $z_0 \in \mathbb{R}^d$; then the gradient of the loss $L(\mathcal{F})$ can be written as $\Delta W = \nabla_{\mathcal{F}} L(z_0, W_0) \cdot z_0^T$. Applying the gradient to parameter optimization, the prediction of a test sample $h_{test}$ is $\mathcal{F}(h_{test}) = W_0 \cdot h_{test} + \nabla_{\mathcal{F}} L(z_0, W_0) \cdot z_0^T \cdot h_{test}$.

Based on this idea, Eq. (3) can be interpreted as: 1) We have a linear function $\mathcal{F}(z)$ with initialized parameters

$$W_0 = \frac{W_V h_{test} \cdot (W_K h_{test})^T}{\sqrt{d}}. \quad (4)$$

2) We introduce a training sample $z_0 = W_K h$ to optimize the parameters, where the gradient at $(z_0, W_0)$ satisfies:

$$\nabla_{\mathcal{F}} L(z_0, W_0) = \frac{W_V h}{\sqrt{d}}. \quad (5)$$

3) We finally apply the optimized parameters to calculate the result of the test data $h_{test} \in \mathcal{D}_{test}$.

Under this setting, we aim to estimate the influence of the data $z_0 = W_K h$ (corresponding to demonstration $X \in \mathcal{D}$) on the loss $L(\mathcal{F}(h_{test}))$. Before the detailed derivation, we first give three mathematical annotations:

$$\hat{W} \overset{def}{=} \arg\min_W \frac{1}{|\mathcal{D}_{pre}|} \Sigma_{z \in \mathcal{D}_{pre}} L(\mathcal{F}(z)), \qquad \hat{W}_{\epsilon, z_0} \overset{def}{=} \arg\min_W \frac{1}{|\mathcal{D}_{pre}|} \Sigma_{z \in \mathcal{D}_{pre}} L(\mathcal{F}(z)) + \epsilon \cdot L(\mathcal{F}(z_0)), \qquad H_{\hat{W}} = \frac{1}{|\mathcal{D}_{pre}|} \Sigma_{z \in \mathcal{D}_{pre}} \nabla^2_W L(z, \hat{W}), \quad (6)$$

where $\mathcal{D}_{pre}$ is the data for pre-training a LLM, and $H_{\hat{W}}$ is the Hessian matrix, which is positive definite by assumption (Van, Wu et al. 2024). It is worth noting that the pre-trained parameters $\hat{W}$ are actually the initialized parameters in our above setting, i.e., $\hat{W} = W_0$. Taking $\epsilon = \frac{1}{|\mathcal{D}_{pre}|}$, the testing loss on $h_{test}$ is represented as $L(h_{test}, \hat{W}_{\frac{1}{|\mathcal{D}_{pre}|}, z_0})$. On this basis, we derive the following theorem:

**Theorem 1.** Assume $\nabla_{\mathcal{F}} L$ is Lipschitz continuous w.r.t. $\mathcal{F}$ with constant $\mu$. If inequality (7) holds true, then $L(h_{test}, \hat{W}_{\frac{1}{|\mathcal{D}_{pre}|}, z_0}) < L(h_{test}, \hat{W}_{0, z_0})$, i.e., introducing the training sample $z_0$ (i.e., demonstration $X$) can reduce the testing loss on $h_{test}$. Here $\frac{1}{\lambda_{dd'}}, \frac{1}{\lambda_1}$ are the largest and smallest eigenvalues of $H_{\hat{W}}$, respectively.

$$\frac{\lambda_{dd'}}{\lambda_1} \|\nabla_W L(h_{test}, \hat{W})\| > \|h_{test} - z_0\| \cdot \left(\left\|\frac{W_V h}{\sqrt{d}}\right\| + \mu C_1\right), \qquad C_1 = \left\|\frac{W_V h_{test}}{\sqrt{d}}\right\| \cdot \|W_K h_{test}\| \cdot \|h_{test}\| \quad (7)$$

We refer the readers to Appendix 1 for the detailed proof and explanations, and present the sketch here.

_Proof._ With $\hat{W}, \hat{W}_{\epsilon, z_0}$, the influence of upweighting $z_0$ on the empirical loss is (Ling 1984; Koh and Liang 2017):

$$I_{loss}(z) = \left.\frac{dL(h_{test}, \hat{W}_{\epsilon, z_0})}{d\epsilon}\right|_{\epsilon=0} = -\nabla_W L(h_{test}, \hat{W})^T \cdot H_{\hat{W}}^{-1} \nabla_W L(z_0, \hat{W}). \quad (8)$$

Then, the testing loss $L(h_{test}, \hat{W}_{\frac{1}{|\mathcal{D}_{pre}|}, z_0})$ can be evaluated by Taylor approximation since $\frac{1}{|\mathcal{D}_{pre}|}$ is sufficiently small:

$$L(h_{test}, \hat{W}_{\frac{1}{|\mathcal{D}_{pre}|}, z_0}) \approx L(h_{test}, \hat{W}_{0, z_0}) + \frac{1}{|\mathcal{D}_{pre}|} \left.\frac{dL(h_{test}, \hat{W}_{\epsilon, z_0})}{d\epsilon}\right|_{\epsilon=0} = L(h_{test}, \hat{W}_{0, z_0}) - \frac{1}{|\mathcal{D}_{pre}|} \nabla_W L(h_{test}, \hat{W})^T \cdot H_{\hat{W}}^{-1} \nabla_W L(z_0, \hat{W}). \quad (9)$$

Therefore, now the question turns to evaluating

$$L_1 \overset{def}{=} \nabla_W L(h_{test}, \hat{W})^T \cdot H_{\hat{W}}^{-1} \nabla_W L(z_0, \hat{W}) = \underbrace{(\nabla_W L(z_0, \hat{W}) - \nabla_W L(h_{test}, \hat{W}))^T \cdot H_{\hat{W}}^{-1} \nabla_W L(h_{test}, \hat{W})}_{L_{11}} + \underbrace{\nabla_W L(h_{test}, \hat{W})^T \cdot H_{\hat{W}}^{-1} \nabla_W L(h_{test}, \hat{W})}_{L_{12}} \quad (10)$$

Since $H_{\hat{W}}$ is positive definite, we denote $\lambda_1 \ge \lambda_2 \ge ... \ge \lambda_{dd'} > 0$ as the eigenvalues of $H_{\hat{W}}^{-1}$ and can prove that

$$L_{11} \ge -\lambda_1 \|\nabla_W L(h_{test}, \hat{W})\| \cdot \left(\|\nabla_{\mathcal{F}} L(h_{test}, \hat{W}) - \nabla_{\mathcal{F}} L(z_0, \hat{W})\| \cdot \|h_{test}\| + \|\nabla_{\mathcal{F}} L(z_0, \hat{W})\| \cdot \|h_{test} - z_0\|\right). \quad (11)$$

Since $\nabla_{\mathcal{F}} L$ is Lipschitz continuous, we get

$$L_{11} \ge -\lambda_1 \|\nabla_W L(h_{test}, \hat{W})\| \cdot \left(\mu \|\hat{W}(h_{test} - z_0)\| \cdot \|h_{test}\| + \|\nabla_{\mathcal{F}} L(z_0, \hat{W})\| \cdot \|h_{test} - z_0\|\right). \quad (12)$$

Applying Eqs. (4) and (5) into Eq. (12), we have:

$$\|\hat{W}(h_{test} - z_0)\| \le \left\|\frac{W_V h_{test}}{\sqrt{d}}\right\| \cdot \|W_K h_{test}\| \cdot \|h_{test} - z_0\|, \quad (13)$$
$$\|\nabla_{\mathcal{F}} L(z_0, \hat{W})\| \cdot \|h_{test} - z_0\| = \left\|\frac{W_V h}{\sqrt{d}}\right\| \cdot \|h_{test} - z_0\|. \quad (14)$$

For $L_{12}$, we similarly prove that:

$$L_{12} = \Sigma_{i=1}^{dd'} \lambda_i b_i^2 \ge \lambda_{dd'} \|\nabla_W L(h_{test}, \hat{W})\|^2 \quad (15)$$

Combining Eqs. (12)-(15), we finally get:

$$L_1 \ge \lambda_{dd'} \|\nabla_W L(h_{test}, \hat{W})\|^2 - \lambda_1 \|\nabla_W L(h_{test}, \hat{W})\| \cdot \left(\mu \cdot C_1 \cdot \|h_{test} - z_0\| \cdot \|h_{test}\| + \left\|\frac{W_V h}{\sqrt{d}}\right\| \cdot \|h_{test} - z_0\|\right). \quad (16)$$

According to Eq. (7), the right-hand side of Eq. (16) is greater than 0, which leads to the conclusion. □

**Extension to k-shot setting.** In Theorem 1, we only consider one demonstration $X$ (i.e., the one-shot scenario). For the $k$-shot scenario, Eq. (3) can be written as

$$F_{ICL}^k(h_{test}) \approx \frac{W_V h_{test} \cdot (W_K h_{test})^T}{\sqrt{d}} \cdot h_{test} + \Sigma_{i=1}^k \frac{W_V h_i \cdot (W_K h_i)^T}{\sqrt{d}} \cdot h_{test}, \quad (17)$$

where $h_1, ..., h_k$ are the representations of demonstrations $X_1, ..., X_k$. This formalization can be interpreted as introducing $k$ training samples $z_1 = W_K h_1, ..., z_k = W_K h_k$ to optimize the linear function $\mathcal{F}(z)$ simultaneously, where the gradient at each training sample $z_i$ satisfies

$$\nabla_{\mathcal{F}} L(z_i, W_0) = \frac{W_V h_i}{\sqrt{d}}. \quad (18)$$

Similar to the proof of Theorem 1, we derive the following Theorem 2 to illustrate the condition on these samples to ensure a reduction in the loss of testing data $X_{test}$, where

$$\hat{W}_{\epsilon, \bar{z}_k} \overset{def}{=} \arg\min_W \frac{1}{|\mathcal{D}_{pre}|} \Sigma_{z \in \mathcal{D}_{pre}} L(\mathcal{F}(z)) + \epsilon \cdot \Sigma_{i=1}^k L(\mathcal{F}(z_i)). \quad (19)$$

**Theorem 2.** Assume $\nabla_{\mathcal{F}} L$ is Lipschitz continuous w.r.t. $\mathcal{F}$ with constant $\mu$. If inequality (20) holds true, then $L(h_{test}, \hat{W}_{\frac{1}{|\mathcal{D}_{pre}|}, \bar{z}_k}) < L(h_{test}, \hat{W}_{0, \bar{z}_k})$, i.e., introducing training samples $\{z_1, ..., z_k\}$ (i.e., demonstrations $X_1, ..., X_k$) can reduce the testing loss on $h_{test}$.

$$\frac{k\lambda_{dd'}}{\lambda_1} \|\nabla_W L(h_{test}, \hat{W})\| > \Sigma_{i=1}^k \|h_{test} - z_i\| \cdot \left(\left\|\frac{W_V h_i}{\sqrt{d}}\right\| + \mu C_1\right) \quad (20)$$

Theorem 2 further indicates that the joint effect of different demonstrations follows an additive relationship. This implies that the selection of $k$ different demonstrations can be approximately considered independently.
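As a quick numerical sanity check of the rank-one decomposition used in Eqs. (3) and (17), the following sketch compares softmax-free attention over stacked demonstrations with the sum of per-demonstration terms; the random matrices are purely illustrative and not part of the method:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_out, k = 8, 6, 3                        # hidden size, output size, number of demonstrations
W_V, W_K = rng.normal(size=(d_out, d)), rng.normal(size=(d, d))
h_test = rng.normal(size=(d, 1))
demos = [rng.normal(size=(d, 1)) for _ in range(k)]

# Linear (softmax-free) attention over the stacked keys/values [h_1, ..., h_k, h_test].
H = np.hstack(demos + [h_test])              # d x (k+1)
lhs = (W_V @ H) @ ((W_K @ H).T @ h_test) / np.sqrt(d)

# Sum of rank-one terms, one per demonstration plus one for the test input (Eq. 17).
rhs = (W_V @ h_test) @ (W_K @ h_test).T @ h_test / np.sqrt(d)
for h_i in demos:
    rhs += (W_V @ h_i) @ (W_K @ h_i).T @ h_test / np.sqrt(d)

print(np.allclose(lhs, rhs))                 # True: the contributions of demonstrations are additive
```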
### 4 LMS3: Method Design

Based on Section 3, an ideal demonstration $X$ needs to maximize the value of $L_1$ (i.e., minimize the empirical testing loss $L(h_{test}, \hat{W}_{\frac{1}{|\mathcal{D}_{pre}|}, z_0})$ in Eq. (9)). This is equivalent to minimizing the right-hand side of Eq. (7) according to Eq. (16) and can be further divided into: 1) minimize the value

$$Sim(X) \overset{def}{=} \|h_{test} - W_K^T \cdot W_Q h\|, \quad (21)$$

(recall $z_0 = W_K h$ and $W_K$ is indeed $W_K^T \cdot W_Q$ in the aforementioned section), and 2) minimize the value

$$Stab(X) \overset{def}{=} \left\|\frac{W_V h}{\sqrt{d}}\right\|. \quad (22)$$

Specifically, $Sim(X)$ reflects an **LLM-oriented Semantic Similarity** between the demonstration $X$ and the test data $X_{test}$. It goes beyond traditional methods by taking into account 1) the whole reasoning path of demonstrations (recall $X$ includes both the problem context and the solution) and 2) the characteristics of the inference LLM itself, which is more consistent with intuition. The value of $Stab(X)$ is an evaluation of the **Inference Stability of Demonstration** $X$. Based on Eq. (5), $Stab(X)$ is indeed the length of the gradient of the loss function on $X$. If $Stab(X)$ is low, it indicates that the LLM has reached a stable prediction with minimal loss on $X$, and the parameters will not be excessively altered due to the introduction of this sample.

Since it is hard to simultaneously achieve the minimum of $Sim(X)$ and $Stab(X)$, two intuitive approximations are to minimize a demonstration scoring function that calculates their sum or product as follows:

$$Score(X) = Sim(X) + \lambda_1 \cdot Stab(X), \quad (23)$$

$$Score(X) = Sim(X) \cdot Stab(X). \quad (24)$$

However, Eq. (23) requires considering the scale differences between the two objectives and adjusting the hyperparameter $\lambda_1$ based on different LLMs and datasets, which is challenging to apply in practice. Therefore, we prefer Eq. (24) as the basic form of our scoring function. To implement $k$-shot in-context learning, we can select the top $k$ samples with the highest $Score(X)$ as demonstrations according to Theorem 2, which can ensure that the most relevant and stable samples are used to enhance the LLM's performance.

Furthermore, we design a demonstration rejection mechanism, which is essential but has not yet been fully explored. For instance, it is possible that the examples with the highest $Score(X)$ still do not satisfy Eq. (20). In such cases, unlike existing methods that always select the top $k$ examples, we tend to refuse to provide any demonstration and instead use a zero-shot approach, because our theorems suggest that providing examples in this case will have a negative effect. We control $Sim(X)$ to achieve this rejection mechanism, because if an example's $Sim(X)$ is already too large, $Sim(X) \cdot \mu C_1$ might have exceeded the left-hand side of Eq. (7). However, setting an absolute threshold for $Sim(X)$ is challenging since $\mu, C_1$ are unknown, and calculating the gradient norm $\|\nabla_W L(h_{test}, \hat{W})\|$ is costly. Therefore, we adopt a simplified relative threshold. We expect that the $Sim(X)$ of an ideal demonstration should be as small as possible relative to all examples. Consequently, we rank $Sim(X)$ of all candidate examples. If a demonstration $X$ ranked top-$k$ in $Score(X)$ does not have a $Sim(X)$ value within the top $\lambda$ smallest, we refuse to select it.

Theoretically, to compute $Score(X)$, we need to input the concatenation of each "(demonstration, testing data)" pair $(X, X_{test})$ into the LLM to obtain their semantic representations $h, h_{test}$. However, in practice, this process requires
[Figure 2 schematic omitted: it depicts a test problem being compared with candidate demonstrations via the LLM-oriented Semantic Similarity $Sim(X)$ and the Inference Stability of Demonstration $Stab(X)$.]
Figure 2: Illustration of our proposed LMS3 method.
| Method | Theoretical Guarantee | Dependency on LLM | Generality | Complexity |
|---|---|---|---|---|
| Similar-ICL | ✗ | ✗ | ✓ | O(M + N) |
| Influence | ✗ | ✓ | ✓ | O(MV/S + N) |
| InfICL | ✗ | ✓ | ✗ | O(DM + DV + N) |
| TopK+MDL | ✗ | ✓ | ✗ | O(M + KN) |
| TopK+ConE | ✗ | ✓ | ✗ | O(M + KN) |
| IDS | ✗ | ✓ | ✓ | O(M + QN) |
| MI | ✗ | ✓ | ✗ | O(MN) |
| SPELL | ✗ | ✓ | ✓ | O(MR + N) |
| **LMS3 (ours)** | ✓ | ✓ | ✓ | O(M + N) |
Table 1: Comparison of different methods, including Similar-ICL (Liu et al. 2022; Zhang et al. 2023; Fu et al. 2022; Chen et al. 2024), Influence (Nguyen and Wong 2023) ($S$ is the size of the subset used to estimate influences, $V$ is the size of the validation set), InfICL (Van, Wu et al. 2024) ($D$ is the number of parameters of external LLMs), TopK+MDL (Wu et al. 2023), TopK+ConE (Peng et al. 2024) ($K$ is the number of candidate demonstrations), IDS (Qin et al. 2023) ($Q$ is the number of iterations), MI (Sorensen et al. 2022), and SPELL (Gonen et al. 2023) ($R$ is the number of samples for estimating perplexity). A ✗ in the Generality column indicates that the method is more suitable for classification tasks and hard to implement for the mathematical reasoning task.
$O(MN)$ complexity (measured by the number of LLM API calls) for testing, which significantly limits the efficiency.
Therefore, we adopt an approximation by inputting each data point individually into the LLM to obtain its semantic representation, reducing the testing complexity to $O(M + N)$ (the representations of all demonstrations can be pre-computed).
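A minimal sketch of this selection and rejection procedure, assuming access to per-problem hidden representations and one attention layer's projection matrices from the inference LLM. All identifiers below are illustrative rather than the paper's released implementation, and the sketch ranks candidates by ascending Score, following the minimization objective in Eqs. (21)-(24):

```python
import numpy as np

def lms3_select(h_test, candidates, W_K, W_Q, W_V, k=1, reject_quantile=0.2):
    """Rank candidate demonstrations by Score = Sim * Stab and apply the rejection rule.

    candidates: list of (demo_id, h) pairs, where h is the LLM representation of the
    demonstration (problem plus solution). Returns at most k demo_ids; an empty list
    means "fall back to zero-shot" because the top-scored demos fail the Sim threshold.
    """
    d = h_test.shape[0]
    sims, stabs = {}, {}
    for demo_id, h in candidates:
        sims[demo_id] = np.linalg.norm(h_test - W_K.T @ W_Q @ h)   # LLM-oriented similarity, Eq. (21)
        stabs[demo_id] = np.linalg.norm(W_V @ h) / np.sqrt(d)      # inference stability, Eq. (22)

    scores = {i: sims[i] * stabs[i] for i in sims}                 # product form, Eq. (24)
    ranked = sorted(scores, key=scores.get)                        # smaller Sim and Stab preferred

    # Relative rejection threshold: a selected demo must be within the smallest
    # reject_quantile fraction of Sim values, otherwise it is dropped.
    sim_cutoff = np.quantile(list(sims.values()), reject_quantile)
    return [i for i in ranked[:k] if sims[i] <= sim_cutoff]

# Toy usage with random representations.
rng = np.random.default_rng(0)
d = 16
W_K, W_Q, W_V = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(d, d))
pool = [(f"demo-{i}", rng.normal(size=d)) for i in range(100)]
print(lms3_select(rng.normal(size=d), pool, W_K, W_Q, W_V, k=2))
```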
We illustrate the workflow of our method, named LMS3,
in Figure 2 and present the pseudo-code in Appendix 2.
LMS3 has several advantages as summarized in Table 1. 1)
**Theoretical Guarantee: To the best of our knowledge, we**
are the first to theoretically quantify the impact of demonstrations on ICL reasoning performance and explain why
and when they work. 2) Rational Dependency: Our analysis verifies that the optimal demonstration depends on the
inference LLM (i.e., how the representations h, htest are encoded). This is reasonable because a LLM’s understanding
of similar problems sets the upper limit on its ability to leverage these problems (Peng et al. 2024). Consequently, the
optimal demonstration should be selected adaptively for different LLMs. However, existing methods like Similar-ICL
estimate semantic similarity independently of the inference
LLM and the demonstration is the same for all LLMs. 3)
**Generalization Ability:** If we set W_K^T · W_Q = I as an identity matrix and omit Stab(X), our method degenerates into
finding the demonstration with the closest semantic representation to the test data. This perspective unifies the current approaches, summarizing their main differences in the
setting of W_K^T · W_Q to obtain semantic representations. At
the same time, our method, which analyzes the impact of
demonstrations on the test loss, is not dependent on the task
type. In addition to mathematical reasoning, it is also applicable to other generation tasks or classification tasks, which
shows superior generalization ability. 4) Low Complexity:
Compared to methods based on impact estimation or LLMs’
feedback (Van, Wu et al. 2024; Nguyen and Wong 2023;
Chang and Jia 2023), our method does not require additional
external LLMs, repeated testing of demonstration effects on
validation set, or the computation of Hessian matrix, which
brings much lower complexity.
### 5 Experiments
#### 5.1 Experimental Setup
**Datasets. We use three datasets that cover a variety of types**
and difficulty levels. MAWPS (Koncel-Kedziorski et al.
2016) consists of 2,373 elementary-level math word problems. GSM8K (Cobbe et al. 2021) is composed of 8,792
high-quality, more challenging elementary math problems
with a higher number of steps. MATH (Hendrycks et al.
2021) is collected from high school math competition, containing 12,500 problems across seven categories such as algebra, geometry, and number theory, and is currently one
of the most widely used benchmarks. Dataset partition and
statistics are presented in Appendix 3.
**Baselines. We use Llama2-13B (Touvron et al. 2023) and**
Llama3-8B (Meta 2024) as the backbones to validate our
method (please see Appendix 4 for implementation details)
and take 10 representative and SOTA baselines including:
- Random randomly selects demonstrations from D.
- Best-validate tests the performance of each data on a validation set, selecting the one with the highest accuracy,
and some typical Similar-ICL methods:
- TF-IDF represents each problem as a TF-IDF vector, and
selects the nearest sample to the test data.
- BM25 (Robertson, Zaragoza et al. 2009) selects demonstrations by retrieval method BM25.
- T5 (Raffel et al. 2020) encodes problems with T5-large
model and selects the most similar one.
- BGEM3 (Chen et al. 2024) integrates multiple information retrieval functionalities in a unified embedding.
-----
Table 2: One-shot Answer Accuracy, with the best and runner-up methods highlighted in bold and underlined, respectively.

| Method | Llama2-13B MAWPS | Llama2-13B GSM8K | Llama2-13B MATH | Llama3-8B MAWPS | Llama3-8B GSM8K | Llama3-8B MATH |
|---|---|---|---|---|---|---|
| zero-shot | 0.835 ± 0.009 | 0.414 ± 0.004 | 0.096 ± 0.005 | 0.951 ± 0.004 | 0.820 ± 0.016 | 0.324 ± 0.022 |
| Random | 0.816 ± 0.004 | 0.405 ± 0.007 | 0.090 ± 0.010 | 0.951 ± 0.005 | 0.813 ± 0.003 | 0.330 ± 0.009 |
| Best-validate | 0.826 ± 0.001 | 0.410 ± 0.005 | 0.096 ± 0.007 | 0.932 ± 0.000 | 0.817 ± 0.008 | 0.332 ± 0.008 |
| TF-IDF | 0.826 ± 0.021 | 0.424 ± 0.007 | 0.099 ± 0.006 | 0.945 ± 0.009 | 0.803 ± 0.007 | 0.344 ± 0.005 |
| BM25 | 0.815 ± 0.008 | 0.416 ± 0.014 | 0.098 ± 0.007 | 0.932 ± 0.003 | 0.805 ± 0.002 | 0.334 ± 0.004 |
| T5 | 0.810 ± 0.004 | 0.426 ± 0.013 | 0.093 ± 0.006 | 0.948 ± 0.021 | 0.817 ± 0.002 | 0.330 ± 0.009 |
| BGEM3 | 0.818 ± 0.013 | 0.407 ± 0.004 | 0.100 ± 0.011 | 0.938 ± 0.017 | 0.802 ± 0.000 | 0.340 ± 0.005 |
| OpenAI | 0.805 ± 0.014 | 0.416 ± 0.005 | 0.101 ± 0.002 | 0.965 ± 0.011 | 0.809 ± 0.008 | 0.346 ± 0.002 |
| SPELL | 0.797 ± 0.009 | 0.394 ± 0.006 | 0.085 ± 0.003 | 0.945 ± 0.005 | 0.821 ± 0.008 | 0.343 ± 0.004 |
| Influence | 0.836 ± 0.010 | 0.405 ± 0.009 | 0.102 ± 0.000 | 0.929 ± 0.009 | 0.800 ± 0.015 | 0.333 ± 0.006 |
| IDS | 0.839 ± 0.005 | 0.424 ± 0.012 | 0.088 ± 0.001 | 0.920 ± 0.003 | 0.808 ± 0.001 | 0.330 ± 0.001 |
| **LMS3 (ours)** | **0.854\* ± 0.008** | **0.447\* ± 0.014** | **0.124\* ± 0.003** | **0.966 ± 0.010** | **0.837\* ± 0.011** | **0.353\* ± 0.002** |
- OpenAI (Neelakantan et al. 2022) adopts the OpenAI Text-Embedding-3-Small model for problem representation,
as well as methods that do not rely on problem similarity:
- SPELL (Gonen et al. 2023) selects demonstrations by
calculating their individual perplexity.
- Influence (Nguyen and Wong 2023) divides D into multiple subsets. The preference of a demonstration is calculated by the difference in validation accuracy between
subsets that include and exclude it.
- IDS (Qin et al. 2023) iteratively selects training samples
as demonstrations based on reasoning path similarity.
#### 5.2 Performance on One-shot Reasoning
In Table 2, we present the performance of all methods in
the one-shot setting. Firstly, it can be seen that our LMS3
outperforms all baselines across all datasets, and this effect is statistically significant with p ≤ 0.05 (marked ∗).
This directly indicates that the demonstrations chosen by
our method better stimulate the LLM’s contextual learning ability. Secondly, our LMS3 is the only one that consistently provides improvements over the zero-shot setting,
while other methods exhibit certain fluctuations across different datasets. This can be attributed to our method being
designed based on a theoretical analysis of when one-shot
learning is effective (i.e., Theorem 1). These experimental
results validate the rationality, effectiveness, and strong robustness of our theoretical findings. Thirdly, we observe that
one-shot learning generally improves the backbone’s performance on the more challenging MATH dataset, but sometimes shows a decrease on other datasets. We believe this
is because the problems in MAWPS and GSM8K are relatively simple, and the LLM itself already has the capability
to solve them. Introducing additional examples in this case
might instead mislead the model's reasoning.
Figure 3: Few-shot Answer Accuracy of Llama3-8B.
#### 5.3 Performance on Few-shot Reasoning
Now we validate our LMS3 in the k-shot scenario, with Llama3-8B's performance at k ∈ {2, 3, 4} visualized in Figure 3. Firstly, it indicates that our method remains superior across different settings, which is consistent with our Theorem 2, demonstrating the high applicability of our theorem to various scenarios. Secondly, as k increases, the trend
of reasoning performance varies across different datasets.
Specifically, on MAWPS and MATH, the performances of
most methods consistently improve with a higher k. However, on GSM8K, the accuracy for almost all methods declines after k = 3. This highlights the need to balance
the number and length of demonstrations, as an excessive
number of demonstrations does not necessarily lead to increased accuracy. A dataset with longer problem lengths
(i.e., GSM8K as indicated in Appendix 3) may require fewer
examples to achieve optimal performance.
#### 5.4 LMS3 Analysis
**Analysis of Scoring Function. Figure 4 presents the dis-**
tribution of Score(X) in Eq. (24) normalized by z-score,
which verifies that our Score(X) has good discriminative
power for different samples. More importantly, we observe
Figure 4: Distribution of Score(X) in Eq. (24).
-----
| Method | ChatGPT MAWPS | ChatGPT GSM8K | ChatGPT MATH | GPT-4 MAWPS | GPT-4 GSM8K | GPT-4 MATH |
|---|---|---|---|---|---|---|
| zero-shot | 0.906 | 0.800 | 0.482 | 0.941 | 0.954 | **0.758** |
| Random | 0.858 | 0.839 | 0.503 | 0.976 | 0.946 | 0.702 |
| Best-validate | 0.831 | 0.832 | **0.519** | 0.979 | 0.951 | 0.715 |
| TF-IDF | 0.895 | 0.820 | 0.514 | 0.975 | 0.947 | 0.724 |
| BM25 | 0.901 | 0.828 | 0.510 | 0.987 | 0.953 | 0.691 |
| T5 | 0.893 | 0.840 | 0.508 | 0.973 | 0.950 | 0.718 |
| BGEM3 | 0.896 | 0.838 | 0.504 | 0.986 | 0.955 | 0.705 |
| OpenAI | 0.898 | 0.829 | 0.513 | 0.979 | 0.945 | 0.699 |
| Influence | 0.878 | 0.848 | 0.515 | 0.974 | 0.955 | 0.702 |
| IDS | 0.908 | 0.848 | 0.505 | 0.979 | 0.959 | 0.742 |
| **LMS3 (ours)** | **0.909** | **0.862** | 0.517 | **0.990** | **0.961** | 0.752 |

Table 3: Generalization performance on ChatGPT/GPT-4.
Figure 5: Performance with varying λ. The dashed line corresponds to the result of the zero-shot setting.
that the variances of the distributions for Llama2 on GSM8K
and MATH, Llama3 on MATH, are relatively small. This indicates that the differences between samples in these cases
are not significant, which can explain why most other oneshot baselines also perform better than the zero-shot setting
in the corresponding columns of Table 2. In contrast, in other
cases (e.g., on MAWPS), the performance gap between different samples is larger, and only our LMS3 can consistently
achieve better results than zero-shot setting.
Furthermore, to validate the necessity of our proposed
demonstration rejection mechanism, we test the effects of
λ ∈ {1%, 5%, 10%, 20%, 40%, 60%, 80%, 100%}. It is
noteworthy that when λ = 100%, our rejection mechanism
is essentially removed. From Figure 5, we can first observe
that when λ = 100%, the accuracy of our LMS3 sometimes
falls below that of the zero-shot results, which highlights the
necessity of our rejection mechanism. Secondly, we notice
that as λ increases, the performance of Llama2 initially rises
and then falls, while the performance of Llama3 consistently
declines. On one hand, this indicates that the strength of λ
needs to be balanced differently for various LLMs, but this
process is not complicated since the optimal λ is basically
within 10%. On the other hand, this demonstrates that our
_Sim(X) can effectively approximate the conditions in The-_
orems 1 and 2, as using it to implement the rejection mechanism can improve the model’s performance.
**Generalization Ability. One advantage of our method is**
that it reasonably considers the inference LLM when selecting demonstrations. However, it may raise concerns about
its extensibility, as it requires access to the LLM’s internal
parameters. To address this issue, we offer the demonstrations selected by LMS3 (Llama3-8B) directly to the most
advanced LLMs ChatGPT and GPT-4, compared with other
methods that do not rely on the LLM’s parameters. From
Table 3, we can see that our LMS3 still achieves nearly op
Figure 6: Case Study (Case 1).
timal results, demonstrating the excellent generalization and
representativeness of our selected demonstrations. Besides,
we observe that all methods negatively impact the reasoning performance on MATH dataset when applied to GPT-4.
We attribute this to that chain-of-thought examples may no
longer be significantly beneficial for GPT-4, and future examples for GPT-4 might need to focus more on code (Chen
et al. 2023; Gao et al. 2023) or other formats.
**Case Study. We present three cases to validate the inter-**
pretability of our LMS3. Due to space limit, we show case
1 in Figure 6 and cases 2 and 3 in Appendix 5, and we omit
the solutions in the demonstrations for brevity.
For cases 1 and 2, the baselines OpenAI and Best-validate
both made errors. This indicates that considering only the
semantic similarity of demonstrations or the effectiveness of
demonstrations on the validation set alone, is insufficient. It
is essential to balance similarity and the inference stability
of demonstrations, as LMS3 does, to consistently achieve
better results compared to zero-shot setting. In case 3, we
again observe that the two baselines incorrectly answer a
problem that zero-shot got right. In contrast, LMS3’s rejection mechanism determines that the best demonstration still
has a relatively large similarity distance Sim(X) from the
test sample (ranked in the top 1.19% exceeding λ = 1%).
By automatically rejecting this demonstration and adopting
the zero-shot setting, LMS3 maintains the original performance, which verifies the necessity and effectiveness of our
proposed demonstration rejection mechanism.
### 6 Conclusion and Future Work
In this paper, we theoretically analyzed how demonstrations affected LLMs’ mathematical reasoning performance.
On this basis, we proposed a LMS3 method that balanced
LLM-oriented semantic similarity and inference stability
of demonstrations, and introduced a demonstration rejection mechanism to filter out negative situations. Experiments
showed that our method was the only one to consistently
improve the reasoning accuracy of LLMs, and our demonstrations exhibited strong generalization ability and interpretability. In the future, we will extend our method to more
NLP tasks and apply our theory to broader scenarios. Please
refer to Appendix 6 for more discussions and details.
-----
### References
Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya,
I.; et al. 2023. Gpt-4 technical report. _arXiv preprint_
_arXiv:2303.08774._
An, S.; Lin, Z.; Fu, Q.; Chen, B.; Zheng, N.; Lou, J.-G.;
and Zhang, D. 2023. How Do In-Context Examples Affect
Compositional Generalization? In Proceedings of the 61st
_Annual Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), 11027–11052._
Asai, A.; Wu, Z.; Wang, Y.; Sil, A.; and Hajishirzi, H.
2024. Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection. In The Twelfth International
_Conference on Learning Representations._
Besta, M.; Blach, N.; Kubicek, A.; et al. 2024. Graph of
thoughts: Solving elaborate problems with large language
models. In AAAI, volume 38, 17682–17690.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.;
et al. 2020. Language models are few-shot learners. Ad_vances in neural information processing systems, 33: 1877–_
1901.
Chang, T.-Y.; and Jia, R. 2023. Data Curation Alone Can
Stabilize In-context Learning. In Proceedings of the 61st
_Annual Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), 8123–8144._
Chen, J.; Xiao, S.; Zhang, P.; et al. 2024. Bge
m3-embedding: Multi-lingual, multi-functionality, multigranularity text embeddings through self-knowledge distillation. arXiv preprint arXiv:2402.03216.
Chen, W.; Ma, X.; Wang, X.; and Cohen, W. W. 2023. Program of Thoughts Prompting: Disentangling Computation
from Reasoning for Numerical Reasoning Tasks. Transac_tions on Machine Learning Research._
Cobbe, K.; Kosaraju, V.; Bavarian, M.; Chen, M.; Jun, H.;
Kaiser, L.; Plappert, M.; Tworek, J.; Hilton, J.; Nakano, R.;
et al. 2021. Training verifiers to solve math word problems.
_arXiv preprint arXiv:2110.14168._
Dai, D.; Sun, Y.; Dong, L.; Hao, Y.; Ma, S.; Sui, Z.; and Wei,
F. 2023. Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers.
In Findings of the Association for Computational Linguis_tics: ACL 2023, 4005–4019._
Dong, Q.; Li, L.; Dai, D.; Zheng, C.; Wu, Z.; Chang, B.;
Sun, X.; Xu, J.; and Sui, Z. 2022. A survey on in-context
learning. arXiv preprint arXiv:2301.00234.
Feigenbaum, E. A.; Feldman, J.; et al. 1963. Computers and
_thought. New York McGraw-Hill._
Fletcher, C. R. 1985. Understanding and solving arithmetic
word problems: A computer simulation. Behavior Research
_Methods, Instruments, & Computers, 17(5): 565–571._
Fu, Y.; Peng, H.; Sabharwal, A.; Clark, P.; and Khot, T. 2022.
Complexity-based prompting for multi-step reasoning. In
_The Eleventh International Conference on Learning Repre-_
_sentations._
Gao, L.; Madaan, A.; Zhou, S.; Alon, U.; Liu, P.; Yang,
Y.; Callan, J.; and Neubig, G. 2023. Pal: Program-aided
language models. In International Conference on Machine
_Learning, 10764–10799. PMLR._
Gao, X.; and Das, K. 2024. Customizing Language Model
Responses with Contrastive In-Context Learning. In Pro_ceedings of the AAAI Conference on Artificial Intelligence,_
volume 38, 18039–18046.
Gonen, H.; Iyer, S.; Blevins, T.; Smith, N. A.; and Zettlemoyer, L. 2023. Demystifying Prompts in Language Models
via Perplexity Estimation. In Findings of the Association for
_Computational Linguistics: EMNLP 2023, 10136–10148._
Han, C.; Wang, Z.; Zhao, H.; and Ji, H. 2023. Explaining
emergent in-context learning as kernel regression. _arXiv_
_preprint arXiv:2305.12766._
Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart,
S.; Tang, E.; Song, D.; and Steinhardt, J. 2021. Measuring Mathematical Problem Solving With the MATH Dataset.
In Thirty-fifth Conference on Neural Information Processing
_Systems Datasets and Benchmarks Track (Round 2)._
Koh, P. W.; and Liang, P. 2017. Understanding black-box
predictions via influence functions. In International confer_ence on machine learning, 1885–1894. PMLR._
Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa,
Y. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:
22199–22213.
Koncel-Kedziorski, R.; Roy, S.; Amini, A.; et al. 2016.
MAWPS: A math word problem repository. In NAACL-HLT,
1152–1157.
Lewkowycz, A.; Andreassen, A.; Dohan, D.; et al. 2022.
Solving quantitative reasoning problems with language
models. Advances in Neural Information Processing Sys_tems, 35: 3843–3857._
Lightman, H.; Kosaraju, V.; Burda, Y.; et al. 2023. Let’s Verify Step by Step. In The Twelfth International Conference on
_Learning Representations._
Ling, R. F. 1984. Residuals and influence in regression.
Liu, J.; Huang, Z.; Zhai, C.; and Liu, Q. 2023. Learning
by applying: A general framework for mathematical reasoning via enhancing explicit knowledge learning. In Proceed_ings of the AAAI Conference on Artificial Intelligence, vol-_
ume 37, 4497–4506.
Liu, J.; Shen, D.; Zhang, Y.; Dolan, W. B.; Carin, L.; and
Chen, W. 2022. What Makes Good In-Context Examples for
GPT-3? In Proceedings of Deep Learning Inside Out (Dee_LIO 2022): The 3rd Workshop on Knowledge Extraction and_
_Integration for Deep Learning Architectures, 100–114._
Luo, H.; Sun, Q.; Xu, C.; Zhao, P.; Lou, J.; Tao, C.; Geng,
X.; Lin, Q.; Chen, S.; and Zhang, D. 2023a. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint
_arXiv:2308.09583._
Luo, M.; Xu, X.; Dai, Z.; Pasupat, P.; Kazemi, M.; Baral,
C.; Imbrasaite, V.; and Zhao, V. Y. 2023b. Dr. icl:
Demonstration-retrieved in-context learning. arXiv preprint
_arXiv:2305.14128._
-----
Meta, A. 2024. Introducing meta llama 3: The most capable
openly available llm to date. Meta AI.
Min, S.; Lyu, X.; Holtzman, A.; Artetxe, M.; Lewis, M.;
Hajishirzi, H.; and Zettlemoyer, L. 2022. Rethinking the
Role of Demonstrations: What Makes In-Context Learning
Work? In Proceedings of the 2022 Conference on Empirical
_Methods in Natural Language Processing, 11048–11064._
Neelakantan, A.; Xu, T.; Puri, R.; Radford, A.; Han, J. M.;
Tworek, J.; Yuan, Q.; Tezak, N.; Kim, J. W.; Hallacy, C.;
et al. 2022. Text and code embeddings by contrastive pretraining. arXiv preprint arXiv:2201.10005.
Nguyen, T.; and Wong, E. 2023. In-context example selection with influences. arXiv preprint arXiv:2302.11042.
Pan, J.; Gao, T.; Chen, H.; and Chen, D. 2023. What InContext Learning” Learns” In-Context: Disentangling Task
Recognition and Task Learning. In The 61st Annual Meeting
_Of The Association For Computational Linguistics._
Peng, K.; Ding, L.; Yuan, Y.; Liu, X.; Zhang, M.; Ouyang,
Y.; and Tao, D. 2024. Revisiting demonstration selection strategies in in-context learning. _arXiv preprint_
_arXiv:2401.12087._
Qin, C.; Zhang, A.; Dagar, A.; and Ye, W. 2023. In-context
learning with iterative demonstration selection. _arXiv_
_preprint arXiv:2310.09881._
Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.;
Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020. Exploring the limits of transfer learning with a unified text-to-text
transformer. Journal of machine learning research, 21(140):
1–67.
Robertson, S.; Zaragoza, H.; et al. 2009. The probabilistic
relevance framework: BM25 and beyond. Foundations and
_Trends® in Information Retrieval, 3(4): 333–389._
Sorensen, T.; Robinson, J.; Rytting, C.; Shaw, A.; Rogers,
K.; Delorey, A.; Khalil, M.; Fulda, N.; and Wingate, D.
2022. An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels. In Proceedings of the
_60th Annual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), 819–862._
Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.;
Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale,
S.; et al. 2023. Llama 2: Open foundation and fine-tuned
chat models. arXiv preprint arXiv:2307.09288.
Trinh, T. H.; Wu, Y.; Le, Q. V.; He, H.; and Luong, T. 2024.
Solving olympiad geometry without human demonstrations.
_Nature, 625(7995): 476–482._
Van, M.-H.; Wu, X.; et al. 2024. In-Context Learning
Demonstration Selection via Influence Analysis. _arXiv_
_preprint arXiv:2402.11750._
Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.;
Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022. Chain-ofthought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems,
35: 24824–24837.
Wu, Z.; Wang, Y.; Ye, J.; and Kong, L. 2023. Self-Adaptive
In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering. In Pro_ceedings of the 61st Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers), 1423–_
1436.
Yan, J.; Xu, J.; Song, C.; Wu, C.; Li, Y.; and Zhang, Y. 2024.
Understanding In-Context Learning from Repetitions. In
_The Twelfth International Conference on Learning Repre-_
_sentations._
Yao, S.; Yu, D.; Zhao, J.; et al. 2024. Tree of thoughts: Deliberate problem solving with large language models. Ad_vances in Neural Information Processing Systems, 36._
Ye, J.; Wu, Z.; Feng, J.; Yu, T.; and Kong, L. 2023. Compositional exemplars for in-context learning. In International
_Conference on Machine Learning, 39818–39833. PMLR._
Yu, L.; Jiang, W.; Shi, H.; Jincheng, Y.; Liu, Z.; Zhang,
Y.; Kwok, J.; Li, Z.; Weller, A.; and Liu, W. 2024. MetaMath: Bootstrap Your Own Mathematical Questions for
Large Language Models. In The Twelfth International Con_ference on Learning Representations._
Yue, X.; Qu, X.; Zhang, G.; Fu, Y.; Huang, W.; Sun, H.; Su,
Y.; and Chen, W. 2024. MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning. In The
_Twelfth International Conference on Learning Representa-_
_tions._
Zhang, D.; Wang, L.; et al. 2020. The Gap of Semantic Parsing: A Survey on Automatic Math Word Problem Solvers.
_IEEE Transactions on Pattern Analysis and Machine Intelli-_
_gence, 42(9): 2287–2305._
Zhang, P.; Xiao, S.; Liu, Z.; Dou, Z.; and Nie, J.-Y. 2023.
Retrieve anything to augment large language models. arXiv
_preprint arXiv:2310.07554._
-----
### Paper Checklist
This paper
- Includes a conceptual outline and/or pseudocode description of AI methods introduced. (Yes)
- Clearly delineates statements that are opinions, hypotheses, and speculation from objective facts and results.
(Yes)
- Provides well marked pedagogical references for less-familiar readers to gain the background necessary to replicate the paper. (Yes)
Does this paper make theoretical contributions? (Yes)
If yes, please complete the list below.
- All assumptions and restrictions are stated clearly and
formally. (Yes)
- All novel claims are stated formally (e.g., in theorem
statements). (Yes)
- Proofs of all novel claims are included. (Yes)
- Proof sketches or intuitions are given for complex and/or
novel results. (Yes)
- Appropriate citations to theoretical tools used are given.
(Yes)
- All theoretical claims are demonstrated empirically to
hold. (Yes)
- All experimental code used to eliminate or disprove
claims is included. (Yes)
Does this paper rely on one or more datasets? (Yes)
If yes, please complete the list below.
- A motivation is given for why the experiments are conducted on the selected datasets. (Yes)
- All novel datasets introduced in this paper are included
in a data appendix. (Yes)
- All novel datasets introduced in this paper will be made
publicly available upon publication of the paper with a
license that allows free usage for research purposes. (Yes)
- All datasets drawn from the existing literature (potentially including authors’ own previously published work)
are accompanied by appropriate citations. (Yes)
- All datasets drawn from the existing literature (potentially including authors’ own previously published work)
are publicly available. (Yes)
- All datasets that are not publicly available are described
in detail, with explanation why publicly available alternatives are not scientifically satisficing. (NA)
Does this paper include computational experiments? (Yes)
If yes, please complete the list below.
- Any code required for pre-processing data is included in
the appendix. (Yes)
- All source code required for conducting and analyzing
the experiments is included in a code appendix. (Yes)
- All source code required for conducting and analyzing
the experiments will be made publicly available upon
publication of the paper with a license that allows free
usage for research purposes. (Yes)
- All source code implementing new methods have comments detailing the implementation, with references to
the paper where each step comes from. (Yes)
- If an algorithm depends on randomness, then the method
used for setting seeds is described in a way sufficient to
allow replication of results. (Yes)
- This paper specifies the computing infrastructure used
for running experiments (hardware and software), including GPU/CPU models; amount of memory; operating
system; names and versions of relevant software libraries
and frameworks. (Yes)
- This paper formally describes evaluation metrics used
and explains the motivation for choosing these metrics.
(Yes)
- This paper states the number of algorithm runs used to
compute each reported result. (Yes)
- Analysis of experiments goes beyond single-dimensional
summaries of performance (e.g., average; median) to include measures of variation, confidence, or other distributional information. (Yes)
- The significance of any improvement or decrease in
performance is judged using appropriate statistical tests
(e.g., Wilcoxon signed-rank). (Yes)
- This paper lists all final (hyper-)parameters used for each
model/algorithm in the paper’s experiments. (Yes)
- This paper states the number and range of values tried
per (hyper-) parameter during development of the paper,
along with the criterion used for selecting the final parameter setting. (Yes)
-----
| [
"Jiayu, Liu",
"Chaokun, Wang",
"Zhenya, Huang",
"Enhong, Chen",
"Xunpeng, Huang",
"Chengxiang, Zhai"
] | 2024-12-11T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2412.12157 | https://arxiv.org/abs/2412.12157 | https://www.semanticscholar.org/paper/f801a79de5aa817bfaac2f0aaab994f47cc2594a |
What Makes Large Language Models Reason in (Multi-Turn) Code Generation? | Prompting techniques such as chain-of-thought have established themselves as a popular vehicle for improving the outputs of large language models (LLMs). For code generation, however, their exact mechanics and efficacy are under-explored. We thus investigate the effects of a wide range of prompting strategies with a focus on automatic re-prompting over multiple turns and computational requirements. After systematically decomposing reasoning, instruction, and execution feedback prompts, we conduct an extensive grid search on the competitive programming benchmarks CodeContests and TACO for multiple LLM families and sizes (Llama 3.0 and 3.1, 8B, 70B, 405B, and GPT-4o). Our study reveals strategies that consistently improve performance across all models with small and large sampling budgets. We then show how finetuning with such an optimal configuration allows models to internalize the induced reasoning process and obtain improvements in performance and scalability for multi-turn code generation. | This study investigates the effects of a wide range of prompting strategies with a focus on automatic re-prompting over multiple turns and computational requirements, and reveals strategies that consistently improve performance across all models with small and large sampling budgets. | ## WHAT MAKES LARGE LANGUAGE MODELS REASON
### IN (MULTI-TURN) CODE GENERATION?
**Kunhao Zheng[1][,][2][∗], Juliette Decugis[1][∗], Jonas Gehring[1], Taco Cohen[1],**
**Benjamin Negrevergne[2], Gabriel Synnaeve[1]**
1Meta AI (FAIR), 2Paris Dauphine University - PSL
{kunhao, jdecugis, gab}@meta.com
ABSTRACT
Prompting techniques such as chain-of-thought have established themselves as a
popular vehicle for improving the outputs of large language models (LLMs). For
code generation, however, their exact mechanics and efficacy are under-explored.
We thus investigate the effects of a wide range of prompting strategies with a focus
on automatic re-prompting over multiple turns and computational requirements.
After systematically decomposing reasoning, instruction, and execution feedback
prompts, we conduct an extensive grid search on the competitive programming
benchmarks CodeContests and TACO for multiple LLM families and sizes (Llama
3.0 and 3.1, 8B, 70B, 405B, and GPT-4o). Our study reveals strategies that
consistently improve performance across all models with small and large sampling
budgets. We then show how finetuning with such an optimal configuration allows
models to internalize the induced reasoning process and obtain improvements in
performance and scalability for multi-turn code generation.
1 INTRODUCTION
The field of automatic code generation has made significant progress, particularly with the development of specialized Large Language Models (LLMs) (Chen et al., 2021; Li et al., 2022; Rozière
et al., 2024; OpenAI, 2023; AI @ Meta, 2024). While these models have demonstrated proficiency
in generating simple functions across various programming languages, there is still considerable
room for improvement in their ability to tackle more complex algorithmic reasoning tasks, such as
those found in competitive programming benchmarks like CodeContests (Li et al., 2022). Current
state-of-the-art approaches either rely on model ensembling and massive single-turn sampling (AlphaCode Team, 2023) or employ complex structured prompt chains for planning, editing and debugging
(Ridnik et al., 2024; Islam et al., 2024). In contrast, multi-turn code generation strikes a balance
between single-turn approaches and prompt chains, where code is built upon previous outputs in a
dialog-like structure. This approach is motivated by applications such as LLM-based agents (Yao
et al., 2023b), where models are tasked with decision-making and interacting with environments. In
code generation, multi-turn approaches have primarily been explored on simple benchmarks or in
small sample regimes due to their association with self-repair techniques (Olausson et al., 2024; Chen
et al., 2024; Shinn et al., 2023; Zhong et al., 2024).
In this paper, we systematically deconstruct the components of previous research on prompting
techniques and propose a unified framework for multi-turn code generation. Our objective is to
establish a comprehensive and strong baseline, and to explore behavior and limitations across various
sample regimes. Our focus on competition-level coding benchmarks and sample budgets is motivated
as follows: (1) Popular methods such as chain of thought (Wei et al., 2022, CoT) yield improvements
on reasoning-heavy tasks. However, they are designed to elicit reasoning traces for maximizing
single-turn performance and are not inherently multi-turn. Competition-level benchmarks require
algorithmic reasoning and thus provide an ideal testbed to evaluate whether CoT techniques can be
extended beyond single-turn reasoning. (2) Recent studies suggest that the performance gains from
self-repair are often modest when considering their generation cost (Olausson et al., 2024) and that
repeated single-turn sampling serves as a strong baseline (Brown et al., 2024). As such, the trade-off
_∗Equal contribution._
Figure 1: Our framework for evaluating LLM multi-turn code generation techniques. Top: In
the default multi-turn setting, given a programming problem, the model generates a code solution,
interacts with the runtime environment to gather execution feedback and retries in case of failure.
**Bottom: On top of the default setting, we gather reasoning (Reason.) prompts, instruction (Inst.)**
prompts, and execution feedback prompts. The problem statement is augmented with a reasoning
prompt. After generating an answer to the reasoning prompt, an instruction prompt determines how
program code should be generated. The execution feedback prompts vary in granularity, ranging from
a binary pass or fail indicator to detailed tracing information.
between single-turn and multi-turn approaches, and the optimal allocation of resources between them,
remains under-explored.
Our framework (Figure 1) enables mix-and-match combinations of single- and multi-turn code
generation and chain-of-thought (CoT) techniques[1]: prompts that induce reasoning, such as
predicting problem attributes or writing natural language solutions first, and instructions that prompt
different programming styles such as including comments or helper functions. Finally, we integrate
_execution feedback from intermediate solutions to allow for code repair. We conduct a comprehensive_
experimental survey across different benchmarks, LLM families and sizes, as well as sample regimes.
Our analysis yields several key insights:
1. In the single-turn setting, combining reasoning prompts and instruction prompts achieves
the best performance, and is more beneficial on larger models or harder problems. We also
identify CoTs that degrade performance (Section 5.1).
2. The multi-turn setting alone brings modest gains and is sometimes worse than its single-turn
counterpart under equal sampling budgets. The combination with CoT provides a significant
performance boost on all models we study. Interestingly, detailed execution feedback
prompts do not always translate to improved performance (Section 5.2). We show that this
can be attributed to reduced diversity of generated programs which results in performance
drops for large sample budgets.
3. LLMs can be instilled with reasoning behavior by finetuning on multi-turn CoT data (Section
5.3). The resulting model surpasses our best prompting configurations even without explicitly
asking for CoTs during inference.
1We use the term "chain of thought" to refer to a broad family of prompting methods eliciting intermediate
steps before or during code generation.
-----
2 BACKGROUND
2.1 SINGLE-TURN VS. MULTI-TURN GENERATION: PROBLEM SETTING
We assume a coding problem D = {s, u, t}, where s is the problem statement in natural language
(e.g. see Figure 1), u is a set of public tests, and t is a set of private tests. A given code sample c is
considered correct if it passes all tests, or incorrect otherwise. Let π denote an LLM that is able to
produce a code sample c for D from a user prompt p which includes the problem statement s. In the
single-turn setting we thus obtain a code sample c ∼ _π(· | p)._
In multi-turn code generation, we can distinguish between a Natural-Language-to-Code (NL →
Code) task in the first turn and Code-to-Code (Code → Code) generation in subsequent turns. For
a given problem, we generate a sequence of intermediary code samples c1, . . ., cT rather than just
one. After each turn i, the code sample ci is fed back into the model π together with an execution
_feedback prompt to obtain the next sample ci+1. This process is repeated T times until we either pass_
all public tests or until a maximum number of turns N is reached. More formally, we can obtain
every intermediary sample ci, including the final code solution cT, as follows:
_ci ∼_ _π(· | p1, c1, p2, . . ., ci−1, pi)._
In this setting, the first prompt p1 is the initial user prompt including the problem statement, and each
_pi for i > 1 is an execution feedback prompt containing the runtime result with error information or_
traceback optionally attached.
In the remainder of this study, the sequence (p1, c1, ..., pT, cT ) is denoted a trajectory, and the final
code sample cT is called the submission. Only the code sample cT is tested against the private tests t
for correctness (i.e. intermediary code samples ci will only be tested against public tests u). Note that
we sample not just one but several trajectories in parallel, starting with the same initial prompt p1.
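A minimal sketch of this multi-turn loop is shown below; `generate` and `run_public_tests` are placeholder helpers for an LLM call and a sandboxed test runner, not part of any released code.

```python
def multi_turn_trajectory(problem_prompt, generate, run_public_tests, max_turns=3):
    """Generate up to max_turns code attempts, feeding execution feedback back
    into the dialog after each failed attempt (Section 2.1)."""
    dialog = [{"role": "user", "content": problem_prompt}]       # p1
    last_code = None
    for _ in range(max_turns):
        last_code = generate(dialog)                              # c_i ~ pi(. | p1, c1, ..., p_i)
        dialog.append({"role": "assistant", "content": last_code})
        passed, feedback = run_public_tests(last_code)            # only public tests u are visible
        if passed:
            break
        dialog.append({"role": "user", "content": feedback})      # p_{i+1}: execution feedback
    return dialog, last_code                                       # last_code is the submission c_T
```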
2.2 EVALUATION METRICS
We are interested in finding a correct solution to a given programming problem with a fixed budget, i.e., with a fixed number of code samples. For estimating the success rate of generated code samples, pass@k is a widely used metric (Chen et al., 2021). For a problem P and given a budget of k samples, pass@k is the expectation that at least one sample is correct, i.e., that it passes all tests.

**Limitations of pass@k** Pass@k ignores computational requirements and thus puts single-turn evaluations at a disadvantage. In multi-turn settings, solutions are obtained via several generations (i.e., LLM calls) and hence at a higher cost, rendering these two setups not directly comparable (Kapoor et al., 2024).

In this study, we opt to measure performance via pass n@k (Li et al., 2022) rather than pass@k for a fair comparison of techniques. Pass n@k estimates the success rate of a model π on a problem P using k generations but at most n submissions; it is the expectation that out of n submissions one of them is correct (Appendix A). Following Li et al. (2022), we select n submissions based on public test performance. Note that for n = k, both metrics are equivalent. For each benchmark, we report the average pass n@k or pass@k over all problems.

Figure 2: **Scaling number of turns is not compute optimal.** Pass@10 (Top) and pass 10@100 (Bottom) on CodeContests test set when increasing the number of turns with Llama 3.1 70B.

Figure 2 compares pass@k and pass n@k when measuring performance in a multi-turn setting. Pass@10 (Top) keeps increasing if we increase the maximum number of turns. However, pass 10@100 (Bottom) shows that compute optimality is lost after 3 turns. Given a budget of 100 samples with 10 programs selected as submissions, the optimal allocation of compute is obtained by generating trajectories with 3 turns
-----
**_SINGLE-TURN_** — **_MULTI-TURN_**

**Reasoning Prompts (NL → NL):** CoA [1], Self-discover [2], Decomposed prompting [3]. Example natural language prompts: "Explain input output pairs", "Rate the problem difficulty", "Write a solution in natural language", "Reflect on the problem, and describe it in your own words".

**Instruction Prompts (NL → Code):** Parsel [4], LLM assisted code cleaning [5]. Example code prompts: "Generate a naive solution.", "Propose a clever algorithm.", "Describe the necessary intermediate variables.", "Write the docstring and signatures of helper functions".

**Feedback Prompts:** Learn to Debug (LDB) [6], NeXT [7], Self-Debugging [8], CodeChain [9], Self-refine [10], REFINER [11], CodeRepair [12], General to Specific Prompting [13]. Example execution feedback: "Your code is wrong.", "You failed tests. Expected: 5, got: 2", "You failed the tests: for input 1 expected: 5 got: 2", "You passed some tests: for input 0 expected: 8 and got: 8", "Here are intermediate values, analyze and fix your code."
Figure 3: Prompting space explored in our survey. We explore chain of thought prompts at
three different levels: before the first code generation (reasoning prompts), with code generation
(instruction prompts), and after the first code generation (execution feedback). The corresponding
works from the single-turn and multi-turn reasoning and code generation literature are: [1] Gao et al.
(2024), [2] Zhou et al. (2024), [3] Khot et al. (2023), [4] Zelikman et al. (2023), [5] Jain et al. (2024),
[6] Zhong et al. (2024), [7] Ni et al. (2024), [8] Chen et al. (2024), [9] Le et al. (2024), [10] Madaan
et al. (2024), [11] Paul et al. (2024), [12] Tang et al. (2024), [13] Li et al. (2023a).
at most. As such, throughout this paper, we favor pass n@k and report pass@k only when comparing
single-turn results exclusively.
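For reference, pass n@k can be estimated by Monte Carlo from a pool of sampled trajectories, selecting the n submissions with the best public-test results as described above; the sketch below is a simplified illustration of the metric rather than the exact estimator of Li et al. (2022).

```python
import random

def pass_n_at_k(samples, n, k, trials=10_000):
    """samples: list of (public_tests_passed, is_correct) pairs, one per generated trajectory.
    Estimates the probability that, drawing k trajectories and submitting the n with the
    highest public-test pass counts, at least one submission passes all private tests."""
    successes = 0
    for _ in range(trials):
        draw = random.sample(samples, k)
        draw.sort(key=lambda s: s[0], reverse=True)    # rank by public-test performance
        if any(correct for _, correct in draw[:n]):    # n submissions, private-test correctness
            successes += 1
    return successes / trials

# Toy pool of 200 trajectories: (number of public tests passed, passes all private tests).
pool = [(random.randint(0, 3), random.random() < 0.2) for _ in range(200)]
print(pass_n_at_k(pool, n=10, k=100))
```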
3 PROMPTING AND FEEDBACK SPACE
We map the space of prompting techniques studied in our experimental survey in Figure 3. As CoT
can intervene at different times in code generation, we categorize reasoning prompts (NL → NL) that
elicit understanding of the problem before code generation, and instruction prompts (NL → Code)
that guide the code output to enhance readability and modularity. These prompts can be applied in
single-turn and multi-turn approaches.
In the multi-turn setting, we also introduce execution feedback prompts directly harvested from the
runtime environment, serving as additional information for the model to self-repair within turns. We
aim to determine the type of feedback which is most effective on competitive programming benchmarks
in the large sample regime. We thus evaluate several types of feedback, ranging in granularity:
- Binary feedback: A simple pass/fail indicator.
- Failed tests feedback: Provides expected and actual values for failed unit tests, along with
tracebacks if any runtime errors are encountered.
- Failed & passed tests feedback: Expands on failed tests feedback by also including input/output
information for passing tests.
- LDB feedback (Zhong et al., 2024): Offers debugger information, printing intermediate variable
values and separating the code into blocks. The model must identify at which block the code failed
and attempt to fix it.
CoT and execution feedback are incorporated into the generation through specific prompts as illustrated in Figure 1 (Bottom). As we will show in Section 5.2, different types of execution feedback
induce different multi-turn behavior that can be classified as either exploratory or exploitative.
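As an illustration, the sketch below formats the binary, failed tests, and failed & passed tests feedback variants from unit-test results; the exact prompt wording used in the paper lives in its appendix, so the strings here are only representative.

```python
def format_feedback(results, granularity="failed_tests"):
    """results: list of dicts like {"input": ..., "expected": ..., "got": ..., "passed": bool}.
    Returns an execution feedback prompt of the requested granularity."""
    failed = [r for r in results if not r["passed"]]
    if not failed:
        return "Your code passed all public tests."
    if granularity == "binary":
        return "Your code is wrong."
    lines = ["Your code failed the following tests:"]
    for r in failed:
        lines.append(f"- input {r['input']!r}: expected {r['expected']!r}, got {r['got']!r}")
    if granularity == "failed_and_passed_tests":
        for r in results:
            if r["passed"]:
                lines.append(f"- input {r['input']!r}: expected {r['expected']!r} and got {r['got']!r} (passed)")
    return "\n".join(lines)

print(format_feedback([{"input": "1", "expected": "5", "got": "2", "passed": False}]))
```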
-----
4 EXPERIMENTAL SETTING
**Models** We perform experiments with the Llama Instruct series of LLMs, including Llama 3.0 and
3.1, 8B and 70B models (AI @ Meta, 2024). We use Llama 3.1 405B and GPT-4o in small sampling
regimes only due to compute constraints.
**Single-turn** Our grid search comprises 8 reasoning prompts and 6 instruction prompts, detailed in
Appendix D. The reasoning prompts elicit intermediate steps either in natural language or with partial
code. The instruction prompts either increase code readability ("describe"), break down solution into
modular code ("modularity") or bias the type of solution ("solution"). Although we perform one more
step of LLM inference for the reasoning prompts, we do not consider it an additional turn as our
study compares the number of code attempts per problem and effect of adding different types of extra
tokens. We argue that this is equivalent to a single LLM call which groups all the reasoning prompts
together, modulo number of LLM forward passes. We generate with nucleus sampling (Holtzman
et al., 2020, top-p=0.95) and a temperature of 1.0 to encourage output diversity.
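The single-turn CoT variants can be viewed as a short prompt pipeline; the sketch below shows one plausible composition, with `generate` standing in for an LLM call and the prompt strings drawn from the categories of Figure 3 rather than copied from the appendix.

```python
def single_turn_with_cot(problem, generate,
                         reasoning_prompt="Reflect on the problem, and describe it in your own words.",
                         instruction_prompt="Propose a clever algorithm, then write the full solution."):
    """One code attempt with a reasoning step (NL -> NL) before code generation (NL -> Code)."""
    dialog = [{"role": "user", "content": f"{problem}\n\n{reasoning_prompt}"}]
    reasoning = generate(dialog)                        # natural-language reasoning, no code yet
    dialog.append({"role": "assistant", "content": reasoning})
    dialog.append({"role": "user", "content": instruction_prompt})
    return generate(dialog)                             # final code sample c
```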
**Multi-turn** When performing multiple consecutive attempts at solving a coding problem, we set
the code attempt limit to 3; this is motivated by the multi-turn results in Section 2.2 which reveal
three turns as compute-optimal. We take the best reasoning prompts from the single-turn setting and
combine them for up to 3 reasoning steps before code generation. We also introduce the CoT-retry
setup which allows for adaptive inference budget based on problem difficulty. In the first turn, we
omit CoT prompts. If the first solution fails, i.e., the problem is harder, we prompt the LLM with a
combination of execution feedback and a reasoning prompt. We employ a different prompt for each
turn (see Appendix D.3). We also ablate different granularities of execution feedback. We do not
include CoT prompts in this feedback comparison to isolate the effect of different feedback types.
**Rejection Sampling Finetuning** With the Llama 3.1 70B model, we use the CoT-retry strategy to
generate 3-turn trajectories on the CodeContests training set. We filter out trajectories with incorrect
final code and perform supervised finetuning on the resulting data (details in Appendix B.2).
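A minimal sketch of this rejection sampling step, assuming trajectories have already been generated with the CoT-retry strategy: only dialogs whose final code passes all tests are kept as finetuning examples.

```python
def build_rft_dataset(trajectories, passes_all_tests):
    """Keep only multi-turn dialogs whose final code sample is correct, and format them
    as supervised finetuning examples."""
    dataset = []
    for dialog in trajectories:                  # dialog: list of {"role", "content"} turns
        final_code = dialog[-1]["content"]
        if not passes_all_tests(final_code):
            continue                             # rejection sampling: discard failed trajectories
        dataset.append({"messages": dialog})     # finetune on the full successful trajectory
    return dataset
```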
**Benchmarks** We conduct our experiments on two competitive coding benchmarks in the zero-shot
setting: (1) CodeContests (Li et al., 2022) contains 13k programming problems in the training set and
117/165 problems in the valid/test set. Each problem contains public tests, private tests, and generated
tests. We use public tests to provide execution feedback in the multi-turn setting and use all available
tests to evaluate the final submission. (2) TACO (Li et al., 2023b) is a collection of problems sourced
from CodeContests, APPS (Hendrycks et al., 2021), and various programming contest platforms.
The test set is split into 5 distinct difficulty levels: easy, medium, medium-hard, hard, and very-hard,
with each level comprising 200 problems. This stratification allows us to examine the performance of
different prompting strategies across difficulty levels. We use the first test case as the public tests.
5 RESULTS
In this section, we first present model performance with the respective best CoT variants in Table 1
and 2 and then conduct a series of detailed experiments to better understand the performance impact of
individual prompting methods. We structure our presentation by key findings outlined in Introduction.
5.1 SINGLE-TURN SETTING: COT WORKS BEST FOR HARD PROBLEMS, LARGE MODELS, HIGH
SAMPLING
We first investigate the impact of various CoT prompting strategies on models in the single-turn
setting. There will be no execution feedback prompts. Therefore, our grid search involves searching
in the space of reasoning prompts (NL → NL) and instruction prompts (NL → Code).
**Reasoning and instruction prompts can work together.** We first compare the effect of various
_reasoning prompts, instruction prompts as well as combinations of both. Synthesized results are_
presented in Table 3, and we refer to Appendix C.1 for the complete set of experiments that led to
Table 3. An interesting observation is that even the best performing reasoning and instruction prompts
for pass@100 can decrease model performance in small sampling regimes (pass@1). Although
_reasoning prompts provide larger gain than instruction prompts (with the exception of Llama 3.1_
70B), combining both results in the best performance.
-----
Table 1: Up to +10% with multi-turn CoT. Pass n@k (%) on CodeContests test set with high
temperature (1.0) and large sampling budget. We use a maximum of 3 code attempts (i.e. max 3
turns) in the multi-turn setting with the "failed tests" feedback. The pass n@k is calculated from 200
trajectories for both single-turn and multi-turn settings. We also report the pass rates for Llama 3.1
70B after Rejection Sampling Fine-tuning (RFT) (Section 5.3).
| Model / Variant | 1@3 | 10@30 | 33@100 | 100@300 |
|---|---|---|---|---|
| Llama 3.0 8B | 2.9 | 8.0 | 12.6 | - |
| + CoT | 3.4 (+0.5) | 11.7 (+3.7) | 17.3 (+4.7) | - |
| + Multi-turn | 2.4 (−0.5) | 8.0 (+0.0) | 12.8 (+0.2) | 16.7 |
| + Multi-turn CoT | 2.8 (−0.1) | 9.8 (+1.8) | 14.9 (+2.3) | 19.4 |
| Llama 3.0 70B | 9.6 | 18.9 | 23.1 | - |
| + CoT | 10.4 (+0.8) | 26.0 (+7.1) | 33.0 (+9.9) | - |
| + Multi-turn | 10.1 (+0.5) | 21.0 (+2.1) | 26.7 (+3.6) | 32.7 |
| + Multi-turn CoT | 11.1 (+1.5) | 26.5 (+7.6) | 34.3 (+11.2) | 40.4 |
| Llama 3.1 8B | 7.7 | 18.2 | 23.8 | - |
| + CoT | 8.0 (+0.3) | 19.5 (+1.3) | 26.1 (+2.3) | - |
| + Multi-turn | 7.0 (−0.7) | 18.8 (+0.6) | 24.5 (+0.7) | 30.4 |
| + Multi-turn CoT | 6.9 (−0.8) | 19.4 (+1.2) | 26.0 (+2.2) | 31.5 |
| Llama 3.1 70B | 24.1 | 42.3 | 49.8 | - |
| + CoT | 26.4 (+2.3) | 47.8 (+5.5) | 54.8 (+5.0) | - |
| + Multi-turn | 24.1 (+0.0) | 43.8 (+1.5) | 51.6 (+1.8) | 56.2 |
| + Multi-turn CoT | 27.7 (+3.6) | 48.4 (+6.1) | 55.3 (+5.5) | 59.6 |
| Llama 3.1 70B (RFT) | 26.2 | 45.1 | 50.9 | - |
| + Multi-turn | 29.7 (+3.5) | 50.5 (+5.4) | 57.2 (+6.3) | 61.1 |
Table 2: Benchmarking of CoT across models: GPT-4o and Llama. Pass 1@1 (%) and pass 1@3
(%) with low temperature (0.2). As models become more capable, repeated sampling surpasses a
straightforward extension to multi turn (e.g. GPT-4o) or single-turn CoT (e.g. Llama 3.1 405B). A
tailored multi-turn CoT, however, improves pass 1@3 performance across all models.
| Variants | GPT-4o 1@1 | GPT-4o 1@3 | Llama 3.1 70B 1@1 | Llama 3.1 70B 1@3 | Llama 3.1 405B 1@1 | Llama 3.1 405B 1@3 |
|---|---|---|---|---|---|---|
| Single-turn | 17.0 | 27.6 | 23.2 | 27.3 | 27.8 | 32.9 |
| + CoT | 25.5 (+8.5) | 29.0 (+1.4) | 25.5 (+2.3) | 28.9 (+1.6) | 25.1 (−2.7) | 31.8 (−1.1) |
| + Multi-turn | - | 23.1 (−4.5) | - | 29.5 (+2.2) | - | 35.4 (+2.5) |
| + Multi-turn CoT | - | **31.5 (+3.9)** | - | **31.5 (+4.2)** | - | **40.1 (+7.2)** |
Table 3: Combining reasoning and instruction works best as compared to each individually for
single-turn CodeContests test set (chosen based on pass@100 performance per model). In the best
categories, results worse than the baseline are underlined.
| | Llama 3.0 8B pass@1 | Llama 3.0 8B pass@100 | Llama 3.0 70B pass@1 | Llama 3.0 70B pass@100 | Llama 3.1 8B pass@1 | Llama 3.1 8B pass@100 | Llama 3.1 70B pass@1 | Llama 3.1 70B pass@100 |
|---|---|---|---|---|---|---|---|---|
| Baseline | 1.6 | 12.3 | 3.8 | 23.8 | 3.8 | 22.8 | 16.7 | 48.9 |
| Worst reasoning | 1.4 | 12.9 | 5.7 | 21.8 | 4.0 | 23.4 | 15.6 | 47.4 |
| Worst instruction | 1.4 | 11.3 | 3.4 | 25.1 | 3.7 | 20.9 | 14.9 | 48.4 |
| Worst Combination | 1.4 | 11.8 | 5.6 | 21.0 | 2.9 | 21.1 | 13.2 | 43.5 |
| Best reasoning | 1.8 | 15.7 | 7.0 | 30.4 | 4.1 | 25.7 | 15.7 | 52.2 |
| Best instruction | 1.3 | 13.5 | 5.5 | 29.6 | 3.6 | 24.6 | 16.8 | 53.8 |
| Best Combination | 1.5 | **17.3** | 5.3 | **33.1** | 4.0 | **26.1** | 16.1 | **54.1** |
**CoT is most helpful for large models.** With the smaller Llama 3.0 8B and Llama 3.1 8B, we
observe from Table 3 that the best combination of reasoning and instruction prompts provides
relatively small gains of 5.0% and 3.3% pass@100 on the CodeContests test set compared to the
-----
Figure 4: CoT helps most on hard examples. From a set of 8 reasoning and 6 instruction prompts
commonly used on competitive coding benchmarks, we extract the pass rate of the best and worst
prompts amongst all 63 = (8 + 1) × (6 + 1) combinations (including no reasoning or no instruction)
for Llama 3.0 8B. We compare on different difficulty splits of the TACO dataset. The relative gain
from a tailored CoT increases with problem difficulty and sampling size.
improvements of 9.3% and 5.2% from the corresponding 70B models. Interestingly, we found that
not all sets of prompts are beneficial. The worst combination degrades the pass@100 of Llama 3.1
70B by up to 5.4%. CoT makes performance worse if the model fails to follow the instructions or
makes the LLM propose a sub-optimal plan. Sub-optimal plans are usually brute force approaches to
solve the problem which do not fit the time limits constraint (see Appendix E for an example).
**CoT is most helpful for harder problems.** With the TACO dataset, which provides a difficulty
split, we can observe that CoT does help smaller models on harder problems. Figure 4 demonstrates
that the gap between baseline performance (No CoT) and the best reasoning and instruction prompt
combination widens with problem difficulty. For example, the pass@100 of Llama 3.0 8B nearly
doubles with CoT on the very-hard test split (2.1% → 3.9%). We show in Appendix C.3 that this
observation generalizes to Llama 3.1 8B and 70B model.
**Prompt efficacy is model and sample size dependent.** No single reasoning and instruction combination works best across sampling sizes and models (see Appendix C.2 for a detailed analysis). Reasoning prompts that simplify the problem (e.g., self-reflection, explain input-output pairs) benefit smaller models (8B models), whereas larger models (70B, 405B, GPT-4o) gain most from generating parts of the solution (e.g., write function docstrings). "Solution"-based instruction prompts are the most efficient across models, specifically for the Llama 3.1 series, as shown in Figure 5.
[Figure 5 plot: Δ from baseline at pass@1 and pass@100 for the "describe", "modularity", and "solution" instruction prompt groups across Llama 3.1 models.]

Figure 5: Solution-based instruction prompts work best across Llama 3.1 models. We separate instruction prompts into "describe" (e.g., add comments, imports), "modularity" (e.g., add helper functions) and "solution" (e.g., write a naive solution, propose a clever algorithm). The performance difference (∆) is normalized with respect to the baseline and standard deviation per pass rate.
5.2 MULTI-TURN SETTING: SELF-REPAIR LIMITED WITHOUT COT AND PROPER FEEDBACK
We summarize our multi-turn results in Table 1. With a fixed number of samples, i.e., k in pass n@k,
multi-turn alone provides modest gains only (usually less than +2%) and sometimes even reduces
pass 1@3 performance compared to drawing independent samples in single-turn mode. Notably,
this is the case for smaller models (Llama 3.0 and 3.1 8B). In this section, we take a closer look at
performance drops in the multi-turn setting and explore methods that can take advantage of accessing
previous wrong solutions.
-----
[Figure 6 plot: density of consecutive-code similarity scores for Llama 3.1 8B and 70B under binary feedback, failed test feedback, failed & passed test feedback, and LDB feedback.]
Figure 6: Fine-grained feedback induces exploitative behavior. Distribution of consecutive code similarity scores within a dialog for different types of feedback, obtained from Llama 3.1 8B and 70B samples (temperature 1.0). The higher the similarity scores between consecutive codes in the same dialog, the more the model exhibits exploitative behavior.
**Reasoning prompts are not additive.** It is tempting to consider that stacking more reasoning prompts before code generation will further guide the model towards correct solutions. For example, prompts might increase the granularity of reasoning: self-reflect on the problem, explain the input/output pairs, write helper functions, and finally output a full code solution. However, we empirically find that across models, one step of reasoning provides the most significant boost. The performance plateaus or even decreases with two or three steps. Increasing the number of reasoning steps hurts both Llama 3.0 and 3.1 models (see Table 5 in Appendix C.4). For the best models, a single step with a reasoning prompt is most beneficial.
[Figure 7 plot: pass rate vs. maximum number of turns for the base model, base model + CoT-retry, and RFT model.]

Figure 7: Reasoning and execution feedback prompts, and RFT, enhance both single- and multi-turn performance for Llama 3.1 70B.
**_CoT-retry works best._** For Llama 3.0 models, simply extending the single-turn reasoning and instruction prompts to the multi-turn setting yields superior performance (reported as "Multi-turn CoT" in Table 1). However, as models become more capable, an increasing number of problems in CodeContests are solved in the first attempt without specific prompts. CoT-retry only reasons when the first attempt fails and therefore works best across Llama 3.1 models for all sampling sizes and benchmarks ("Multi-turn CoT" in Table 1). Figure 7 decomposes its per-turn performance. When extending the number of turns from 2 to 3, Llama 3.1 70B alone shows diminishing gains, while the combination with CoT-retry still increases performance by a large margin.
**Execution feedback granularity determines exploration-exploitation behavior.** Given previous incorrect code and execution feedback, subsequent attempts can consist of a fresh attempt (exploration) or of updates to prior solutions based on feedback (exploitation). We quantify this behavior by computing similarity scores between two consecutive solutions (details in Appendix B.1). Figure 6 shows that with more fine-grained information provided via execution feedback, models exhibit exploitative behavior (high similarity scores). Exploitation can be a desired property on relatively easy problems where errors are due to simple bugs. However, we posit that diversity is key to improving performance on difficult problems, i.e., exploratory behavior within a trajectory based on the execution feedback prompts. This matches our experimental results: simple execution feedback (e.g., binary, failed tests) provides optimal performance for most models (Appendix C.6).
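As an illustration of what "binary", "failed tests", and "failed & passed tests" feedback might look like in practice, the sketch below shows one way such feedback strings could be assembled from unit-test results; the exact templates are given in Appendix D, and the function and field names here are assumptions for illustration only.

```python
# Illustrative sketch of execution feedback of increasing granularity; the
# function and field names are assumptions, not the paper's actual templates.
def build_execution_feedback(results, granularity="failed_tests"):
    """`results` is a list of dicts with keys "input", "expected", "output", "passed"."""
    failed = [r for r in results if not r["passed"]]
    passed = [r for r in results if r["passed"]]
    if granularity == "binary":
        return "Your code failed some of the tests." if failed else "All tests passed."
    lines = [f"- input `{r['input']}` expected `{r['expected']}` but got `{r['output']}`"
             for r in failed]
    if granularity == "failed_and_passed_tests":
        lines += [f"- input `{r['input']}` passed" for r in passed]
    return "Your code failed the following tests:\n" + "\n".join(lines)
```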
5.3 COT REJECTION SAMPLING FINE-TUNING: MODELS CAN INTERNALIZE REASONING
We investigate whether LLMs can benefit from finetuning on reasoning traces obtained via CoT
prompting. We thus perform Rejection Sampling Finetuning (RFT) on Llama 3.1 70B, where the
reasoning, instruction and execution feedback prompting strategies we consider act as policy improvement operators: they elicit the model’s reasoning ability and produce a higher number of trajectories
-----
Table 4: Multi-turn CoT and RFT generalize to the TACO test set. Pass n@k (%) of Llama 3.1 70B on the multi-turn TACO test set with temperature 1.0. We use the best multi-turn CoT found on CodeContests. We use the model RFTed on the CodeContests training set (after decontamination, details in Appendix F) and report its performance directly on TACO without CoT.

| Model | easy 1@3 | easy 100@300 | medium 1@3 | medium 100@300 | medium_hard 1@3 | medium_hard 100@300 | hard 1@3 | hard 100@300 | very_hard 1@3 | very_hard 100@300 |
|---|---|---|---|---|---|---|---|---|---|---|
| Llama 3.1 70B | 31.6 | **60.2** | 14.2 | 44.6 | 9.5 | 36.2 | 4.4 | 20.6 | 1.8 | 9.0 |
| + Multi-turn CoT | 32.3 | 59.8 | 15.0 | **46.2** | 10.8 | 38.5 | 5.8 | 22.8 | 2.6 | **11.8** |
| Llama 3.1 70B (RFT) | **34.1** | 58.9 | **18.0** | 45.3 | **13.0** | **39.4** | **8.1** | **23.3** | **3.5** | **12.0** |
[Figure 8 plot: histograms of consecutive-code similarity scores within correct and within incorrect trajectories, before and after multi-turn CoT RFT.]

Figure 8: RFT makes the model produce more diverse code within trajectories, as shown by the consecutive codes' similarity scores before/after RFT on the CodeContests test set, evaluated with multi-turn no CoT. This shift towards more exploratory behavior contributes most of the gain on the correct trajectories.
[Figure 9 plot: pass k@3k vs. k on the CodeContests test set for the base model without CoT, the base model with the best CoT, and the RFT model without CoT.]

Figure 9: Llama 3.1 70B's pass k@3k on CodeContests. CoT-retry increases the performance in large sampling regimes. RFT transfers this reasoning ability to the no-CoT setting and lifts the pass rate curve across sampling budgets.
with correct submissions. Given the low variance across different feedback types (Table 6 in Appendix C.6), we opt for simplicity and use the "failed tests" execution feedback combined with CoT-retry for data generation. More specifically, we improve a model π by 1) collecting a dataset of correct trajectories sampled from π with CoT enabled at inference time, 2) removing the CoT prompt in the collected trajectories, and 3) finetuning π with the standard next-token prediction objective. With this strategy, we can now obtain CoT-level trajectories without adding specific prompts at inference time.
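A minimal sketch of this three-step recipe is given below. The helper callables (multi-turn trajectory sampling with CoT, test checking, CoT-prompt stripping, and next-token finetuning) are assumptions supplied by the caller, not the paper's actual code.

```python
# Minimal sketch of the RFT recipe above; all helpers are caller-provided assumptions:
# sample_trajectory (multi-turn generation with CoT-retry and "failed tests" feedback),
# passes_all_tests, strip_cot_prompts, and finetune_next_token.
def rejection_sampling_finetune(model, problems, sample_trajectory, passes_all_tests,
                                strip_cot_prompts, finetune_next_token, n_samples=200):
    dataset = []
    for problem in problems:
        for _ in range(n_samples):
            traj = sample_trajectory(model, problem, use_cot=True)  # 1) sample with CoT enabled
            if passes_all_tests(traj):                              # keep only correct trajectories
                dataset.append(strip_cot_prompts(traj))             # 2) remove the CoT prompts
    return finetune_next_token(model, dataset)                      # 3) standard next-token finetuning
```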
Figure 9, Table 1, and Table 4 show that the RFT model provides additional gains over inference methods across sampling sizes and datasets. Beyond performance, RFT on multi-turn CoT improves sampling diversity (Figure 8) and self-repair capacities, especially for long trajectories (Figure 7). Behavior-wise, we show in the table in Appendix C.5 that RFT results in model responses with increased textual content.
6 RELATED WORK
**Chain of Thought with Code** Chain of Thought (CoT) enables step-by-step thinking for LLMs to solve mathematical word problems in either few-shot (Wei et al., 2022) or zero-shot (Kojima et al., 2022) settings. Many variants, e.g., Tree of Thought (Yao et al., 2023a), have since emerged in code generation. Chen et al. (2023b) and Gao et al. (2023) translate natural language mathematical problems into executable code for the model to separate reasoning from computation. These methods rely on the LLM outputting correct code to represent a problem. We see this work as tangential to ours, as boosting LLM coding performance will also help on overall reasoning tasks. Higher levels of abstraction (Khot et al., 2023; Zhou et al., 2024; 2023; Zelikman et al., 2023; Jain et al., 2024) and
-----
self-repair techniques (Paul et al., 2024; Li et al., 2023a; Ridnik et al., 2024) have been proposed. Beyond inference methods, Wadhwa et al. (2024); Yu et al. (2024); Zelikman et al. (2022); Hosseini et al. (2024); Pang et al. (2024) explore new training algorithms and loss functions to learn from CoT. In comparison, we bring novelty to the type of CoT used in training (multi-turn) and rely on simple Rejection Sampling Fine-tuning (RFT) (Touvron et al., 2023; Yuan et al., 2023; AI @ Meta, 2024). RFT has been shown to achieve good performance with less data compared to SFT (Setlur et al., 2024).
**Execution feedback** Currently, LLMs struggle to understand code execution feedback (Gu et al., 2024), as this type of data is rarely present in their training set. Zhong et al. (2024) and Ni et al. (2024) try to mimic "print debugging" to convey intermediate code steps to the LLM. Olausson et al. (2024) found that the effect of self-repair largely depends on the text quality of the subsequent reasoning and therefore use only textual feedback. In our setting, we are interested in feedback that can be directly harvested from the execution environment. Shi et al. (2022); Li et al. (2022); Chen et al. (2023a) likewise proposed unit test generation as a way to increase coverage with execution feedback. Adding test generation to our pipeline would be an interesting avenue for further work.
**Inference Optimization** With the rise of LLM agents (Kapoor et al., 2024) and the scaling effect
of test time techniques (Li et al., 2022; Snell et al., 2024; Brown et al., 2024), inference optimization
against compute resources becomes increasingly relevant. Similar to our pass n@k argument in
Section 2.2, Kapoor et al. (2024) discuss the importance of controlling for generation cost in AI
agent evaluations.
7 LIMITATIONS
In our multi-turn setting, we do not explore further branching at the second or third turn, i.e., more
complex tree structures (Tang et al., 2024) or in general inference-based search approaches (Snell
et al., 2024), e.g., with look-ahead or backtracking, as we focus on the effect of additional CoT token generation. Although a maximally fair comparison (at the cost of complexity) should account for total
input and output tokens (Olausson et al., 2024) as well as model size (Hassid et al., 2024), we believe
pass n@k, which stresses the number of code attempts, constitutes a simple yet superior alternative
to pass@k. Our RFT is similar to Expert Iteration (Anthony et al., 2017) and ReST (Gulcehre
et al., 2023) when considering a single iteration only. Future work could benefit from exploring
more advanced training strategies in conjunction with multi-turn CoT to potentially further enhance
reasoning capabilities. Finally, we speculate that the effectiveness of different prompts for different
LLM families (particularly the Llama 3.0 vs. 3.1 series vs. GPT-4o) could be attributed to the mixture
of finetuning data (Chung et al., 2022). Exploration of this topic is beyond the scope of this paper.
8 CONCLUSION
In this work, we present a comprehensive experimental survey on various reasoning, instruction
and execution feedback prompts in the single-turn and multi-turn code generation task at scale.
Our results on two competitive programming benchmarks, CodeContests and TACO, suggest that
incorporating CoT techniques, originally designed for single turns, and execution feedback prompts
into the multi-turn setting is non-trivial. Due to the difficulty of the benchmarks, a major contributor
to performance is problem understanding rather than the ability to perform code repair with detailed
feedback. With a set compute budget, using multiple turns alone can hamper performance compared
to repeated sampling with high temperatures. Biasing the model with adapted CoT based on problem
difficulty at each turn boosts its self-repair abilities and leads to consistent gains across all model
series and sizes. Beyond inference methods, our RFT experiment shows that multi-turn reasoning
traces triggered by prompts can be internalized, which leads to advanced reasoning abilities. We hope
that our findings motivate further research in more advanced multi-turn settings. One example is repository-level code agents, where models interact with complex environments to gather feedback and where extensive planning and reasoning capabilities are required.
-----
REPRODUCIBILITY STATEMENT
As our paper focuses on inference methods with existing models, the key components for reproducibility are access to models, datasets, and prompt descriptions. All the models (except our fine-tuned RFT model) used in this paper are publicly available at the time of writing: the Meta Llama 3.0 and 3.1 series are open-weight, and gpt-4o-2024-05-13 (GPT-4o in the paper) is available through the OpenAI API. The two benchmarks we use, CodeContests (https://github.com/google-deepmind/code_contests) and TACO (https://github.com/FlagOpen/TACO), are publicly available. We provide a complete list of all our prompts in Appendix D to reproduce single-turn and multi-turn experiments. We present the details of computing the similarity score with normalization in Appendix B.1. Regarding finetuning, our main contribution relies on the data augmentation technique on CodeContests, for which we present the details of the data collection, deduplication, and decontamination approach, as well as statistics such as the number of trajectories and the number of total tokens, in Appendix B.2, C.8 and F. We detail our finetuning hyperparameters in Appendix B.2 to reproduce our RFT model training. We will release the code for our multi-turn and CoT methods to facilitate reproduction.
ACKNOWLEDGEMENT
We thank Quentin Carbonneaux, Baptiste Rozière, Jade Copet, Olivier Duchenne, Fabian Glöeckle,
Badr Youbi Idrissi, Nicolas Usunier, Sten Sootla, Chris Cummins, Sida Wang, Pierre Chambon,
Matthieu Dinot, Ori Yoran, Kush Jain, Naman Jain and all the members in FAIR CodeGen team for
helpful technical contributions, suggestions, and insightful discussions. We thank the Infra team for
the support for enabling a seamless compute cluster experience.
REFERENCES
Llama Team AI @ Meta. The Llama 3 Herd of Models, 2024.
Google DeepMind AlphaCode Team. AlphaCode 2 Technical Report. Technical report, 2023.
Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learning and tree
search. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus,
S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing
_Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9,_
_[2017, Long Beach, CA, USA, pp. 5360–5370, 2017. URL https://proceedings.neurips.cc/](https://proceedings.neurips.cc/paper/2017/hash/d8e1344e27a5b08cdfd5d027d9b8d6de-Abstract.html)_
[paper/2017/hash/d8e1344e27a5b08cdfd5d027d9b8d6de-Abstract.html.](https://proceedings.neurips.cc/paper/2017/hash/d8e1344e27a5b08cdfd5d027d9b8d6de-Abstract.html)
Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, and
Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling,
[2024. URL https://arxiv.org/abs/2407.21787.](https://arxiv.org/abs/2407.21787)
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu
Chen. Codet: Code generation with generated tests. In The Eleventh International Conference on
_Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023a._
[URL https://openreview.net/forum?id=ktrw68Cmu9c.](https://openreview.net/forum?id=ktrw68Cmu9c)
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting:
Disentangling computation from reasoning for numerical reasoning tasks. Trans. Mach. Learn.
_[Res., 2023, 2023b. URL https://openreview.net/forum?id=YfZ4ZPt8zd.](https://openreview.net/forum?id=YfZ4ZPt8zd)_
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to
self-debug. In The Twelfth International Conference on Learning Representations, ICLR 2024,
_[Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/](https://openreview.net/forum?id=KuPixIqPiq)_
[forum?id=KuPixIqPiq.](https://openreview.net/forum?id=KuPixIqPiq)
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li,
Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun
-----
Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin
Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang,
Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny
Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. J. Mach. Learn.
_[Res., 25:70:1–70:53, 2022. URL https://jmlr.org/papers/v25/23-0870.html.](https://jmlr.org/papers/v25/23-0870.html)_
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. In International Conference on Machine
_Learning, pp. 10764–10799. PMLR, 2023._
Silin Gao, Jane Dwivedi-Yu, Ping Yu, Xiaoqing Ellen Tan, Ramakanth Pasunuru, Olga Golovneva,
Koustuv Sinha, Asli Celikyilmaz, Antoine Bosselut, and Tianlu Wang. Efficient tool use with
chain-of-abstraction reasoning. arXiv preprint arXiv:2401.17464, 2024.
Alex Gu, Wen-Ding Li, Naman Jain, Theo Olausson, Celine Lee, Koushik Sen, and Armando
Solar-Lezama. The counterfeit conundrum: Can code language models grasp the nuances of their
incorrect generations? In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of
_the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting,_
_August 11-16, 2024, pp. 74–117. Association for Computational Linguistics, 2024. doi: 10.18653/_
[V1/2024.FINDINGS-ACL.7. URL https://doi.org/10.18653/v1/2024.findings-acl.7.](https://doi.org/10.18653/v1/2024.findings-acl.7)
Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek
Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud
Doucet, Orhan Firat, and Nando de Freitas. Reinforced self-training (rest) for language modeling,
[2023. URL https://arxiv.org/abs/2308.08998.](https://arxiv.org/abs/2308.08998)
Michael Hassid, Tal Remez, Jonas Gehring, Roy Schwartz, and Yossi Adi. The Larger the Better?
Improved LLM Code-Generation via Budget Reallocation. arXiv:2404.00725 [cs], Mar 2024.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo,
Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. Measuring coding challenge competence with APPS. In Joaquin Vanschoren and Sai-Kit Yeung (eds.), Proceedings of the Neural Information Processing Systems Track on Datasets
_and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual,_
[2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/c24cd76e1ce41366a4bbe8a49b02a028-Abstract-round2.html)
[c24cd76e1ce41366a4bbe8a49b02a028-Abstract-round2.html.](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/c24cd76e1ce41366a4bbe8a49b02a028-Abstract-round2.html)
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text
degeneration. In ICLR. OpenReview.net, 2020.
Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh
Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457,
2024.
Md Ashraful Islam, Mohammed Eunus Ali, and Md Rizwan Parvez. MapCoder: Multi-Agent Code
Generation for Competitive Problem Solving. arXiv:2405.11403 [cs], May 2024.
Naman Jain, Tianjun Zhang, Wei-Lin Chiang, Joseph E. Gonzalez, Koushik Sen, and Ion Stoica.
Llm-assisted code cleaning for training accurate code generators. In The Twelfth International
_Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenRe-_
[view.net, 2024. URL https://openreview.net/forum?id=maRYffiUpI.](https://openreview.net/forum?id=maRYffiUpI)
Sayash Kapoor, Benedikt Stroebl, Zachary S Siegel, Nitya Nadgir, and Arvind Narayanan. Ai agents
that matter. arXiv preprint arXiv:2407.01502, 2024.
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish
Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. In The
_Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May_
_[1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/forum?id=_nGgzQjzaRy.](https://openreview.net/forum?id=_nGgzQjzaRy)_
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle
Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems 35:
-----
_Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans,_
_[LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)_
[paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html.](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
Hung Le, Hailin Chen, Amrita Saha, Akash Gokul, Doyen Sahoo, and Shafiq Joty. Codechain: Towards modular code generation through chain of self-revisions with representative sub-modules. In
_The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Aus-_
_tria, May 7-11, 2024. OpenReview.net, 2024._ [URL https://openreview.net/forum?id=](https://openreview.net/forum?id=vYhglxSj8j)
[vYhglxSj8j.](https://openreview.net/forum?id=vYhglxSj8j)
Jierui Li, Szymon Tworkowski, Yingying Wu, and Raymond Mooney. Explaining competitive-level
programming solutions using llms. arXiv preprint arXiv:2307.05337, 2023a.
Rongao Li, Jie Fu, Bo-Wen Zhang, Tao Huang, Zhihong Sun, Chen Lyu, Guang Liu, Zhi Jin, and
[Ge Li. Taco: Topics in algorithmic code generation dataset, 2023b. URL https://arxiv.org/](https://arxiv.org/abs/2312.14852)
[abs/2312.14852.](https://arxiv.org/abs/2312.14852)
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom
Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation
with alphacode. Science, 378(6624):1092–1097, 2022.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri
Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement
with self-feedback. Advances in Neural Information Processing Systems, 36, 2024.
Ansong Ni, Miltiadis Allamanis, Arman Cohan, Yinlin Deng, Kensen Shi, Charles Sutton, and
Pengcheng Yin. Next: Teaching large language models to reason about code execution. In Forty_first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27,_
_[2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=B1W712hMBi.](https://openreview.net/forum?id=B1W712hMBi)_
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, and Armando Solar-Lezama.
Is self-repair a silver bullet for code generation? In The Twelfth International Conference on
_Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024._
[URL https://openreview.net/forum?id=y0GJXRungR.](https://openreview.net/forum?id=y0GJXRungR)
OpenAI. Gpt-4 technical report. arXiv:abs/2303.08774, 2023.
Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason
Weston. Iterative reasoning preference optimization. arXiv preprint arXiv:2404.19733, 2024.
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and
Boi Faltings. REFINER: reasoning feedback on intermediate representations. In Yvette Graham
and Matthew Purver (eds.), Proceedings of the 18th Conference of the European Chapter of the
_Association for Computational Linguistics, EACL 2024 - Volume 1: Long Papers, St. Julian’s,_
_Malta, March 17-22, 2024, pp. 1100–1126. Association for Computational Linguistics, 2024. URL_
[https://aclanthology.org/2024.eacl-long.67.](https://aclanthology.org/2024.eacl-long.67)
Tal Ridnik, Dedy Kredo, and Itamar Friedman. Code generation with alphacodium: From prompt
engineering to flow engineering. arXiv preprint arXiv:2401.08500, 2024.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov,
Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre
Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas
Scialom, and Gabriel Synnaeve. Code llama: Open foundation models for code, 2024. URL
[https://arxiv.org/abs/2308.12950.](https://arxiv.org/abs/2308.12950)
Amrith Setlur, Saurabh Garg, Xinyang Geng, Naman Garg, Virginia Smith, and Aviral Kumar. Rl
on incorrect synthetic data scales the efficiency of llm math reasoning by eight-fold, 2024. URL
[https://arxiv.org/abs/2406.14532.](https://arxiv.org/abs/2406.14532)
-----
Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I. Wang. Natural
language to code translation with execution. In Yoav Goldberg, Zornitsa Kozareva, and Yue
Zhang (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language
_Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pp. 3533–_
3546. Association for Computational Linguistics, 2022. doi: 10.18653/V1/2022.EMNLP-MAIN.
[231. URL https://doi.org/10.18653/v1/2022.emnlp-main.231.](https://doi.org/10.18653/v1/2022.emnlp-main.231)
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao.
Reflexion: language agents with verbal reinforcement learning. In Alice Oh, Tristan
Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (eds.), Ad_vances in Neural Information Processing Systems 36:_ _Annual Conference on Neural In-_
_formation Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December_
_10 - 16, 2023, 2023._ [URL http://papers.nips.cc/paper_files/paper/2023/hash/](http://papers.nips.cc/paper_files/paper/2023/hash/1b44b878bb782e6954cd888628510e90-Abstract-Conference.html)
[1b44b878bb782e6954cd888628510e90-Abstract-Conference.html.](http://papers.nips.cc/paper_files/paper/2023/hash/1b44b878bb782e6954cd888628510e90-Abstract-Conference.html)
Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally
[can be more effective than scaling model parameters, 2024. URL https://arxiv.org/abs/2408.](https://arxiv.org/abs/2408.03314)
[03314.](https://arxiv.org/abs/2408.03314)
Hao Tang, Keya Hu, Jin Peng Zhou, Sicheng Zhong, Wei-Long Zheng, Xujie Si, and Kevin Ellis.
[Code repair with llms gives an exploration-exploitation tradeoff, 2024. URL https://arxiv.](https://arxiv.org/abs/2405.17503)
[org/abs/2405.17503.](https://arxiv.org/abs/2405.17503)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu,
Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee,
Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra,
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh
Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models,
[2023. URL https://arxiv.org/abs/2307.09288.](https://arxiv.org/abs/2307.09288)
Somin Wadhwa, Silvio Amir, and Byron C Wallace. Investigating mysteries of cot-augmented
distillation. arXiv preprint arXiv:2406.14511, 2024.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_Neural Information Processing Systems, 35:24824–24837, 2022._
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. In
Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine
(eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neu_ral Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, Decem-_
_[ber 10 - 16, 2023, 2023a. URL http://papers.nips.cc/paper_files/paper/2023/hash/](http://papers.nips.cc/paper_files/paper/2023/hash/271db9922b8d1f4dd7aaef84ed5ac703-Abstract-Conference.html)_
[271db9922b8d1f4dd7aaef84ed5ac703-Abstract-Conference.html.](http://papers.nips.cc/paper_files/paper/2023/hash/271db9922b8d1f4dd7aaef84ed5ac703-Abstract-Conference.html)
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. In The Eleventh International Confer_ence on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net,_
[2023b. URL https://openreview.net/forum?id=WE_vluYUL-X.](https://openreview.net/forum?id=WE_vluYUL-X)
Ping Yu, Jing Xu, Jason Weston, and Ilia Kulikov. Distilling system 2 into system 1, 2024. URL
[https://arxiv.org/abs/2407.06023.](https://arxiv.org/abs/2407.06023)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou,
and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language
[models, 2023. URL https://arxiv.org/abs/2308.01825.](https://arxiv.org/abs/2308.01825)
-----
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with
reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
Eric Zelikman, Qian Huang, Gabriel Poesia, Noah Goodman, and Nick Haber. Parsel: Algorithmic
reasoning with language models by composing decompositions. Advances in Neural Information
_Processing Systems, 36:31466–31523, 2023._
Li Zhong, Zilong Wang, and Jingbo Shang. Ldb: A large language model debugger via verifying
runtime execution step-by-step. arXiv preprint arXiv:2402.16906, 2024.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. Least-to-most prompting enables
complex reasoning in large language models. In The Eleventh International Conference on
_Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023._
[URL https://openreview.net/forum?id=WZH7099tgfM.](https://openreview.net/forum?id=WZH7099tgfM)
Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V. Le, Ed H. Chi, Denny
Zhou, Swaroop Mishra, and Huaixiu Steven Zheng. Self-discover: Large language models
[self-compose reasoning structures, 2024. URL https://arxiv.org/abs/2402.03620.](https://arxiv.org/abs/2402.03620)
-----
A FORMULA AND ALGORITHM FOR PASS n@k METRICS
Formally, let $N$ be the total number of code samples. Let $F$ be the number of codes filtered by public tests, among which there could be false positives. Let $C$ be the number of correct codes that pass all the unit tests. The pass $n@k$ for a benchmark of problems is defined as follows:

$$\text{pass } n@k \;=\; \mathbb{E}_{\text{Problems}}\left[\, 1 \;-\; \sum_{i=0}^{k} \frac{\binom{F}{i}\binom{N-F}{k-i}}{\binom{N}{k}} \cdot \frac{\binom{F-C}{n_p}}{\binom{F}{n_p}} \,\right], \qquad (1)$$

where $n_p = \min(i, n)$.
**Explanation** The first term $\binom{F}{i}\binom{N-F}{k-i}\big/\binom{N}{k}$ is the probability of having $i$ filtered solutions among $k$ solutions, which obeys a hypergeometric distribution, $\mathrm{HYPERGEOMETRIC}(F, N-F, k)$. Given the number of submissions $n_p = \min(i, n)$, the second term $\binom{F-C}{n_p}\big/\binom{F}{n_p}$ is the probability of having none of the correct solutions.
In evaluation, instead of computing the combinatorial terms, we use Monte Carlo estimation by re-sampling k solutions n_boot times for bootstrapping (in our case we use 10000). The algorithm is described in detail in Appendix A.3 of the AlphaCode paper (Li et al., 2022).
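As an illustration, the sketch below estimates pass n@k for a single problem with this bootstrap procedure; it assumes every one of the N candidate solutions has already been judged against both the public tests (the filter) and the full unit tests. Averaging this estimate over all problems gives the benchmark-level metric.

```python
import random

def pass_n_at_k(samples, n, k, n_boot=10_000, seed=0):
    """Bootstrap estimate of pass n@k (Eq. 1) for one problem.

    `samples` is a list of (passes_public_tests, passes_all_tests) pairs,
    one per generated candidate solution (len(samples) == N >= k).
    """
    rng = random.Random(seed)
    solved = 0
    for _ in range(n_boot):
        draw = rng.sample(samples, k)                            # draw k of the N candidates
        filtered = [ok_all for ok_public, ok_all in draw if ok_public]
        submitted = rng.sample(filtered, min(n, len(filtered)))  # n_p = min(i, n) submissions
        if any(submitted):                                       # at least one correct submission
            solved += 1
    return solved / n_boot
```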
B EXPERIMENT DETAILS
B.1 COMPUTING SIMILARITY SCORE
We compute the similarity score of two Python code snippets as follows.

First, we pre-process the code snippet to remove formatting and variable naming effects. We normalize variable names by running an in-order indexing scheme on the Abstract Syntax Tree (AST), as shown in Figure 10, followed by a simple formatting by lambda x: ast.unparse(ast.parse(x)). We note that 1%-2% of code snippets fail parsing because of syntax errors, in which case we skip this normalization step.
`total_n = int(input())` → `v_0 = int(input())`
`for _ in range(total_n):` → `for v_1 in range(v_0):`
`S_num, P_num = list(map(int, input().split()))` → `(v_2, v_3) = list(map(int, input().split()))`
Figure 10: Example of variable renaming AST pass.
Second, we use difflib.SequenceMatcher to compute the similarity score for the normalized
snippets.
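A minimal sketch of this normalization and scoring procedure is given below; the renaming pass only indexes variables that appear in assignment (store) positions, which mirrors the renaming shown in Figure 10, but the exact AST pass used in the paper may differ in details.

```python
import ast
import difflib

class _IndexVariables(ast.NodeTransformer):
    """Rename user variables to v_0, v_1, ... in order of first assignment."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store) and node.id not in self.mapping:
            self.mapping[node.id] = f"v_{len(self.mapping)}"
        node.id = self.mapping.get(node.id, node.id)  # builtins/imports stay unchanged
        return node

def normalize(code: str) -> str:
    try:
        tree = _IndexVariables().visit(ast.parse(code))
        return ast.unparse(tree)          # unparsing also removes formatting differences
    except SyntaxError:
        return code                       # 1%-2% of generations fail to parse; skip normalization

def similarity(code_a: str, code_b: str) -> float:
    return difflib.SequenceMatcher(None, normalize(code_a), normalize(code_b)).ratio()
```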
B.2 FINE-TUNING SETTING
Using CoT-retry, we generate 200 code trajectories with a maximum of 3 code attempts. Note that if the model solves the problem in the first turn, the trajectory is still collected, although it contains no execution feedback. After filtering the incorrect trajectories, we keep only 60% of the ones where the code in the last turn passes all the tests. We assume that a correct final code correlates with correct reasoning in the CoT and self-repair steps. We apply additional post-processing to the traces by removing the CoT prompts we introduced, enabling the model to develop inherent CoT-like reasoning capabilities through fine-tuning. We separate the successful trajectories into 2 sets, single-turn trajectories and multi-turn trajectories, and conduct LSH-based deduplication on the code solutions per problem instance in each set, keeping at most 50 solutions. We perform self-supervised fine-tuning on the multi-turn trajectories, using the standard cross-entropy loss on the last full body of the model response. The fine-tuning uses a learning rate of 2e-6, 545 gradient update steps, sequence length 8192, a global batch size of 524288 tokens, and cosine scheduling with 10 warmup steps and annealing to 10% of the peak learning rate.
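For reference, these hyperparameters can be summarized in a small configuration sketch; the field names are illustrative assumptions and not tied to a specific training framework.

```python
# Illustrative configuration mirroring the reported RFT hyperparameters;
# field names are assumptions, not the paper's training code.
rft_finetuning_config = {
    "learning_rate": 2e-6,
    "gradient_update_steps": 545,
    "sequence_length": 8192,
    "global_batch_size_tokens": 524_288,
    "lr_schedule": "cosine",
    "warmup_steps": 10,
    "final_lr_fraction_of_peak": 0.10,   # anneal to 10% of peak learning rate
    "loss": "cross_entropy on the last full model response",
}
```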
-----
C ADDITIONAL EXPERIMENT RESULTS
C.1 SINGLE-TURN GRID SEARCH RESULTS
We provide the complete grid search results for all our reasoning and instruction prompts across all models and pass rates for the single-turn setting. This demonstrates the variability in effectiveness per sampling size and LLM series. The "weak solution" instruction prompt is a clear winner for higher sampling sizes k ≥ 10. We show in Figures 11, 12, 13 and 14 the grid search of all reasoning and instruction prompts for the Llama 3.0 and 3.1 series. As we increase the pass@, we increase the sample diversity and the recall across all CoTs. For low pass@, most prompts hurt performance. CoT is the most effective with Llama 3.0 70B.
[Figure 11 heatmap: Δ pass@1, pass@10, and pass@100 on the CodeContests test set for every reasoning × instruction prompt combination with Llama 3.1 8B.]
Figure 11: Grid search of all reasoning and instruction prompts for Llama 3.1 8B.
[Figure 12 heatmap: Δ pass@1, pass@10, and pass@100 on the CodeContests test set for every reasoning × instruction prompt combination with Llama 3.0 8B.]
Figure 12: Grid search of all reasoning and instruction prompts for Llama 3.0 8B.
-----
[Figure 13 heatmap: Δ pass@1, pass@10, and pass@100 on the CodeContests test set for every reasoning × instruction prompt combination with Llama 3.1 70B.]
Figure 13: Grid search of all reasoning and instruction prompts for Llama 3.1 70B.
[Figure 14 heatmap: Δ pass@1, pass@10, and pass@100 on the CodeContests test set for every reasoning × instruction prompt combination with Llama 3.0 70B.]
Figure 14: Grid search of all reasoning and instruction prompts for Llama 3.0 70B.
[Figure 15 heatmap: Δ pass 1@3 in the single-turn setting on the CodeContests test set for every reasoning × instruction prompt combination with GPT-4o, Llama 3.1 8B, Llama 3.1 70B, and Llama 3.1 405B.]
Figure 15: No gold CoT across models. Based on our grid search of instruction and reasoning prompts, we compare all 63 single-turn results across three different models. With a low sampling budget, most prompts perform similarly if not worse than the baseline performance (without CoT). The best prompt (in green) differs for each model, but we see similar patterns with the Llama models.
-----
C.2 DETAILED ANALYSIS OF SINGLE-TURN PROMPTS
When comparing reasoning and instruction prompts, the values are normalized with respect to the baseline for each respective pass rate, specifically: $x \leftarrow \frac{x - \text{baseline}}{\text{std}(x)}$. The value at 0 therefore corresponds to no reasoning and no instruction prompt. We provide further results aggregated across models and types of prompts. As demonstrated by Figures 17 and 18, we see large variations across models and prompt types and observe that no reasoning and instruction prompt combination always performs above the 0 baseline. As shown in Figure 16, the best combinations often rely on the "weak solution" instruction but vary across sample sizes for reasoning, with "self-reflection" for lower pass@s and "helper functions" for higher pass@s. We observed that writing intermediate variables before code often made performance worse and could be qualified as the "worst" reasoning prompt for all models.
[Figure 16 plot: Δ from baseline pass@k vs. k for the top (reasoning, instruction) combinations across Llama 3.0, 3.1 and GPT-4o: (self-reflection, weak solution), (explain IO pairs, weak solution), (self-reflection, check constraints), (NL solution, weak solution), (helper functions docstring, weak solution), (code solution, weak solution), (helper functions docstring, no instruction).]
Figure 16: For pass@1, we average over all 6 models (3.0 8B, 70B, 3.1 8B, 70B, 405B and GPT-4o)
and pass@10 and pass@100 over 4 models (Llama 3.0, 3.1 8B and 70B). We take the mean across
all results and plot the top 3 means per pass@.
[Figure 17 plot: Δ from baseline at pass@1 and pass@100 for each instruction prompt (check constraints, clever algorithm, comment for line, func. docstring, use helper functions, weak solution) across the Llama 3.0 and 3.1 series.]
Figure 17: Group by instruction prompts averaged across all reasoning prompts for the Llama 3.0
and 3.1 models. We observe "check constraints" is a winner for pass@1 and "weak solution" for
pass@100. Overall, "add a comment before each line" seems the least efficient across models.
-----
[Figure 18 plots: Δ from baseline at pass@1 and pass@100 for each reasoning prompt (NL solution, code solution, explain IO pairs, helper functions docstring, intermediate variables and type, problem difficulty, problem tag, self-reflection), for the Llama 8B models (top) and the Llama 70B models (bottom).]
Figure 18: Group by reasoning prompts averaged across all instruction prompts (top) for small models and (bottom) for large models. For pass@1, "explain IO pairs" helps small models and "helper function docstrings" large ones. The relative efficacy of each prompt converges to a similar order at pass@100 for large and small models.
C.3 GENERALIZATION OF SINGLE-TURN BEST COT TO LLAMA3.1 MODELS
We show in Figures 19 and 20 that the best CoT (i.e., the reasoning and instruction prompt combination) found with Llama 3.0 8B on TACO can be directly ported to the Llama 3.1 8B and 70B models. We also observe that CoT brings a larger boost on harder problems by comparing the gain on the easy and very-hard splits.
[Figure 19 plot: pass@k vs. k on the EASY and VERY_HARD TACO test splits for Llama 3.1 8B with the best CoT vs. no CoT.]
Figure 19: We use the best CoT (i.e. reasoning and instruction prompt combination) found with
Llama 3.0 8B and test it directly with Llama 3.1 8B on the easiest (easy) and the most difficult
(very-hard) split of TACO.
-----
[Figure 20 plot: pass@k vs. k on the EASY and VERY_HARD TACO test splits for Llama 3.1 70B with the best CoT vs. no CoT.]
Figure 20: We use the best CoT (i.e. reasoning and instruction prompt combination) found with
Llama 3.0 8B and test it directly with Llama 3.1 70B on the easiest (easy) and the most difficult
(very-hard) split of TACO.
Table 5: Stacking more reasoning prompts can hurt performance for Llama 3.1 70B.

| Number of prompts | Llama 3.0 70B 1@3 | Llama 3.0 70B 100@300 | Llama 3.1 70B 1@3 | Llama 3.1 70B 100@300 |
|---|---|---|---|---|
| 1 reasoning × 1 instruction | 11.2 | 40.0 | 24.5 | 59.2 |
| + 1 reasoning | -0.4 | -1.8 | -2.0 | -3.1 |
| + 1 instruction | -0.1 | +0.4 | -4.0 | -2.1 |
C.4 REASONING PROMPTS NOT ADDITIVE
We describe methods that did not help enhance multi-turn CoT, specifically adding more complex execution feedback and more steps of reasoning prompts. Our experimental results, shown in Table 5, indicate that stacking more reasoning steps before outputting the first code hurts performance, especially for Llama 3.1 70B.
C.5 RFT MODEL BEHAVIOR ANALYSIS
We show in the table below the fraction of text characters in the total response length. We take into account the intermediary CoT response if CoT is used. The RFT model significantly increases the text output around the code output, which could contain reasoning traces.

Fraction of text characters (not extracted as code) in the total response length. We also count the CoT response when CoT is enabled. The RFTed model outputs more text in the response.

| Model | Non-Code Fraction |
|---|---|
| Llama 3.1 70B | 0.37 |
| + Multi-turn CoT | 0.57 |
| Llama 3.1 70B (RFT) | 0.50 |
C.6 SIMPLE EXECUTION FEEDBACK IS SUFFICIENT
We show in Table 6 that execution feedback prompts of different granularity present low variance with respect to the pass rate, both in the high temperature setting (1.0, pass 100@300) and the low temperature setting (0.2, pass 1@3).

We posit that for the hard problems presented in competitive programming benchmarks, models generate wrong code not because the code is accidentally buggy but because they do not understand how to solve the problem correctly. This highlights that, for competitive programming benchmarks, algorithmic reasoning (aligning what the model believes to be the correct solution with the ground-truth solution), as elicited by CoTs, impacts performance more than bug-fixing ability (aligning the emitted code with what the model believes to be the correct solution).
Table 6: Execution feedback results on the multi-turn CodeContests test set. Results are reported using 3-turn trajectories. 1@3 is estimated from 20 trajectories per problem under temperature 0.2. 100@300 is estimated from 200 trajectories per problem under temperature 1.0.

| Feedback | Granularity | Llama 3.1 70B 1@3 | Llama 3.1 70B 100@300 | Llama 3.1 8B 1@3 | Llama 3.1 8B 100@300 |
|---|---|---|---|---|---|
| Binary | + | 28.8 | 55.9 | **10.9** | **30.9** |
| Failed tests (default) | ++ | **29.5** | **56.2** | **10.9** | 29.5 |
| Failed & passed tests | ++ | **29.5** | 55.0 | 10.7 | 30.4 |
| LDB (Zhong et al., 2024) | +++ | 26.5 | 54.8 | 9.9 | 29.1 |
C.7 ABLATION OF NORMALIZATION STEP IN SIMILARITY SCORE
We show in Figure 21 the distribution and histogram of similarity scores without the normalization step. The similarity score therefore measures the raw code generated by the LLM. Compared with Figures 6 and 8, the fundamental trend does not change. The robustness to our normalization step shows that the LLMs we study output coherent code (in terms of variable naming and formatting) within the same dialog.
[Figure 21 plot: density of consecutive-code similarity scores (without normalization) for Llama 3.1 8B and 70B under binary feedback, failed test feedback, failed & passed test feedback, and LDB feedback.]
Figure 21: Distribution of consecutive code similarity scores (without the normalization step) when
varying the execution feedback granularity.
[Figure 22 plot: histograms of consecutive-code similarity scores (without normalization) within correct and within incorrect trajectories, before and after multi-turn CoT RFT.]
Figure 22: Histogram of the similarity scores (without the normalization step) of consecutive codes
generated by the model before/after multi-turn CoT RFT on CodeContests test set.
C.8 ABLATION OF RFT DATA MIXTURE
As detailed in Appendix B.2, we collect 2 sets of correct trajectories, single-turn (ST) and multi-turn (MT), on the CodeContests training set and perform LSH-based deduplication, keeping at most 50 solutions (in each set) per problem statement; a sketch of such a deduplication pass is given below. We also decontaminate the 2 sets against the TACO test set as detailed in Appendix F.
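The paper does not spell out the LSH implementation, so the following is only an illustrative, self-contained way such a MinHash-with-banding deduplication could look; the shingle size, 64 permutations, and 16 bands are assumptions.

```python
import hashlib
import re

def _minhash(code: str, num_perm: int = 64) -> tuple:
    """Cheap MinHash signature over word 3-gram shingles of a code string."""
    tokens = re.findall(r"\w+", code)
    shingles = {" ".join(tokens[i:i + 3]) for i in range(max(1, len(tokens) - 2))}
    return tuple(
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16) for s in shingles)
        for seed in range(num_perm)
    )

def lsh_dedup(solutions, max_keep=50, num_perm=64, bands=16):
    """Keep at most `max_keep` near-unique solutions for one problem.

    Two solutions sharing any LSH band of their MinHash signatures are
    treated as near-duplicates and only the first one is kept.
    """
    rows = num_perm // bands
    seen_bands, kept = set(), []
    for code in solutions:
        sig = _minhash(code, num_perm)
        band_keys = [sig[b * rows:(b + 1) * rows] for b in range(bands)]
        if any(key in seen_bands for key in band_keys):
            continue
        seen_bands.update(band_keys)
        kept.append(code)
        if len(kept) == max_keep:
            break
    return kept
```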
-----
After deduplication and decontamination, we harvest 177475 single-turn trajectories (143M tokens in total) and 160600 multi-turn trajectories (285M tokens in total). We show the ablation of the data mixture:

- train on multi-turn (MT) trajectories only
- train on single-turn (ST) and multi-turn (MT) trajectories

We show the ablation results in Table 7. We find that training on multi-turn data only provides better performance. We hypothesize that the single-turn trajectories are solutions to problems that the model can already solve. Further reinforcement on these problems could potentially lead to overfitting and bias the model behavior towards trying to solve problems in the first turn instead of enhancing its multi-turn capability.
Table 7: Ablation of RFT data mixture. We show the best performance of two data mixture configurations: training on multi-turn (MT) data only and training on single-turn (ST) and multi-turn (MT) data.

| RFT Data | CodeContests/Test 1@3 | 10@30 | 100@300 |
|---|---|---|---|
| MT | 29.7 | 50.5 | 61.1 |
| ST + MT | 29.1 | 49.6 | 60.0 |
-----
D PROMPTS
We list the prompts used throughout our experiments, inspired by recent works in code generation (Zelikman et al., 2023; Jain et al., 2024; Paul et al., 2024; Ridnik et al., 2024). We focus on zero-shot prompting techniques that are either specific to competitive programming problems or more general to code generation. We classify prompts into two categories: reasoning and instruction. To determine this list, we ran experiments at a small scale (pass@10) with over 30 prompts on 500 examples sampled from the CodeContests training set. We picked the most promising ones in terms of final unit test pass and execution rates. Some of our prompts are adapted from recent works in competitive programming.
D.1 REASONING PROMPTS
- Adapted from AlphaCodium Ridnik et al. (2024)
**– self-reflection: Given the code contest problem, reflect on the problem, and describe it in your**
own words, in bullet points. Pay attention to small details, nuances, notes and examples in the
problem description.
**– predict IO pairs: Given the code contest problem and the provided examples, take the first**
3 examples and explain how its input leads to the corresponding output. Read carefully the
problem description. Make sure the test explanations are consistent with them, and between
themselves. The explanation must coherently and logically lead from the input to the output. Be
succinct.
**– write code solution with guidelines: Your goal is to come up with possible solutions to the**
code contest problem. Guidelines: Make sure each solution fully addresses the problem goals,
constraints, examples, and notes. Each solution must have reasonable runtime and memory
complexity - less than three seconds on a modern computer, given the problem constraints for
large inputs. Double-check the solutions. Each possible solution must be able to generalize to
additional test cases, not just the ones provided in the problem description.
- predict problem tag: Explain which two tags from the following list best apply to this problem:
combinatorics, dynamic programming, math, bitmasks, number theory, brute force, data structures,
divide and conquer, graphs, greedy, depth first search and similar, implementation, binary search,
two pointers, strings, constructive algorithms, sortings, trees, disjoint set union.
- predict problem difficulty: Given the code contest problem, your task is to evaluate the difficulty of the problem either easy, medium or hard. Explain the difficulties of the problem and potential edge cases.
- write natural language solution: Generate a naive solution to this problem in natural language
and then explain how you could improve it.
- write helper function docstring: Explain which helper functions you will need to solve the code
contest problem. Without implementing them, write their signature and a doc string explaining
their purpose.
- write intermediate variables and type: Explain what necessary intermediate variables you will
need to solve the problem, specify their type and purpose in your solution.
D.2 INSTRUCTION PROMPTS
- Adapted from AlphaCodium Ridnik et al. (2024)
**– use helper functions: Guidelines: You must divide the generated code into small sub-functions,**
with meaningful names and functionality. Variables names should also be meaningful.
**– double check the import, variable, constraints: Double-check the solution code. Make sure**
to include all the necessary module imports, properly initialize the variables, and address the
problem constraints.
- comment before each line: Write a comment before each line of code to explain your thought
process.
- docstring before each function: Write a doc string before each function generated explaining its
utility and expected inputs and outputs.
-----
- generate weak solution and a second better one: Generate one solution in python, identify its
weaknesses and then generate a second better algorithm to solve the problem.
- step by step: Think step by step and propose a clever algorithm.
D.3 COT-RETRY PROMPTS
For a given problem, the model first solves it without any reasoning. If the first code generation is incorrect, we add an instruction prompt, "write code solution with guidelines", to separate problem solving from code writing. If the second code generation is also incorrect, we add another instruction prompt, "generate a naive solution". These prompts are combined with the default (failed tests) execution feedback after each turn. This setup provides up to +6% improvement for the Llama 3.1 series. A sketch of this control flow is given below, followed by an example extracted from CodeContests generated by Llama 3.1 70B.
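The following is a minimal sketch of the CoT-retry control flow, assuming caller-provided helpers for generation, public-test execution, and the failed-tests feedback template; the escalating prompts are the ones named above.

```python
# Minimal sketch of CoT-retry; `generate`, `run_public_tests`, and
# `failed_tests_feedback` are assumed caller-provided helpers.
RETRY_PROMPTS = [
    None,                                   # turn 1: no added prompt
    "write code solution with guidelines",  # turn 2: after a first failure
    "generate a naive solution",            # turn 3: after a second failure
]

def cot_retry(problem, generate, run_public_tests, failed_tests_feedback):
    dialog = [{"role": "user", "content": problem}]
    for prompt in RETRY_PROMPTS:
        if prompt is not None:
            dialog.append({"role": "user", "content": prompt})
        code = generate(dialog)
        dialog.append({"role": "assistant", "content": code})
        failures = run_public_tests(problem, code)
        if not failures:                    # passed all public tests: stop early
            return code, dialog
        dialog.append({"role": "user", "content": failed_tests_feedback(failures)})
    return None, dialog                     # unsolved after three code attempts
```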
User
Here is a competitive programming question: The new generation external memory contains
an array of integers a[1 . . . n] = [a_1 , a_2 , . . . , a_n].
This type of memory does not support changing the value of an arbitrary element.
Instead , it allows you to cut out any segment of the given array , cyclically shift (
rotate) it by any offset and insert it back into the same place.
Technically, each cyclic shift consists of two consecutive actions:
1. You may select arbitrary indices l and r (1 ≤ l ≤ r ≤ n) as the boundaries of the segment.
2. Then you replace the segment a[l. . . r] with it 's cyclic shift to the left by an
arbitrary offset d. The concept of a cyclic shift can be also explained by following
relations: the sequence [1, 4, 1, 3] is a cyclic shift of the sequence [3, 1, 4, 1] to
the left by the offset 1 and the sequence [4, 1, 3, 1] is a cyclic shift of the
sequence [3, 1, 4, 1] to the left by the offset 2.
For example , if a = [1, {3, 2, 8}, 5], then choosing l = 2, r = 4 and d = 2 yields a
segment a[2 . . . 4] = [3, 2, 8]. This segment is then shifted by the offset d = 2 to the
left, and you get a segment [8, 3, 2] which then takes the place of the original
elements of the segment. In the end you get a = [1, {8, 3, 2}, 5]. Sort the given array
a using no more than n cyclic shifts of any of its segments. Note that you don 't need
to minimize the number of cyclic shifts. Any method that requires n or less cyclic
shifts will be accepted.
Input
The first line contains an integer t (1 ≤ t ≤ 1000) - the number of test cases. The next 2t lines contain the descriptions of the test cases.
The first line of each test case description contains an integer n (2 ≤ n ≤ 50) - the length of the array. The second line consists of space-separated elements of the array a_i (-10^9 ≤ a_i ≤ 10^9). Elements of array a may repeat and don't have to be unique.
Output
Print t answers to all input test cases.
The first line of the answer of each test case should contain an integer k (0 ≤ k ≤ n) - the number of actions to sort the array. The next k lines should contain descriptions of the actions formatted as "l r d" (without quotes) where l and r (1 ≤ l < r ≤ n) are the boundaries of the segment being shifted, while d (1 ≤ d ≤ r - l) is the offset value. Please remember that only the cyclic shifts to the left are considered so the chosen segment will be shifted by the offset d to the left.
Note that you are not required to find the minimum number of cyclic shifts needed for
sorting. Any sorting method where the number of shifts does not exceed n will be
accepted.
If the given array a is already sorted , one of the possible answers is k = 0 and an
empty sequence of cyclic shifts. If there are several possible answers , you may print
any of them.
Example
Input
4
2
2 1
3
1 2 1
4
2 4 1 3
5
2 5 1 4 3
Continuation of User
Output
1
1 2 1
1
1 3 2
3
2 4 1
2 3 1
1 3 2
4
2 4 2
1 5 3
1 2 1
1 3 1
Note
Explanation of the fourth data set in the example:
1. The segment a[2 . . . 4] is selected and is shifted to the left by 2: [2, {5, 1, 4}, 3] → [2, {4, 5, 1}, 3]
2. The segment a[1 . . . 5] is then selected and is shifted to the left by 3: [{2, 4, 5, 1, 3}] → [{1, 3, 2, 4, 5}]
3. After that the segment a[1 . . . 2] is selected and is shifted to the left by 1: [{1, 3}, 2, 4, 5] → [{3, 1}, 2, 4, 5]
4. And in the end the segment a[1 . . . 3] is selected and is shifted to the left by 1: [{3, 1, 2}, 4, 5] → [{1, 2, 3}, 4, 5].
Given the code contest problem and your self-reflection on the problem, your goal is to
write a valid Python code with stdio that correctly solves the problem. Your code
should be enclosed in triple backticks like so: ``` python YOUR CODE HERE ```. Use the
backticks for your code only.
Assistant
```python
def sort_array(arr):
operations = []
arr.sort()
for i in range(len(arr)):
if arr[i] != arr[i-1]:
operations.append ((i, i+1, i - (arr.index(arr[i]) - (i-1))))
print(len(operations))
for operation in operations:
print(operation [0] + 1, operation [1], operation [2])
def main():
for _ in range(int(input())):
n = int(input())
arr = list(map(int , input ().split ()))
sort_array(arr)
if __name__ == "__main__ ":
main()
```
User
Your code failed the following tests:
- input `4
2
2 1
3
1 2 1
4
2 4 1 3
5
2 5 1 4 3
` failed:
Expected output `1
1 2 1
1
2 3 1
2
1 3 2
3 4 1
3
1 3 2
3 5 2
4 5 1` but got `2
1 1 -1
2 2 0
2
1 1 -1
3 3 1
4
1 1 -1
2 2 0
3 3 1
4 4 2
5
1 1 -1
2 2 0
3 3 1
4 4 2
5 5 3`
Give it another try.
Your code should be enclosed in triple backticks like so: ``` python YOUR CODE HERE ```.
Use the backticks for your code only. Your goal is to come up with possible solutions
to the code contest problem, in natural language.
Guidelines:
- Make sure each solution fully addresses the problem goals , constraints , examples , and
notes.
- Each solution must have reasonable runtime and memory complexity - less than three
seconds on a modern computer , given the problem constraints for large inputs.
- Double -check the solutions. Each possible solution must be able to generalize to
additional test cases , not just the ones provided in the problem description.
Assistant
```python
def sort_array(arr):
operations = []
i = 0
while i < len(arr):
if i != 0 and arr[i] <= arr[i-1]:
shift_arr(arr , i-1, operations)
i += 1
print(len(operations))
for operation in operations:
print(operation [0]+1, operation [1]+1 , operation [2])
def shift_arr(arr , index , operations):
segment = arr[index +1:]
segment_len = len(segment)
max_val = max(segment)
max_val_index = segment.index(max_val)
shift = max_val_index + 1
for i in range(index+1, index+segment_len):
if segment[i-index -1] > segment[i]:
shift -= 1
arr[index +1:] = segment[-shift :] + segment[:-shift]
operations.append ((index , index+segment_len , shift))
def main():
for _ in range(int(input())):
n = int(input())
arr = list(map(int , input ().split ()))
sort_array(arr)
if __name__ == "__main__ ":
main()
```
This solution sorts the array by shifting the maximum element of each unsorted segment
to the front, creating a new segment that starts at the current index `i`. The number
of shifts required to move the maximum element to the front is calculated, and these
shifts are stored in the `operations` list. The process is repeated until the entire
array is sorted.
User
Your code failed the following tests:
- input `4
2
2 1
3
1 2 1
4
2 4 1 3
5
2 5 1 4 3
` failed:
Traceback (most recent call last):
File "python_iopairs.py", line 126, in main
execute_code(compiled)
File "python_iopairs.py", line 40, in execute_code
exec(code , {" __name__ ": "__main__ "})
File "<source >", line 32, in <module >
main()
File "<source >", line 29, in main
sort_array(arr)
File "<source >", line 7, in sort_array
shift_arr(arr , i-1, operations)
File "<source >", line 20, in shift_arr
if segment[i-index -1] > segment[i]:
IndexError: list index out of range
Give it another try.
Your code should be enclosed in triple backticks like so: ``` python YOUR CODE HERE ```.
Use the backticks for your code only. Generate a naive solution to this problem to
make sure you pass all the tests then explain how you could improve it.
Assistant
```python
def sort_array(arr):
operations = []
n = len(arr)
for i in range(n):
max_index = i
for j in range(i+1, n):
if arr[j] < arr[max_index ]:
max_index = j
if max_index != i:
max_val = arr[max_index]
del arr[max_index]
arr.insert(i, max_val)
operations.append ((i+1, max_index +1, max_index -i))
print(len(operations))
for operation in operations:
print(operation [0], operation [1], operation [2])
def main():
for _ in range(int(input())):
n = int(input())
arr = list(map(int , input ().split ()))
sort_array(arr)
if __name__ == "__main__ ":
main()
```
This naive solution simply finds the maximum element in each segment of the array and
shifts it to the front. This is done in a way that the maximum element is moved to its
correct position in a single pass , avoiding the need for repeated passes.
However , this solution has a time complexity of O(n^2) due to the nested loops used to
find the maximum element in each segment.
To improve this solution , we can use a more efficient sorting algorithm , such as merge
sort or heap sort , which have a time complexity of O(n log n). We can also use a
sorting algorithm that takes advantage of the fact that the array is being sorted in -
place , such as insertion sort or selection sort.
Alternatively , we can use a different approach to solve the problem , such as using a
data structure that allows for efficient insertion and deletion of elements , or using a
more advanced algorithm that can sort the array in less than n passes.
Here is an improved solution using bubble sort:
```python
def sort_array(arr):
operations = []
n = len(arr)
for i in range(n):
for j in range(n-1):
if arr[j] > arr[j+1]:
arr[j], arr[j+1] = arr[j+1], arr[j]
operations.append ((j+1, j+2, 1))
print(len(operations))
for operation in operations:
print(operation [0], operation [1], operation [2])
def main():
for _ in range(int(input())):
n = int(input())
arr = list(map(int , input ().split ()))
sort_array(arr)
if __name__ == "__main__ ":
main()
```
This improved solution still has a time complexity of O(n^2), but it is slightly more
efficient than the naive solution due to the fact that it only requires a single pass
through the array.
E EXAMPLES WRONG COT
Some problems are solvable by the model without CoT in the single-turn setting but become wrong
when we add specific reasoning and instruction prompts. Some instruction prompts can lead to
incorrect planning. The weak solution prompt, although effective when done correctly as shown
by its high performance across models, can bias the model towards a brute force approach that it is
unable to recover from as shown in example 1 and 2. These examples are generated with Llama 3.1
8B and extracted from CodeContests.
Example 1: Brute force attempt
```python
def solve(a, b, c):
combinations = [(i, j, k) for i in range(a + 1) for j in range(b + 1) for k in
range(c + 1)]
min_diff = float('inf ')
for a1, b1, c1 in combinations:
a2, b2, c2 = a - a1, b - b1 , c - c1
if a2 >= 0 and b2 >= 0 and c2 >= 0:
diff = abs((a1 + 2 * b1 + 3 * c1) - (a2 + 2 * b2 + 3 * c2))
min_diff = min(min_diff , diff)
return min_diff
t = int(input())
for _ in range(t):
a, b, c = map(int , input().split ())
print(solve(a, b, c))
```
This naive solution tries to generate all combinations of one -minute songs , two -minute
songs , and three -minute songs that add up to the total number of each song type , and
then calculates the absolute difference between the concerts for each combination. The
minimum difference is then returned.
However , this naive solution has a time complexity of O(n^3), where n is the total
number of each song type. This is because there are O(n) combinations for each song
type , and we need to try all combinations. This solution may not be efficient for large
inputs.
To improve the solution , we can use a more efficient algorithm to generate combinations
. For example , we can use the concept of partitioning , which is a more efficient way to
divide a set of objects into two subsets such that the sum of objects in each subset
is equal.
We can also use dynamic programming to store the intermediate results and avoid
redundant calculations.
However , the problem constraints are not large , so the naive solution is sufficient to
pass all tests within a reasonable time.
Example 2: Same code twice
```python
import sys
def solve():
n = int(input())
a = list(map(int , input().split ()))
max_k = 1
for i in range (30):
count = sum((x >> i) & 1 for x in a)
max_k = max(max_k , count)
possible_k = set()
for k in range(1, max_k + 1):
possible_k.add(k)
print(* sorted(possible_k))
t = int(input())
for _ in range(t):
solve()
```
** Weaknesses of the first solution :**
1. The solution uses a brute -force approach to count the number of elements with a 1 at
each bit position , which has a time complexity of O(n \* m), where n is the length of
the array and m is the number of bit positions.
2. The solution uses a set to store the possible values of k, which has a time
complexity of O(n) for adding and removing elements.
3. The solution uses a single print statement to print all the possible values of k,
which has a time complexity of O(n).
** Second solution :**
```python
import sys
def solve():
n = int(input())
a = list(map(int , input().split ()))
max_k = 1
for i in range (30):
count = sum((x >> i) \& 1 for x in a)
max_k = max(max_k , count)
possible_k = set()
for k in range(1, max_k + 1):
possible_k.add(k)
print(* sorted(possible_k))
def main():
t = int(input())
for _ in range(t):
solve()
if __name__ == "__main__ ":
main()
```
** Improvements :**
1. The second solution uses the same algorithm as the first solution , but it separates
the main logic into a separate function `solve()` and the input/output logic into a
separate function `main()`. This makes the code more modular and easier to understand.
2. The second solution uses the same data structure (set) to store the possible values
of k, but it uses a more efficient algorithm to find the maximum possible value of k.
Instead of iterating over each bit position , it uses a single pass over the array to
find the maximum possible value of k.
3. The second solution uses the same print statement to print all the possible values
of k, but it uses a more efficient algorithm to print the values in increasing order.
Instead of using a single print statement , it uses a loop to print each value
separately.
**Time complexity :**
The time complexity of the first solution is O(n \* m), where n is the length of the
array and m is the number of bit positions.
The time complexity of the second solution is O(n + m), where n is the length of the
array and m is the number of bit positions. This is because the solution uses a single
pass over the array to find the maximum possible value of k, and then uses a loop to
print each value separately.
**Space complexity :**
The space complexity of both solutions is O(m), where m is the number of bit positions.
This is because the solutions use a set to store the possible values of k, and the
size of the set is proportional to the number of bit positions.
F DE-CONTAMINATION BETWEEN CODECONTESTS AND TACO
We found that there is a non-zero overlap between the CodeContests training set and the TACO test
set. Therefore, after gathering successful trajectories from Llama 3.1 70B on CodeContests training
set, we further conduct de-contamination to filter out solutions to the problems that overlap with
problems in TACO test set. We mined the contaminated problems as follows.
We note that exact string matching would leave a lot of contamination undetected due to the
different LaTeX parsing and formatting between benchmarks. We therefore use an off-the-shelf sentence
embedding model to compute sentence similarity between problem statement from CodeContests
training set and TACO test set. For each problem Ptaco in TACO test set, we set the threshold of
sentence similarity to 0.8 to obtain similar CodeContests problems {PCodeContests}. We take the first 5
solutions from Ptaco, and run each solution against all the unit tests available of each similar problem
_PCodeContests. If any of the solutions passes the unit tests, we label this as a contamination._
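The check can be sketched as follows; the 0.8 similarity threshold and the use of the first five TACO solutions follow the text, while the specific embedding model and the `passes_all_tests` helper are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for an off-the-shelf sentence encoder

def is_contaminated(taco_problem, codecontests_problems, passes_all_tests, threshold=0.8):
    """Label a TACO test problem as contaminated if one of its first 5 solutions
    passes the unit tests of a sufficiently similar CodeContests training problem."""
    q = embedder.encode(taco_problem["statement"], convert_to_tensor=True)
    for cc in codecontests_problems:
        c = embedder.encode(cc["statement"], convert_to_tensor=True)
        if util.cos_sim(q, c).item() < threshold:
            continue  # statements not similar enough
        for solution in taco_problem["solutions"][:5]:
            if passes_all_tests(solution, cc["unit_tests"]):
                return True
    return False
```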
Our dataset mined from the Llama 3.1 70B output on CodeContests comprises solutions to 7238
problems in the training set. We detect 288 problems that are contaminated with the TACO test set,
resulting in solutions to 6950 problems after filtering. After the LSH-based de-duplication, this process removes in total 6422 entries from the single-turn trajectories and 7463 entries from the
multi-turn trajectories.
We show an example of contaminated problem in CodeContests training set and TACO test set below.
Contaminated CodeContests Training Set Problem
You have an array a with length n, you can perform operations. Each operation is like
this: choose two adjacent elements from a, say x and y, and replace one of them with
gcd(x, y), where gcd denotes the [greatest common divisor ]( https ://en.wikipedia.org/
wiki/Greatest_common_divisor).
What is the minimum number of operations you need to make all of the elements equal to
1?
Input
The first line of the input contains one integer n (1 ≤ n ≤ 2000) - the number of elements in the array.
The second line contains n space separated integers a1, a2, . . ., an (1 ≤ ai ≤ 10^9) - the elements of the array.
Output
Print -1, if it is impossible to turn all numbers to 1. Otherwise , print the minimum
number of operations needed to make all numbers equal to 1.
Examples
Input
5
2 2 3 4 6
Output
5
Input
4
2 4 6 8
Output
-1
Input
3
2 6 9
Output
4
Note
In the first sample you can turn all numbers to 1 using the following 5 moves:
- [2, 2, 3, 4, 6].
- [2, 1, 3, 4, 6]
- [2, 1, 3, 1, 6]
- [2, 1, 1, 1, 6]
- [1, 1, 1, 1, 6]
- [1, 1, 1, 1, 1]
We can prove that in this case it is not possible to make all numbers one using less
than 5 moves.
Contaminated TACO Test Set Problem
You have an array a with length n, you can perform operations. Each operation is like
this: choose two adjacent elements from a, say x and y, and replace one of them with
gcd(x, y), where gcd denotes the greatest common divisor.
What is the minimum number of operations you need to make all of the elements equal to
1?
-----Input -----
The first line of the input contains one integer n (1 ≤ n ≤ 2000) - the number of elements in the array.
The second line contains n space separated integers a_1, a_2, . . ., a_n (1 ≤ $a_{i}$ ≤ $10^9$) - the elements of the array.
-----Output -----
Print -1, if it is impossible to turn all numbers to 1. Otherwise , print the minimum
number of operations needed to make all numbers equal to 1.
-----Examples -----
Input
5
2 2 3 4 6
Output
5
Input
4
2 4 6 8
Output
-1
Input
3
2 6 9
Output
4
-----Note -----
In the first sample you can turn all numbers to 1 using the following 5 moves:
[2, 2, 3, 4, 6]. [2, 1, 3, 4, 6] [2, 1, 3, 1, 6] [2, 1, 1, 1, 6] [1, 1, 1, 1, 6]
[1, 1, 1, 1, 1]
We can prove that in this case it is not possible to make all numbers one using less
than 5 moves.
G CONTAMINATION OF TACO TRAINING SET AND TEST SET
We also find that there are non-zero overlaps between TACO training set and test set. These overlaps,
despite having different URLs, have near-identical problem statements. We find that this could be
attributed to the fact that on the Codeforces platform, harder problems from easy contest (div2) could
appear also in harder contest (div1) as easier problems. We show an example below, in which in
[training set the problem URL is https://codeforces.com/problemset/problem/841/C and in](https://codeforces.com/problemset/problem/841/C)
[test set it is https://codeforces.com/problemset/problem/840/A.](https://codeforces.com/problemset/problem/840/A)
Contaminated TACO Training Set Problem
Leha like all kinds of strange things. Recently he liked the function F(n, k). Consider
all possible k-element subsets of the set [1, 2, . . . , n]. For subset find minimal
element in it. F(n, k) - mathematical expectation of the minimal element among all k-
element subsets.
But only function does not interest him. He wants to do interesting things with it. Mom
brought him two arrays A and B, each consists of m integers. For all i, j such that 1 ≤ i, j ≤ m the condition Ai ≥ Bj holds. Help Leha rearrange the numbers in the array A so that the sum <image> is maximally possible, where A' is already rearranged array.
Input
First line of input data contains single integer m (1 ≤ m ≤ 2·10^5) - length of arrays A and B.
Next line contains m integers a1 , a2 , . . . , am (1 ≤ ai ≤ 109) - array A.
Next line contains m integers b1 , b2 , . . . , bm (1 ≤ bi ≤ 109) - array B.
Output
Output m integers a'1, a'2, . . . , a'm - array A' which is permutation of the array A.
Examples
Input
5
7 3 5 3 4
2 1 3 2 3
Output
4 7 3 5 3
Input
7
4 6 5 8 8 2 6
2 1 2 2 1 1 2
Output
2 6 4 5 8 8 6
Contaminated TACO Test Set Problem
Leha like all kinds of strange things. Recently he liked the function F(n, k). Consider
all possible k-element subsets of the set [1, 2, . . . , n]. For subset find minimal
element in it. F(n, k) - mathematical expectation of the minimal element among all k-
element subsets.
But only function does not interest him. He wants to do interesting things with it. Mom
brought him two arrays A and B, each consists of m integers. For all i, j such that 1 ≤ i, j ≤ m the condition A_{i} ≥ B_{j} holds. Help Leha rearrange the numbers in the array A so that the sum $\sum_{i = 1}^{m} F(A_{i}^{\prime}, B_{i})$ is maximally possible, where A' is already rearranged array.
-----Input -----
First line of input data contains single integer m (1 ≤ m ≤ 2·10^5) - length of arrays A and B.
Next line contains m integers a_1 , a_2 , . . . , a_{m} (1 ≤ a_{i} ≤ 10^9) - array A.
Next line contains m integers b_1 , b_2 , . . . , b_{m} (1 ≤ b_{i} ≤ 10^9) - array B.
-----Output -----
Output m integers a'_1, a'_2, . . ., a'_{m} - array A' which is permutation of the array A.
-----Examples -----
Input
5
7 3 5 3 4
2 1 3 2 3
Output
4 7 3 5 3
Input
7
4 6 5 8 8 2 6
2 1 2 2 1 1 2
Output
2 6 4 5 8 8 6
| [
"Kunhao, Zheng",
"Juliette, Decugis",
"Jonas, Gehring",
"Taco, Cohen",
"Gabriel, Synnaeve",
"Benjamin, Negrevergne"
] | 2024-10-10T00:00:00 | ICLR 2025 Submission | false | 0 | 0 | null | http://arxiv.org/abs/2410.08105 | https://arxiv.org/abs/2410.08105 | https://www.semanticscholar.org/paper/60b1cb425dce9522c34e403e5bbf7a795d4c21e3 |
What is my math transformer doing?–Three results on interpretability and generalization | N/A | null | ## What is my math transformer doing? Three results on interpretability and generalization
François Charton
Meta AI
Abstract
This paper investigates the failure cases and out-of-distribution behavior of transformers trained on matrix inversion and eigenvalue decomposition. I show that incorrect model predictions still retain deep mathematical properties of the solution
(e.g. correct eigenvalues, unit norm of eigenvectors), and that almost all model
failures can be attributed to, and predicted from, properties of the problem or solution. This demonstrates that, when in doubt, math transformers do not hallucinate
absurd solutions (as was sometimes proposed) but remain “roughly right”. I also
show that the careful choice of a training dataset can accelerate training, while
allowing the model to generalize out of its training distribution, invalidating the
idea that transformers “merely interpolate” from memorized examples.
Introduction
Transformer-based AI for mathematics is a fast-developing field. Over recent years, transformers
were applied to a wide range of problems: arithmetic [9], linear algebra [2], polylogarithm identities [3], symbolic integration [6], symbolic regression [1] and theorem proving [10]. Meanwhile,
limitations of transformers were found, which may restrict their use in maths and science. In this
paper, I challenge three commonly discussed limitations, namely:
- that transformers are black boxes, and there is no way to know how they solve a problem.
In mathematics, this means one cannot tell whether the model has learned the abstract
concepts needed to solve the problem, or just interpolates between memorized training
examples.
- that transformers have no sense of the correctness of their results. They sometimes hallucinate absurd solutions, instead of remaining “roughly right” or admitting failure.
- that trained transformers are brittle, and struggle with out-of-domain generalization. In
mathematics, the procedure used to generate the training data heavily influences the problems that the model can solve accurately.
Experimenting with three problems of linear algebra, eigenvalue calculation, diagonalisation and
matrix inversion, in the setting described by [2], I show that mathematical properties are indeed
learned by transformers, and that their failure cases can be understood and predicted. I also show
that by carefully selecting the training dataset, I can improve model performance and generalize far
away from the training distribution, challenging the idea that transformers “merely interpolate”.
2 What is my model doing? Learning the spectral theorem.
In the diagonalization task (“eigenvectors” in [2]), a model is trained to decompose a symmetric
5 × 5 matrix M, by predicting a vector Λ ∈ R[5] (with λ1 ≥ λ2 ≥ . . . ≥ λ5) and a 5 × 5 matrix H such that H [T] MH = diag(Λ). Theory [4] tells us that the coordinates of Λ are the eigenvalues
of M, and the columns of H the corresponding eigenvectors. Besides, H is orthogonal, that is,
H [−][1] = H [T], or, equivalently, all its rows and columns have unit norm and are mutually orthogonal.
Because its coordinates are sorted, Λ is unique. The columns of H, on the other hand, are defined
up to a sign change (or a transformation from the symmetry group O(k) when k eigenvalues are
equal).
As in [2], a sequence-to-sequence transformer (see appendix B for details) is trained to predict
the decomposition (Λ, H) of a matrix M . During training, the model minimizes the cross-entropy
between its predictions and the sequences representing Λ and H. At test time, model accuracy is
defined as the quality of the diagonalisation, i.e. whether ∥H [T] MH − Λ∥/∥Λ∥ < τ (using the L[1]
norm, and with tolerance τ = 5%). In this experiment, the model is trained from examples only,
and no problem-specific inductive bias is introduced, either in the architecture or in the training
procedure. To determine if some of the theoretical properties of diagonalization are learned, I run
the trained model on a test set of 50000 random matrices, and investigate its predictions.
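As a point of reference, the accuracy criterion can be written in a few lines of numpy; this is only a sketch and assumes the model outputs have already been decoded into arrays.

```python
import numpy as np

def diagonalization_correct(M, Lambda_pred, H_pred, tau=0.05):
    """Check whether H^T M H is within tau of diag(Lambda) in relative L1 norm."""
    residual = H_pred.T @ M @ H_pred - np.diag(Lambda_pred)
    return np.abs(residual).sum() / np.abs(Lambda_pred).sum() < tau
```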
The model achieves an accuracy of 92.0%. However, in 99.9% of the test cases, the eigenvalues of
the input matrix M are predicted with less than 1% relative error (in L[1] norm), and within 0.5%
in 96.1% of test cases. Also, in 98.9% of the test cases, the norms of all rows and columns in the
predicted H are in the interval [0.99, 1.01], as theory dictates. These two mathematical properties of
diagonalization, i.e. that Λ is the eigenvalues, and that the columns of H have unit norm, have been
learned by the model. They are verified even in incorrect predictions.
In this experiment, the model achieves high in-domain accuracy, but similar results are observed
in weaker models. On a “half-trained” model that only achieves 70% accuracy, the eigenvalues
are predicted (within 1%) in 99.6% of the test cases, and all rows and columns have unit norms in
96.7%. For larger matrices (6 × 6), the model achieves a meager 43% accuracy. Yet, eigenvalues
are predicted within 1% in 99.6% of the test cases, and rows and columns of H have unit norm in
93.1%.
Theory predicts that the rows and columns of H should be orthogonal. This property can be quantified by computing the dot products between successive normalized rows and columns of H. The
dot products are second order approximations of the difference between π/2 and the angle between
vectors (which should be zero if H is orthogonal). On the test set, all angles are within 0.1 radians
(5.7[◦]) of π/2 in 95.2% of test cases, and 0.05 radians (2.9[◦]) in 93.6%. The lack of orthogonality
between lines and columns accounts for almost all failure cases: in 99.5% of successful model predictions, all angles between successive rows and columns are less than 0.03 radians, and H is close
to orthogonal. On the other hand, one angle is larger than 0.03 radians in 90% of model failures.
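The three property checks used in this section (eigenvalue accuracy, unit norms, near-orthogonality of successive vectors) can be sketched as follows; the thresholds are those quoted in the text, and the helper itself is not the paper's code.

```python
import numpy as np

def property_checks(M, Lambda_pred, H_pred):
    # eigenvalues: relative L1 error between the predicted Lambda and the true spectrum
    true_eigs = np.sort(np.linalg.eigvalsh(M))[::-1]      # descending order, as in the task
    eig_error = np.abs(Lambda_pred - true_eigs).sum() / np.abs(true_eigs).sum()
    # unit norms of all rows and columns of the predicted H
    norms = np.concatenate([np.linalg.norm(H_pred, axis=0), np.linalg.norm(H_pred, axis=1)])
    unit_norms = np.all((norms > 0.99) & (norms < 1.01))
    # near-orthogonality: dot products of successive normalized columns (rows are analogous)
    Hn = H_pred / np.linalg.norm(H_pred, axis=0, keepdims=True)
    dots = [abs(float(Hn[:, i] @ Hn[:, i + 1])) for i in range(Hn.shape[1] - 1)]
    near_orthogonal = max(dots) < 0.05                     # within ~0.05 rad of pi/2
    return eig_error < 0.01, unit_norms, near_orthogonal
```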
These experiments teach us three lessons about math transformers. First, deep mathematical properties are learned during training: all eigenvalues are correctly predicted, and all columns of H have
unit norms, even when the model fails to predict the correct diagonalisation, and even for models
with low accuracy (half-trained, or trained on harder problems). Second, math transformers do not
seem to hallucinate absurd solutions. Even when the model fails, Λ is correct, and H is close to
orthogonal. Finally, they provide a simple mathematical explanation for almost all model failures.
3 Predicting failure: verifiers for math transformers
On the diagonalization task, almost all incorrect model predictions can be attributed to H not being
orthogonal. From this observation, a useful statistic for predicting model failure can be derived: the
condition number of H (i.e. the ratio of its largest and smallest singular values, henceforth c(H)).
When H is orthogonal, we have c(H) = 1 (else c(H) > 1). Over the 50 000 test cases, correct
model predictions have an average condition number of 1.01 (with a standard deviation of 0.0065).
For model failures, the average condition number is 1.28. Using the rule c(H) < 1.045, 99.3% of
model successes and failures can be predicted. More precisely, we have c(H) < 1.045 in 99.94%
of correct predictions, and c(H) > 1.045 in 96.7% of model failures.
A similar situation arises for 5×5 matrix inversion. Over a test set of 50 000 examples, a transformer
has an accuracy of 89.0%. As in [2], accuracy is defined by how close the product of the model
prediction P and the input matrix M is to identity, i.e. ∥PM − I∥/∥I∥ < τ (τ = 5%). But we can
also compute the L[1] distance between the model prediction and the inverse ∥P − M [−][1]∥/∥M [−][1]∥ <
τ . On this metric, accuracy is 98.2% with 5% tolerance, and 99.6% with 25%. When in doubt, the
model does not hallucinate, but provides a rough approximation to the correct solution M [−][1].
This provides us with a complete mathematical explanation of model failure for the inversion task.
Whereas the model fails on 11% of test cases, its predictions are within 5% of the correct solution in
98.2%, and in 84% of failures ((98.2 − 89)/11). In such cases, the model predicts an approximation
of M [−][1] that turns out not to be a “good inverse” of M . We know from theory that this happens when
M has a large condition number c(M ), and therefore we can use c(M ) to predict model failure. On
the test set, the matrices correctly inverted by the model have an average condition number of 15.8
(with a standard deviation of 13.3). For model failures, the average condition number is 640.5. The
decision rule c(M ) < 62 predicts model success in 98.0% of cases, and we have c(M ) < 62 in
99.0% of correct predictions, and c(M ) > 62 in 89.8% of failures. Note that for this task, we do
not even need to run the model, since success can be predicted from its input M only.
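Both decision rules amount to one-line verifiers; the thresholds 1.045 and 62 are the values quoted above, and everything else is an illustrative sketch.

```python
import numpy as np

def predict_diagonalization_success(H_pred, threshold=1.045):
    # condition number of the predicted H: ratio of its largest and smallest singular values
    return np.linalg.cond(H_pred) < threshold

def predict_inversion_success(M, threshold=62.0):
    # input-only verifier: success is predicted from the condition number of M itself
    return np.linalg.cond(M) < threshold
```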
These experiments indicate that verifiers, external routines that can predict a model success from
its input or output, can be computed from problem-specific statistics. In linear algebra, this is of
little practical interest because model predictions can be checked in a few matrix multiplications.
Verifiers, however, are important in some areas of mathematics (e.g. theorem proving).
4 Out-of-domain generalization and the role of generators
On the eigenvalue computation task, I have shown, in [2], that models trained on Wigner matrices
(with eigenvalues distributed as a semicircle law) do not generalize to test sets with different distributions of eigenvalues (uniform, Gaussian, Laplace, or positive). On the other hand, models trained
on matrices with Laplace distributed eigenvalues (Laplace models, henceforth) generalize to all test
sets.
Table 1 presents additional results for seven eigenvalue distributions (semi-circle, uniform, Gaussian,
Laplace, absolute-semicircle, absolute-Laplace, and Marchenko-Pastur, see Appendix B.2). In the
first four, eigenvalues are symmetrically distributed around zero. In the last three, all eigenvalues are
positive. Also, the semicircle, uniform, absolute semicircle and Marchenko-Pastur distribution have
bounded support, whereas the Gaussian, Laplace and absolute Laplace allow for large eigenvalues.
Semi-circle Uniform Gaussian Laplace abs-sc abs-Lapl Marchenko
Semi-circle 100 34 36 39 1 5 0
Uniform 93 100 76 70 92 70 2
Gaussian 100 100 100 100 100 100 99
Laplace 100 100 100 100 100 100 100
Abs-semicircle 0 5 4 4 100 78 20
Abs-Laplace 0 4 5 5 100 100 100
Marchenko-Pastur 0 4 4 4 100 76 100
Table 1: Out-of-distribution generalization. Eigenvalues of 5x5 matrices. Rows are the training distributions, columns the test distributions.
The Wigner ensemble, the obvious default choice for random matrices, turns out to be the worst for
out-of-distribution generalization. On the other hand, the Gaussian or Laplace models generalize
to all test sets. Models trained on positive eigenvalue distributions do not generalize to symmetric (non-positive) test distributions, because negative eigenvalues were never encountered during
training (the 4 to 5% performance achieved by positive models on the Laplace, Gaussian and Uniform ensembles roughly corresponds to the number of positive matrices in the test set). But models
trained on symmetric distributions can generalize to positive matrices. Finally, it is interesting to
note that models trained on distributions with compact support (semi-circle, uniform, abs-semicircle
and Marchenko-Pastur) generalize less well than their unbounded counterparts.
Besides generalizing better, the Laplace and Gaussian models are more data efficient. To achieve
99% accuracy on a Wigner (semi-circle) test set, the Gaussian model needs 2.4 million training
examples, the Laplace model 2.7 and the semi-circle model 3.6. On a test set of positive matrices,
the Gaussian and Laplace model achieve 99% accuracy in 2.1 and 2.4 million examples, the positive
model in 3.9 million (see Table 6 in Appendix A.2). As problem dimension increases, so does the
advantage of Gaussian and Laplace models. On 8 × 8 matrices (Table 2), Gaussian and Laplace
models achieve 99% accuracy on a semi-circle test set after 11.4 and 13.2 million examples. After
36 million examples, our best uniform and semicircle models only achieve 91 and 0.5% accuracy.
With deeper encoders (8 and 12 layers), the Laplace and Gaussian models can predict the eigenvalues
of 10 × 10 Wigner matrices with 100% accuracy (in 12.9 and 23.1 million examples; larger models
allow for faster learning). The best (semicircle) models reported in [2] only achieve 25% accuracy
after 360 million examples.
Semi-circle Uniform Gaussian Laplace abs-sc abs-Lapl Marchenko
8x8 matrices
Semicircle 0 0 0 0 0 0 0
Uniform 91 100 65 57 89 55 0
Gaussian 100 100 100 99 100 99 41
Laplace 100 100 100 100 100 100 97
Abs-semicircle 0 1 1 0 100 53 0
Abs-Laplace 0 1 1 1 100 100 98
Marchenko-Pastur 0 0 0 0 1 1 20
10x10 matrices
Gaussian (12/1 layers) 100 100 100 98 100 97 3
Laplace (8/1 layers) 100 100 100 100 100 100 74
Table 2: Out-of-distribution generalization. Eigenvalues of 8x8 and 10x10 matrices, accuracy after 36
million examples. Rows are the training distributions, columns the test distributions.
Achieving 100% accuracy on test sets of positive matrices, with Laplace or Gaussian models, rules
out the idea that transformers interpolate between memorized examples. For 8 × 8 and 10 × 10
matrices, there is almost no overlap between the training and test sets: the probability of a Gaussian
or Laplace matrix having only positive eigenvalues is 0.4% and 0.1% respectively.
I obtain similar results when diagonalizing 5 × 5 matrices (Table 3). After training on 80 million
examples, the best models achieve 94% accuracy on the semicircle test set. As with the eigenvalue
task, the semicircle model does not generalize out of distribution, and the Gaussian and Laplace
generalize to all test distributions, and achieve about 80% accuracy. Previous observations on data
efficiency also apply: on the semicircle test set, the Laplace and Gaussian models need 37 and 45
million examples to achieve 90% accuracy, whereas the semicircle model needs 50 million (see
Table 7 in Appendix A.2).
Semi-circle Uniform Gaussian Laplace abs-sc abs-Lapl Marchenko
Semicircle 93 15 18 18 0 0 0
Uniform 91 80 62 56 81 50 2
Gaussian 94 80 81 77 84 69 80
Laplace 94 79 81 78 84 70 81
Abs-semicircle 0 3 2 2 82 51 15
Abs-Laplace 0 2 3 3 79 71 82
Marchenko-Pastur 0 1 2 2 64 42 88
Table 3: Out-of-distribution generalization. Diagonalization of 5x5 matrices. Rows are the training distributions, columns the test distributions.
Finally, experiments with symmetric matrix inversion (Appendix A.1) confirm that Gaussian and
Laplace distributions generalize better, and that models trained on positive matrices only generalize
to positive test sets. This suggests that the choice of a good training distribution might not be task-specific, and that some distributions may generalize out-of-domain for a large class of problems.
5 Conclusion
Experimenting with three problems of linear algebra, I have shown that transformers can learn mathematical properties: all their predictions, correct or not, satisfy some properties (correct eigenvalues
and unit vectors for diagonalization). Also, model failures do not happen at random, and can be
predicted from the input or the predicted solution. Finally, I show that selecting an appropriate training set improves both out-of-distribution generalization, and model performance and data efficiency.
These experiments were designed by leveraging the mathematical theory of random matrices and
linear algebra. This demonstrates how mathematical problems can be used as frameworks for understanding transformers, trying to explain their predictions, and investigating the conditions under
which they generalize. I believe this is a promising direction for future research.
References
[1] Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, and Giambattista Parascandolo. Neural symbolic regression that scales. arXiv preprint arXiv:2106.06427, 2021.
[2] François Charton. Linear algebra with transformers. arXiv preprint arXiv:2112.01898, 2021.
[3] Aurélien Dersy, Matthew D. Schwartz, and Xiaoyuan Zhang. Simplifying polylogarithms with
machine learning. arXiv preprint arXiv:2206.04115, 2022.
[4] Gene H. Golub and Charles F. van Loan. Matrix Computations. JHU Press, fourth edition,
2013.
[5] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv
preprint arXiv:1412.6980, 2014.
[6] Guillaume Lample and François Charton. Deep learning for symbolic mathematics. arXiv
preprint arXiv:1912.01412, 2019.
[7] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv
preprint arXiv:1608.03983, 2016.
[8] Madan Lal Mehta. Random Matrices. Academic Press, 3rd edition, 2004.
[9] Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers
with simple arithmetic tasks. arXiv preprint arXiv:2102.13019, 2021.
[10] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving. arXiv preprint arXiv:2009.03393, 2020.
[11] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010, 2017.
[12] Sean Welleck, Peter West, Jize Cao, and Yejin Choi. Symbolic brittleness in sequence models:
on systematic generalization in symbolic mathematics. In Proceedings of the AAAI Conference
on Artificial Intelligence, volume 36, pages 8629–8637, 2022.
[13] Gal Yehuda, Moshe Gabel, and Assaf Schuster. It’s not what machines can learn, it’s what we
cannot teach. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International
Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research,
pages 10831–10841. PMLR, 13–18 Jul 2020.
Appendix
A Additional results
A.1 Out-of-distribution generalization, symmetric matrix inversion
In the eigenvalue and diagonalization tasks, out-of-distribution (ood) experiments indicate that the
most robust models are trained on ensembles of matrices with long-tailed eigenvalue distributions
(Laplace and Gaussian). This may suggest that ood generalization happens when models are trained
on datasets that contain more “edge cases” for this specific problem – large absolute eigenvalues,
here. This would make the choice of a good (i.e. robust) training set a problem-specific issue.
To test this hypothesis, I experiment with the inversion of symmetric matrices. As discussed in
section 3, the “edge cases” for this task are matrices with large condition numbers – the ratio of
the largest and smallest absolute eigenvalues in this particular case. If the “edge case” hypothesis
were true, we would expect distributions with a larger range of condition numbers to generalize best.
Table 4 provides statistics about the distribution of condition numbers in our seven training and test
sets. Since the uniform distribution has smaller (and less variable) condition numbers, we should
expect it to generalize worst. On the other hand, the Laplace and the Marchenko-Pastur, having a
broad range of condition numbers, should generalize out of distribution.
Median Third quartile 90th percentile
Semi-circle 9.4 20.4 52.0
Uniform 6.3 14.8 38.9
Gaussian 9.0 21.2 57.4
Laplace 14.1 34.5 99.5
abs-semicircle 9.5 20.6 51.7
abs-Laplace 14.3 35.4 98.3
Marchenko-Pastur 190 885 5293
Table 4: Distribution of condition numbers. On a set of 10000 randomly generated 5x5 symmetric matrices.
Table 5 presents results for 5 × 5 symmetric matrices. As in previous experiments, models trained
on positive matrices only generalize to positive test sets (the reverse being false). Models trained on
the uniform set, which has the smallest condition numbers, generalize just as well as the Gaussian
and Laplace models, which have the largest condition numbers. This invalidates our hypothesis. We
also note that while matrix inversion is only loosely related to eigenvalues and their distribution, the
Laplace model performs best on this task as well. This result needs to be confirmed, but it does
suggest that certain ensembles of matrices (Laplace and Gaussian) are robust for several tasks of
linear algebra.
Semi-circle Uniform Gaussian Laplace abs-sc abs-Lapl Marchenko
Semi-circle 81 18 25 26 1 17 0
Uniform 67 76 63 45 76 50 2
Gaussian 62 72 63 45 71 51 5
Laplace 65 75 65 49 76 58 7
Abs-semicircle 0 2 2 2 84 59 5
Abs-Laplace 0 3 2 2 87 75 17
Marchenko-Pastur 0 3 3 2 85 66 16
Table 5: Generalization with different generators. Inversion of 5x5 symmetric matrices. Rows are training
data, columns test data.
A.2 Out-of-distribution results: learning speeds
Table 6 indicates the number of training samples needed for a model to achieve 99% accuracy on
the eigenvalue task. On both the semi-circle and positive test sets, Gaussian and Laplace models are
more data-efficient than models trained on the test distribution. On the positive test set (eigenvalues
distributed as the absolute value of a semi-circle law), the absolute Laplace is the most data-efficient
of the three models trained on positive matrices. Absolute Laplace requires about 33% fewer examples
than absolute semicircle (just like Laplace vs semi-circle in the non-positive case).
Semi-circle Absolute Semi-circle
Semi-circle 3.6 -
Uniform - -
Gaussian 2.4 2.1
Laplace 2.7 2.4
Absolute semi-circle - 4.5
Absolute Laplace - 3.9
Marchenko-Pastur - 7.5
Table 6: Learning speed of different generators. Millions of examples to compute the eigenvalues of 5x5
matrices to 99% accuracy. Rows are the training distributions, columns the test distributions.
Finally, Table 7 indicates the sample size needed to achieve 90% accuracy when diagonalizing 5 × 5
matrices. Models need about ten times more data than for the eigenvalue task, but the advantage of
models trained on non-compact eigenvalue distributions (Laplace and Gaussian) remains.
Semi-circle
Semi-circle 49.5
Uniform 68.4
Gaussian 45.3
Laplace 36.9
Table 7: Learning speed of different generators. Millions of examples to compute the eigenvectors of 5x5
matrices to 90% accuracy.
B Architecture, training parameters and data sets
B.1 Architecture and training
All models used in this work are sequence-to-sequence transformers [11]. The models used to predict eigenvectors, in sections 2 and 3, have 6 layers in the encoder and one in the decoder, 512
dimensions and 8 attention heads. Their inputs are encoded with the FP15 scheme (one token per
coefficient), and their output with the P1000 (three tokens, sign, mantissa in base 1000, and exponent). The “half-trained” model with 70% accuracy used P1000 for the input and output. The model
used for matrix inversion in section 3 has the same architecture as in [2]: 6 layers, 516 dimensions
and 12 attention heads in the encoder, and 1 layer, 512 dimensions and 8 heads in the decoder. It
uses FP15 for its input, and P1000 for its output. In out-of-distribution experiments, models have 6
layers in the encoder and 1 in the decoder; and either P1000 in the encoder and decoder or FP15 in
the encoder and P1000 in the decoder.
Models are trained to minimize the cross-entropy between their prediction and the correct solution,
encoded as sequences. They use the Adam optimiser [5], on batches of 64 examples, with a learning
rate of 0.0001, a linear warmup phase of 10000 optimisation steps, and cosine scheduling with a
period of 4000000 [7].
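As a rough illustration only, the headline configuration could be instantiated with PyTorch's generic `nn.Transformer`; tokenization, the FP15/P1000 encodings, the warmup phase, and the full training loop are omitted, and this is not the actual code base.

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Transformer(
    d_model=512, nhead=8,
    num_encoder_layers=6, num_decoder_layers=1,
    batch_first=True,
)
optimizer = optim.Adam(model.parameters(), lr=1e-4)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=4_000_000)
loss_fn = nn.CrossEntropyLoss()
```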
B.2 Data sets
The training and test data for the interpretability and failure experiments (sections 2 and 3) are generated as in [2]. All matrices have independent, identically distributed (iid) coefficients, sampled from
a uniform law over [−10, 10]. In out-of-distribution experiments (section 4), I generate symmetric matrices with iid Gaussian coefficients, with standard deviation 10/√3 (same as the uniform law
over [−10, 10]). For n×n matrices, Gaussian coefficients guarantee that matrix eigenvectors are uniformly distributed in all directions of R[n] . Since their coefficients are iid, these are Wigner matrices,
and their eigenvalues are distributed according to a semi-circle law [8]. To generate uniform, Gaussian and Laplace distributed matrices, I decompose M into their eigenvalues Λ and eigenvectors H,
replace the eigenvalues by Λ2, sampled from another distribution, and reassemble M = HΛ2H [T] . I
take the absolute values of Λ for the abs-semicircle distribution, and those of Λ2 for the abs-Laplace.
For the Marchenko-Pastur distribution, I sample a matrix N with Gaussian iid coefficients, with standard
deviation √(10/√3), and compute M = N [T] N. All matrices are encoded using the P1000 and FP15
schemes from [2].
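A sketch of this generation procedure in numpy is given below; the resampling step and the Marchenko-Pastur construction follow the text, while the sampler scales shown here are illustrative.

```python
import numpy as np

def symmetric_gaussian(n, scale=10 / np.sqrt(3)):
    """Symmetric matrix with iid Gaussian coefficients (a Wigner matrix)."""
    A = np.random.normal(0.0, scale, size=(n, n))
    return np.triu(A) + np.triu(A, 1).T

def resample_eigenvalues(M, sampler):
    """Replace the eigenvalues of a symmetric M while keeping its eigenvectors."""
    _, H = np.linalg.eigh(M)
    return H @ np.diag(sampler(M.shape[0])) @ H.T

# e.g. a 5x5 matrix whose eigenvalues follow a Laplace distribution
M_laplace = resample_eigenvalues(symmetric_gaussian(5),
                                 lambda k: np.random.laplace(0.0, 10.0, size=k))

# Marchenko-Pastur: M = N^T N with Gaussian iid N
N = np.random.normal(0.0, np.sqrt(10 / np.sqrt(3)), size=(5, 5))
M_mp = N.T @ N
```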
C Related works
This paper builds on [2], which introduces the experiments, and provides initial results on out-of-distribution (OOD) generalization for the eigenvalues of 5 × 5 matrices. I introduce a new task,
inversion of symmetric matrices, conduct experiments on model failures, and expand the OOD results to larger matrices, and to two new tasks: diagonalization and matrix inversion.
The importance of data generators in math transformers was first stressed by Lample and Charton
[6]. When performing symbolic integration, they noticed that models trained on data generated by
differentiating random functions performed badly on test examples generated by integrating random
functions (and vice versa). Welleck et al. [12] provides additional results on the lack of robustness
of models trained to compute integrals.
Yehuda et al. [13] explore the theoretical limitations of models trained from synthetic mathematical
data. They argue that model performance is limited by the training data: which instances of the
problem the generator can provide. We believe our results might stand as a counter-example: if
“long range” out-of-distribution is possible (as suggested by our experiments), then it might be
possible to solve hard instances of a problem, with a model trained on solvable instances.
| [
"François, Charton"
] | 2022-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
What makes math problems hard for reinforcement learning: a case study | Using a long-standing conjecture from combinatorial group theory, we explore, from multiple angles, the challenges of finding rare instances carrying disproportionately high rewards. Based on lessons learned in the mathematical context defined by the Andrews-Curtis conjecture, we propose algorithmic improvements that can be relevant in other domains with ultra-sparse reward problems. Although our case study can be formulated as a game, its shortest winning sequences are potentially $10^6$ or $10^9$ times longer than those encountered in chess. In the process of our study, we demonstrate that one of the potential counterexamples due to Akbulut and Kirby, whose status escaped direct mathematical methods for 39 years, is stably AC-trivial. | null | **WHAT MAKES MATH PROBLEMS HARD FOR**
**REINFORCEMENT LEARNING: A CASE STUDY**
A. SHEHPER, A. MEDINA-MARDONES, B. LEWANDOWSKI, A. GRUEN, P. KUCHARSKI,
AND S. GUKOV
Abstract. Using a long-standing conjecture from combinatorial group theory,
we explore, from multiple angles, the challenges of finding rare instances
carrying disproportionately high rewards. Based on lessons learned in the
mathematical context defined by the Andrews–Curtis conjecture, we propose
algorithmic improvements that can be relevant in other domains with ultrasparse reward problems. Although our case study can be formulated as a game,
its shortest winning sequences are potentially 10[6] or 10[9] times longer than
those encountered in chess. In the process of our study, we demonstrate that
one of the potential counterexamples due to Akbulut and Kirby, whose status
escaped direct mathematical methods for 39 years, is stably AC-trivial.
Contents
1. Introduction
Acknowledgment
2. Andrews–Curtis conjecture
3. Classical Search Algorithms
3.1. Breadth-first search
3.2. Greedy search
3.3. Comparison
3.4. Limitations
3.5. Proof of Theorem 1
4. Reinforcement Learning
4.1. Markov Decision Process
4.2. Proximal Policy Optimization
4.3. Application to the Andrews–Curtis Conjecture
5. The Cure: New Algorithms
5.1. Supermoves
5.2. The anatomy of success
6. Isolated components and neighborhood sizes
6.1. Isolated components
6.2. Neighborhoods
7. Language Modeling
7.1. Transformers: a review
7.2. Training and Evaluation Datasets
7.3. Results
Appendix A. Hyperparameters
Appendix B. Neighborhood constructions
B.1. Neighborhoods of the identity
B.2. Neighborhoods for MS series
Appendix C. Language Modeling Dataset Generation
Funding
References

_Date: August 29, 2024._
2020 Mathematics Subject Classification. 68T09, 62R07, 55N31, 62R40.
_Key words and phrases._ reinforcement learning, large language models, topological data analysis, automated reasoning, search algorithms.
1. Introduction
We live in an extraordinary era where artificial intelligence (AI) is transforming
numerous sectors and professions. Recent advancements in Large Language Models
(LLMs) have empowered AI to read, write, and converse with a proficiency comparable to that of human experts. In the realm of board games, AI has outperformed even
the most skilled human players, and it has tackled complex scientific challenges like
protein folding, where steady progress was suddenly overtaken by a near-complete
solution. As AI continues to evolve, one critical question remains: How wide is the
range of domains in which AI systems can reason as effectively as humans?
Mathematics appears to be a natural progression on the path toward Artificial
General Intelligence (AGI) due to its universal syntactic and logical structure, similar
to that of natural language. Additionally, mathematics provides a framework for the
quantitative evaluation of logical and analytical reasoning, making it an ideal domain
for self-improving AI systems on the path to AGI. In a moment, we will explain
another reason why mathematics could play a crucial role in AGI development, but
first, we need to introduce one more key element: reinforcement learning (RL).
Machine learning, a subfield of AI, involves developing algorithms and statistical
models that enable computers to learn from data and make predictions. Among
the three primary areas of machine learning—supervised learning, unsupervised
learning, and reinforcement learning—RL emphasizes learning through interaction
with an environment and receiving feedback in the form of rewards or penalties.
This aspect of machine learning, often characterized by its focus on AI models
‘playing games,’ will be central to our discussion.
A typical chess game lasts about 30 to 40 moves, with the longest recorded
professional game reaching 269 moves, ending in a draw between Ivan Nikolic and
Goran Arsovic in 1989. Notably, the number of moves in a typical chess game is
relatively consistent, with the longest professional game having only about an order
of magnitude more moves than the average. Similarly, a typical game of Go involves
a few hundred moves, with the longest recorded professional game, played by Go
Seigen and Kitani Minoru in 1933, lasting 411 moves.
At first glance, proving or disproving mathematical conjectures can be formulated
as games. For example, proving a theorem involves finding a path from the hypothesis
to the conclusion, consisting of basic logical steps, such as Lean steps. From the RL
perspective, this type of game can be quite complex due to its large action space.
Conversely, finding examples or counterexamples to settle important conjectures
may require only a few basic moves (actions); the case study in this paper serves as
a good illustration of such a problem. In all cases, the problem is fundamentally a
search process, with complexity largely determined by the size of the action space
and the search space.
In addition, with hard research-level mathematics problems, one faces yet another
challenge: the sought-after instance can be so rare and difficult to find that the
problem effectively becomes a search for a needle in a haystack, i.e., a problem with
ultra-sparse rewards. For example, in the context of theorem proving, one might
consider an extremely hard theorem[1] that may require a very large number of steps.
If there aren’t many alternative proofs, finding even a small number of very long
ones then becomes akin to searching for a needle in a haystack or, depending on
one’s preference, a search for a unicorn.[2]
Fortunately, mathematics offers a robust framework for developing and testing
new algorithms with adaptive capabilities that dynamically ‘learn how to learn.’
Testing these algorithms on mathematical problems, rather than directly on societal
issues like market crash predictions or extreme weather events, provides a riskfree and cost-effective approach. Additionally, this method offers the dual benefit
of potentially solving hard mathematical problems and resolving long-standing
conjectures in the process.
Our approach to problems of this type involves equipping the RL model with the
ability to assess the hardness of problems during the training process. First and foremost, this requires a practically useful notion of hardness, a concept we thoroughly
explore. By learning the distribution of problems based on their difficulty, one
can enhance existing off-the-shelf algorithms with new self-improvement strategies,
identifying key features that facilitate solving the most challenging problems.
In this paper, we begin a systematic implementation of this approach by carefully
analyzing the distribution of hardness in potential counterexamples to a longstanding conjecture, the Andrews–Curtis conjecture. As with many other challenging
problems, a natural measure of hardness is the number of steps an RL agent needs
to take. What makes the Andrews–Curtis conjecture particularly suitable for our
study is that it includes numerous examples requiring a hyper-exponential number
of steps, providing an effective testbed for exploring the aforementioned questions
through the lens of RL, search algorithms, and topological data analysis.
We should emphasize that this entire project was carried out using relatively
modest computational resources that a small academic research group can easily
afford. Consequently, we placed greater emphasis on theoretical tools and techniques
that, when combined with scaled-up computational resources, can lead to further
advancements.
1A proxy for such a problem could be the Riemann Hypothesis or any other unsolved Millennium
Prize Problem.
2Similar challenges, though not as critical, also arise in non-research-level math problems; see,
e.g., [PG23; DU24; Tri+24] for recent discussion. Meanwhile, in a parallel line of development,
new benchmarks have been proposed in the past couple of years [Cob+20; OAC23], which can be
useful in such contexts.
While our primary focus is on exploring the distribution of hardness with an eye
toward algorithm development, we also resolve a particularly intriguing open case
that has eluded direct mathematical approaches for decades:
**Theorem 1. The following potential counterexample introduced by Akbulut and**
_Kirby [AK85] is stably AC-trivial:_
AK(3) = ⟨x, y | x^3 = y^4, xyx = yxy⟩.
The proof of this theorem is presented in Subsection 3.5. The rest of the paper is
organized as follows. In Section 2, we provide an overview of the Andrews–Curtis
conjecture, describing the specific version studied in this work. We then apply
classical search algorithms to examples in the Miller–Schupp series in Section 3,
where we devise a greedy search algorithm that significantly outperforms the widely
used breadth-first search algorithm. In Section 4, we employ reinforcement learning, specifically implementing the Proximal Policy Optimization (PPO) algorithm
[Sch+17], and find that while it performs better than breadth-first search, it does
not surpass the greedy search algorithm (see Figure 1). Building on these insights,
Section 5 presents several ideas for algorithm development, proposing strategies to
mitigate the overwhelming complexity faced by an RL agent in challenging problems.
In Section 6, we employ topological methods to assess the complexity of presentations. Specifically, we utilize persistent homology to define the isolation value of a
presentation. This value is infinite for counterexamples to the AC conjecture and
serves as a measure of a presentation’s resistance to being trivialized. Finally, in
Section 7, we examine the linguistic structure of balanced presentations using a
decoder-only Transformer model, observing distinct clusters within the embedding
space corresponding to presentations solved and unsolved by the greedy search
algorithm.
We encourage the reader to explore the accompanying GitHub repository:
[https://github.com/shehper/AC-Solver](https://github.com/shehper/AC-Solver)
**Acknowledgment. We would like to thank Danil Akhtiamov, Anna Beliakova,**
Jessica Craven, Michael Douglas, Konstantin Korovin, Alexei Lisitsa, Maksymilian
Manko, Ciprian Manolescu, Fabian Ruehle, Josef Urban, and Tony Yue Yu for
insightful discussions and comments. We especially want to thank Anna Beliakova
for igniting our interest in the Andrews–Curtis conjecture as a framework for
exploring problems with long and rare sequences of moves that an RL agent must
discover.
2. Andrews–Curtis conjecture
The Andrews–Curtis conjecture concerns the study of balanced presentations of
the trivial group, i.e. presentations of the trivial group with an equal number of
generators and relators. The conjecture proposes that any balanced presentation of
the trivial group
⟨x1, . . ., xn | r1, . . ., rn⟩
can be converted to the trivial presentation
⟨x1, . . ., xn | x1, . . ., xn⟩
[Figure 1a: number of presentations solved by greedy search (GS), PPO, and breadth-first search (BFS) as a function of n.]
[Figure 1b: percentage of presentations solved by greedy search (GS), PPO, and breadth-first search (BFS) as a function of the total length of a presentation.]
Figure 1. A comparison of three algorithms —breadth-first search,
greedy search, and Proximal Policy Optimization (PPO)— that
we used to search through the space of balanced presentations.
The number of presentations of the Miller–Schupp series, MS(n, w),
solved by an algorithm is given on the vertical axis. We compare
the performance as a function of n (above) and the length of
the presentation (below). Greedy Search consistently outperforms
Breadth-First Search and Proximal Policy Optimization.
through a series of the following operations known as AC-moves [AC65]:
(AC1) Substitute some ri by rirj for i ≠ j.
(AC2) Replace some ri by ri^{-1}.
(AC3) Change some ri to g ri g^{-1} where g is a generator or its inverse.
We will refer to the sum of the word lengths of all relators as the length of a
presentation. Two presentations that can be transformed into each other by a
sequence of AC-moves are said to be AC-equivalent. A presentation that is AC-equivalent to the trivial presentation is referred to as AC-trivial. Despite considerable
efforts, little progress has been made in establishing a proof of the conjecture. At
the same time, several families of potential counterexamples have been proposed in
the literature.
To investigate a given presentation, one may systematically explore the entire
space of possible sequences of AC-moves in search of a sequence that renders the
presentation trivial. This space grows exponentially with the length of the sequence.
For a presentation with n generators, there are 3n^2 AC-moves, and the total number
of sequences of AC-moves of length k is (3n^2)^k. Even for a modest case like n = 2
and k = 20, the number of possible sequences is on the order of 10^21, making a
brute-force approach impractical.
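As a quick sanity check on these counts, a few lines of Python reproduce the estimate; this is only a sketch, assuming the 3n^2 count of moves quoted above.

```python
def num_ac_moves(n: int) -> int:
    # 3n^2 AC-moves for a presentation with n generators, as quoted above.
    return 3 * n * n

def num_move_sequences(n: int, k: int) -> int:
    # Total number of AC-move sequences of length k.
    return num_ac_moves(n) ** k

print(num_move_sequences(2, 20))  # 12**20 ~ 3.8e21, i.e. on the order of 10^21
```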
Traditional search algorithms such as genetic algorithms [Mia03] and breadth-first search [HR] have been employed to search through this space and achieved
success in trivializing balanced presentations with two generators and lengths less
than 13. The following presentation of length 13,
⟨x, y | x^3 = y^4, xyx = yxy⟩
is the shortest presentation, up to AC-equivalence, that eludes all attempts at
length reduction. This presentation is a part of an infinite series of potential
counterexamples by Akbulut and Kirby [AK85]:
AK(n) = ⟨x, y | x^n = y^{n+1}, xyx = yxy⟩, n ≥ 2.
AK(2) has length 11 and has been established as AC-trivial [Mia03] whereas AK(3)
is the aforementioned presentation with length 13.
In over two decades since the first utilization of search algorithms [Mia03; HR],
only unsuccessful attempts have been made to trivialize AK(3) with different variants
of the breadth-first search algorithm using an increasing amount of computational resources [BM06; KS16; PU19]. Notably, [PU19] found that no sequence of AC-moves that allows relator lengths to increase up to 20 trivializes AK(3). This
lack of success could be interpreted as suggestive evidence that AK(3) might be a
counterexample to the Andrews–Curtis conjecture. However, recent works by Bridson
and Lishak have shown that there exist AC-trivializable balanced presentations, for
which the number of AC-moves in a trivializing sequence is bounded below by a
superexponential function of the length of the presentation [Bri15; Lis17]. Roughly
speaking, for these presentations, if the sum of word lengths is k, the number of AC-moves required to trivialize the presentation is at least ∆(⌊log_2 k⌋), where ∆ : ℕ → ℕ is defined recursively as ∆(0) = 2 and ∆(j) = 2^{∆(j−1)} for j ≥ 1. In particular, ∆(⌊log_2(13)⌋) = 65536, whereas presentations trivialized by the aforementioned
search algorithms have AC sequences of length less than 1000. While AK(3) is
itself not a member of the family of examples studied by Bridson and Lishak, their
findings challenge the view of it as a counterexample. Their work also underscores the
necessity of employing search methods that are more efficient than breadth-first
search.
In this paper, we will consider a variety of computational tools to better understand the properties of balanced presentations of the trivial group. We will test
the efficacy of our approaches on a subset of presentations from the Miller–Schupp
series of potential counterexamples [MS99]:
MS(n, w) = ⟨x, y | x^{-1} y^n x = y^{n+1}, x = w⟩.
Here, n ≥ 1 is a positive integer and w is a word in x and y with zero exponent sum
on x. For w⋆ = y^{-1}x^{-1}yxy, the presentations MS(n, w⋆) are AC-equivalent to the
presentations from the Akbulut–Kirby series [MMS02]. In particular, the presentation
MS(3, w⋆) = ⟨x, y | x^{-1}y^3x = y^4, x = y^{-1}x^{-1}yxy⟩
of length 15 is AC-equivalent to AK(3).
We will only consider presentations with n ≤ 7 and length(w) ≤ 7. Our selection
criteria aim to strike a balance: we seek a dataset of presentations large enough
to allow for meaningful analysis, yet small enough to ensure all computations are
feasible within a practical timeframe. We reduce x^{-1}w freely and cyclically, and if
two presentations have the same fixed n, but different words w and w′ such that
letters of x^{-1}w can be cyclically permuted to obtain x^{-1}w′, we keep only one of
these presentations. After these simplifications, we find 170 choices of presentations
for each fixed n, resulting in a dataset of 7 × 170 = 1190 presentations for our
analysis.
Our implementation of AC-transformations differed from the AC-transformations
mentioned above in two ways. First, we considered the following set of operations.
(AC′1) Replace some ri by ri rj^{±1} for i ≠ j.
(AC′2) Change some ri to g ri g^{-1} where g is a generator or its inverse.
For two generators, which is the only case we study in this paper, the group
generated by these AC-transformations is isomorphic to the group generated by
the original AC-transformations.[3] The reason for this change is its effect on performance in the greedy search and reinforcement learning algorithms studied in
Section 3 and Section 4. Specifically, the length of a presentation provides a useful
signal when searching through the space of presentations with these algorithms. An
inversion transformation leaves the length invariant, providing no signal to the search
process and slowing down the performance of the algorithm significantly. For the
rest of the paper we will refer to the new transformations (instead of the original
AC-transformations) as “AC-transformations” or “AC-moves”.
Second, in order to make the search space finite in size, we set a maximum length
that each relator is allowed to take. In other words, we mask all AC-moves that
lead to presentations with relators exceeding this maximum length. In the search for a sequence of AC-moves that trivializes a presentation of the Miller–Schupp series
MS(n, w), we set this maximum length to be 2 × max(2n + 3, length(w) + 1) + 2.
This specific choice was made to allow for at least one concatenation move followed
by a conjugation move in the search process.
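To make the setup concrete, here is a minimal Python sketch of a two-generator presentation together with the moves (AC′1)–(AC′2) and the maximum-length mask described above. The encoding (relators as tuples of signed integers) and all function names are illustrative; they are not the data structures of the accompanying repository.

```python
from typing import List, Tuple

# A relator is a word in x, y; we encode x, x^{-1}, y, y^{-1} as 1, -1, 2, -2.
Relator = Tuple[int, ...]
Presentation = Tuple[Relator, Relator]

def free_reduce(word: Tuple[int, ...]) -> Relator:
    """Cancel adjacent inverse pairs (free reduction)."""
    out: List[int] = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def concat_move(pres: Presentation, i: int, sign: int) -> Presentation:
    """(AC'1): replace r_i by r_i * r_j^{sign}, where j is the other relator."""
    j = 1 - i
    rj = pres[j] if sign == 1 else tuple(-g for g in reversed(pres[j]))
    new = free_reduce(pres[i] + rj)
    return (new, pres[1]) if i == 0 else (pres[0], new)

def conjugate_move(pres: Presentation, i: int, g: int) -> Presentation:
    """(AC'2): replace r_i by g r_i g^{-1}, with g a generator or its inverse."""
    new = free_reduce((g,) + pres[i] + (-g,))
    return (new, pres[1]) if i == 0 else (pres[0], new)

def within_cap(pres: Presentation, max_relator_length: int) -> bool:
    """Mask used in the search: reject states whose relators exceed the cap."""
    return all(len(r) <= max_relator_length for r in pres)

# Example (our encoding): AK(3) = <x, y | x^3 y^{-4}, xyx y^{-1}x^{-1}y^{-1}>.
AK3: Presentation = ((1, 1, 1, -2, -2, -2, -2), (1, 2, 1, -2, -1, -2))
print(conjugate_move(AK3, i=1, g=2))  # conjugate the second relator by y
```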
3. Classical Search Algorithms
In this section, we compare the effectiveness of breadth-first and greedy search
algorithms to AC-trivialize presentations in the Miller–Schupp series. We find that
3The difference lies in how the inversion of a relator is handled: we always follow an inversion by a concatenation, while the original AC-moves allow for standalone inversion moves. The original inversion moves may be retrieved from the new generators as follows. For a given presentation ⟨x1, x2 | r1, r2⟩, the sequence of moves r2 → r2r1, r1 → r1r2^{-1}, r2 → r2r1, and r2 → r1r2r1^{-1} results in the presentation ⟨x1, x2 | r2^{-1}, r1⟩, which is the same as r2 → r2^{-1} up to swapping the two relators. We also enhanced the notion of trivial presentation(s) to include all presentations of length 2: {⟨x1, x2 | xi^a, xj^b⟩ | i, j = 1, 2; a, b = ±1; i ≠ j}.
the latter significantly outperforms the former. Additionally, using the greedy search
algorithm, we determine that, in the stable case, AK(3) is AC-trivial.
3.1. Breadth-first search. We first recall the breadth-first search algorithm. An
iterative implementation of this algorithm, adapted to the problem of the Andrews–
Curtis conjecture, is provided in Algorithm 1.
We start with an initial state, which is a balanced presentation we wish to
AC-trivialize, and place it in a queue. At each iteration, a state is removed from
the queue, and its neighbors are added if they have not already been visited. This
process continues until the sought-after state, i.e., a trivial balanced presentation,
is found or a maximum number of states N is visited. In our experiments, we set
N = 10^6.
**Algorithm 1 Breadth-First Search Algorithm**
1: Input: A balanced presentation π, maximum number of states to visit N
2: Output: Boolean for whether an AC trivialization is found
3: Initialize a queue Q and enqueue the starting node π
4: Mark π as visited
5: while Number of visited states is less than N do
6: _u ←_ _Q.dequeue()_ _▷_ Remove the front node of Q
7: **for each neighbor v of u do**
8: **if v is a trivial state then**
9: **return True** _▷_ Return True if v is a trivial state
10: **end if**
11: **if v has not been visited then**
12: Mark v as visited
13: _Q.enqueue(v)_ _▷_ Add v to the queue
14: **end if**
15: **end for**
16: end while
17: return False _▷_ Return False if no trivial state is found
3.2. Greedy search. The greedy search algorithm, described in Algorithm 2, differs
only slightly from the breadth-first search algorithm in implementation. We replace
the queue with a priority queue, which stores the states in order determined by a
tuple of values (k, l), where k is the length of the presentation and l is the path
length between the state and the initial state.
Instead of dequeuing the earliest state, the algorithm dequeues the state with
the smallest value of k. If there is more than one state in the priority queue with
the same value of k, the state with the smallest value of l is chosen.
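A compact sketch of this priority-queue ordering is given below. It assumes presentations are encoded as hashable tuples of relators (as in the earlier sketch) and that a `neighbors` function enumerating single AC-moves is supplied; the names are ours, not the repository's.

```python
import heapq
import itertools

def greedy_search(start, neighbors, is_trivial, max_nodes=10**6):
    """Greedy search in the spirit of Algorithm 2: states are ordered by
    (presentation length k, path length l from the start)."""
    tie = itertools.count()          # tie-breaker so the heap never compares states
    visited = {start}
    queue = [(sum(len(r) for r in start), 0, next(tie), start)]
    while queue and len(visited) < max_nodes:
        _, l, _, u = heapq.heappop(queue)
        for v in neighbors(u):
            if is_trivial(v):
                return True
            if v not in visited:
                visited.add(v)
                k = sum(len(r) for r in v)
                heapq.heappush(queue, (k, l + 1, next(tie), v))
    return False
```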
3.3. Comparison. We find that the greedy search algorithm outperforms the breadth-first search algorithm in the task of AC-trivializing Miller–Schupp presentations (Figure 1). Out of the 1190 presentations in the Miller–Schupp series with n ≤ 7 and
length(w) ≤ 7, greedy search solved 533 while BFS solved only 278 presentations.
Each algorithm was constrained to visit a maximum of 1 million nodes. The
percentage of presentations solved by these algorithms decreases monotonically as
**Algorithm 2 Greedy Search Algorithm**
1: Input: A balanced presentation π of length k, maximum number of states to
visit N
2: Output: Boolean for whether an AC trivialization is found
3: Initialize a priority queue Q ordered by (k, l) and enqueue the starting node π.
_l is the length of the path connecting π to the current node._
4: Mark π as visited
5: while Number of visited states is less than N do
6: _u ←_ _Q.dequeue()_ _▷_ Remove the front node of Q
7: **for each neighbor v of u do**
8: **if v is a trivial state then**
9: **return True** _▷_ Return True if v is a trivial state
10: **end if**
11: **if v has not been visited then**
12: Mark v as visited
13: _Q.enqueue(v)_ _▷_ Add v to the queue
14: **end if**
15: **end for**
16: end while
17: return False _▷_ Return False if no trivial state is found
a function of n. Remarkably, the greedy search was able to solve all presentations
with n = 1 or length less than 14. There are, however, six presentations of length
14 that greedy search could not solve. We verified that four of these,
⟨x, y | x^{-1}y^2x = y^3, x = x^{-2}y^{-1}x^2y^{±1}⟩
⟨x, y | x^{-1}y^3x = y^4, x = y^{±1}x^2y^{±1}⟩
are AC-equivalent to AK(3), while the other two
⟨x, y | x^{-1}y^2x = y^3, x = yx^2y^{±1}x^{-2}⟩
could be related neither to AK(3) nor to the trivial presentation with any sequence
of moves that allowed the length of each relator to increase up to 20.
For presentations solved by greedy search, we plot the maximum amount by
which the length of a presentation increased in an AC trivialization path in Figure 2.
In most cases, there was no increase in length, and the maximum increase was
only 5. At first glance, this seemed surprising to us, given that we allowed the
relator lengths to increase by a much larger amount in our search process.[4] However,
the hard cutoff set by visiting a maximum of only 1 million nodes ensures that any
presentation that needs to be mapped to a much longer presentation before it is
trivialized would remain unsolved by the greedy search algorithm. This limitation
could be cured either by increasing the number of maximum nodes (at the cost of
higher memory use) or by using a different criterion to order nodes in the priority
4The length of each relator was allowed to increase up to 2 × max(2n + 3, length(w) + 1) + 2,
which is twice the maximum of the initial lengths of the two relators in a presentation, plus an
additional 2. The maximum possible increase in presentation length is twice this number minus
the original length. For n ≤ 7 and length(w) ≤ 7, this value lies in the range [17, 53].
[Figure 2a: increase in presentation length versus the initial length of the presentation. Figure 2b: increase in presentation length versus n.]
Figure 2. The maximum increase in the length of a presentation
relative to its initial length along the AC trivialization path. The
increase is plotted as a function of the initial length of the presentation on the left and as a function of n on the right.
queue. It will be useful to explore the latter approach, perhaps by searching for such a criterion itself using deep learning algorithms.
We also plot the lengths of AC sequences discovered by greedy search as functions
of n and the maximum increase in the presentation length (Figure 3). Unsurprisingly, path lengths increase proportionally with the increase in the length of the
presentation (Figure 3a). The following presentation with n = 5 had the longest
AC trivialization path,
⟨x, y | x^{-1}y^5x = y^6, x = yx^2y^{-1}⟩
requiring a sequence of 344 AC-moves. Note that greedy search does not necessarily
find the shortest paths of trivialization. We will see in Subsection 4.3 that a
Reinforcement Learning algorithm finds shorter trivializing sequences for many
examples of the Miller–Schupp series. This again hints at the potential utility of
exploring more efficient criteria for ordering nodes in the priority queue.
In the remainder of this paper, we will refer to the presentations from the Miller–
Schupp series that were solved and unsolved by the greedy search as “GS-solved” and
“GS-unsolved” presentations, respectively. In other words, many of our experiments
will be tested on two datasets that consist of Miller–Schupp presentations with n ≤ 7 and length(w) ≤ 7: the GS-solved dataset has 533 presentations, whereas the GS-unsolved dataset has 657 presentations. The combined dataset that contains all
presentations with n ≤ 7 and length(w) ≤ 7 has size 1190.
3.4. Limitations. While the greedy search algorithm performs better than the
breadth-first search, it has some of the same limitations. Namely, it is memory inefficient, and we cannot leverage the parallelizability of modern hardware architectures.
It also does not learn a general algorithm that would find an AC trivialization for
any given balanced presentation.
[Figure 3a: distribution of greedy search path lengths versus the maximum increase in presentation length. Figure 3b: distribution of greedy search path lengths versus n.]
Figure 3. Distribution of lengths of AC-trivialization paths
learned by greedy search as a function of maximum increase in
presentation length (left) and n (right).
Reinforcement learning algorithms, particularly policy gradient algorithms, present
a promising alternative that avoids these downsides. These algorithms are memory
efficient and can be trained in a highly distributed manner, which we will focus on
in the next section.
3.5. Proof of Theorem 1. As mentioned in Section 1, one nice byproduct of our
analysis is that the shortest mysterious AC presentation, namely AK(3), is stably
AC-trivial. The goal of this part is to present a proof of this statement.
First, in order to make this part of the paper self-contained, let us remind the
reader that the term “stable” (a.k.a. “weak”) refers to one of many variants of the
Andrews–Curtis conjecture, see e.g. [MMS02; MSZ16; Bag21], where in addition to
the usual AC-moves one is allowed to use two more transformations:
(AC4) Include a new generator and a trivial relator, i.e. replace ⟨x1, . . ., xn | r1, . . ., rn⟩ by ⟨x1, . . ., xn, xn+1 | r1, . . ., rn, xn+1⟩.
(AC5) Remove a trivial relator and the corresponding generator, i.e. the inverse of (AC4).
**Definition 2. If two balanced presentations of the trivial group are related by a**
sequence of AC-transformations (AC1) through (AC5), we say that they are stably
_AC-equivalent._
The stable Andrews–Curtis conjecture states that any balanced presentation is
stably AC-equivalent to the trivial presentation. To the best of our knowledge, prior
to this work, the shortest potential counterexample to the standard Andrews–Curtis
conjecture, AK(3), was also a potential counterexample to the stable Andrews–
Curtis conjecture. Our proof that AK(3) is stably AC-trivial builds on the following
result.
**Theorem (Myasnikov, Myasnikov, and Shpilrain, [MMS02]).** _Using the notation [a, b] = aba^{-1}b^{-1} and [a, b, c] = [[a, b], c], any presentation of the following form is a presentation of the trivial group:_
⟨x, y, z | x = z · [y^{-1}, x^{-1}, z], y = x · [y^{-1}, x^{-1}, z^{-1}] · [z^{-1}, x], w⟩,
_where w is a word in x, y, and z whose exponent sum on x, y, and z equals ±1._
_Moreover, all such presentations are stably AC-trivial._
These presentations are obtained by applying Reidemeister moves to the knot
diagram of the unknot and using the fact that Reidemeister moves applied to a knot
diagram give stably AC-equivalent Wirtinger presentations of the knot group, cf.
[Wad94].
For w = x^{-1}yz, one of the relators eliminates the generator z, resulting in the
following length 25 presentation with two generators:
⟨x, y | x^{-1}y^{-1}xy^{-1}x^{-1}yxy^{-2}xyx^{-1}y, y^{-1}x^{-1}y^2x^{-1}y^{-1}xyxy^{-2}x⟩.
We discovered a sequence of AC-transformations (AC1)–(AC5) that relates this
presentation to AK(3). This also makes AK(3) the shortest stably AC-trivial
presentation that is not yet known to be AC-trivial. It is plausible that by varying
_w one can show that other presentations of the Akbulut–Kirby series (or the Miller–_
Schupp series) are also stably AC-trivial. We leave this question for future work.
Specifically, using search algorithms described earlier in this section we placed a
cutoff of a maximum of 1 million nodes to visit for each of our search algorithms
and allowed the length of each relator to increase up to 15. The greedy search
found a path connecting this presentation to AK(3), while breadth-first search
could only reduce the presentation’s length to 14. We repeated the search process
with breadth-first search with a cutoff of 5 million nodes. It failed to reduce the
presentation length any further.
The sequence of moves that connects the length-25 presentation to AK(3) can be
conveniently expressed in terms of the following 12 transformations:
h1 = r2 → r2r1,        h5 = r2 → x^{-1}r2x,   h9 = r2 → xr2x^{-1},
h2 = r1 → r1r2^{-1},   h6 = r1 → y^{-1}r1y,   h10 = r1 → yr1y^{-1},
h3 = r2 → r2r1^{-1},   h7 = r2 → y^{-1}r2y,   h11 = r2 → yr2y^{-1},
h4 = r1 → r1r2,        h8 = r1 → xr1x^{-1},   h12 = r1 → x^{-1}r1x,
among which a careful reader can recognize moves (AC′1) and (AC′2) introduced in
Section 2. Expressed in terms of the moves h_i, the desired sequence has length 53
and looks as follows:
h9 · h7 · h4 · h8 · h11 · h5 · h11 · h9 · h3 · h10 · h12 · h7 · h7 · h9 · h11 · h5 · h3 · h5 ·
h4 · h3 · h12 · h5 · h7 · h7 · h1 · h9 · h11 · h8 · h3 · h5 · h10 · h2 · h6 · h12 · h9 · h7 ·
h5 · h11 · h10 · h3 · h8 · h11 · h9 · h2 · h10 · h12 · h5 · h7 · h9 · h11 · h1 · h9 · h8.
This sequence should be read from left to right; first apply h9, then h7, and so
forth. This follows the standard convention of many programming languages, which
iterate over lists from left to right by default. The length of the presentation did
not exceed 25 during its path to AK(3). We do not know if this is the shortest path
between the two presentations.
4. Reinforcement Learning
This section is organized as follows: in Subsection 4.1, we discuss how the problem
underlying the Andrews–Curtis conjecture can be formulated as a Markov Decision
Process. In Subsection 4.2, we discuss details of a specific reinforcement learning
algorithm, called the Proximal Policy Optimization algorithm, which we used to
find AC trivializations of balanced presentations. Finally, in Subsection 4.3, we
discuss the results of our work, comparing the performance of PPO with that of the
classical search algorithms studied in the previous section.
4.1. Markov Decision Process. A Markov Decision Process (MDP) is defined as a
5-tuple (S, A, R, P, ρ). Here, S represents the space of states, while A denotes the set
of actions, where each action a ∈ A is a function mapping from one state to another,
i.e., a : S → S. The function R : S × A × S → ℝ is the reward function, which
assigns a real-valued reward based on the transition from one state to another via a
specific action. The transition probability function, denoted by P : S × A → P(S),
provides the probability distribution over the possible next states given a current
state and action. Lastly, ρ represents the initial probability distribution of states,
describing the likelihood of the system starting in each state.
The schematic picture of how these objects interact with each other is as follows.
We start with a state s0 sampled from the distribution ρ and take an action a0. This
results in a state s1 with probability P (s1 | s0, a0). The transition gets a “reward”
_r0 = R(s0, a0, s1) which quantifies the effectiveness of the action in contributing_
toward achieving an ultimate goal. From state s1, we repeat this process, obtaining
a trajectory of states and actions
τ = (s0, a0, s1, a1, . . .).
The goal of this process is to maximize the cumulative return,
R(τ) = Σ_{t=0}^{T} γ^t R(st, at, st+1).
Here, T is the length of the trajectory, known as the “horizon length” and γ ∈ (0, 1)
is the “discount factor” that assigns smaller weights to the reward values obtained
farther in the future.
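For a single trajectory the return is easy to compute directly; the following snippet is a sketch with an illustrative value of γ.

```python
def cumulative_return(rewards, gamma=0.999):
    # R(tau) = sum_t gamma^t * R(s_t, a_t, s_{t+1}) for one trajectory.
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(cumulative_return([0.0, -1.0, 10.0]))  # a toy three-step trajectory
```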
For a given problem at hand, we may not a priori know the actions {at} and
states {st+1} that maximize the return. Deep reinforcement learning presents a
solution to this problem: we train a neural network that learns a map from states
to actions with the objective of maximizing the cumulative return. More precisely,
we learn a map called the “policy” function π : S → P(A) that assigns to each state
a probability distribution over actions. At time step t, an action at ∼ π(· | st)
is sampled that gives the next state st+1.[5] In the next subsection we discuss the
specific algorithm and the objective function that we used in our study.
4.2. Proximal Policy Optimization. The goal of our optimization process is to
find a policy that maximizes the cumulative return. The most naive way to achieve
this goal is through an algorithm known as “vanilla policy gradient.” We perform
gradient updates guided by the expected return J(πθ) ≡ E_{τ∼πθ}[R(τ)], where the
5In general, we need to specify the probability transition function P (· | st, at) from which the
next state would be sampled. In this paper, we take this probability distribution to be a delta
function centered at a single state, which we write as at(st).
-----
14 A. SHEHPER ET AL.
expectation is over a set of trajectories consisting of states and actions sampled
according to our current policy,
θk+1 = θk + ∇θJ(πθ).
It turns out that this update depends on the gradient of the logarithm of the
policy function itself and an “advantage function” A^π(s, a) which quantifies the
relative benefit of taking action a in state s under policy π [Ach18].[6] Explicitly,
∇θJ(πθ) = E_{τ∼πθ} [ Σ_{t=0}^{T} ∇θ log πθ(at | st) A^{πθ}(st, at) ].
Thus, the vanilla policy gradient algorithm amounts to optimizing the objective function
L^{PG} = E_{τ∼πθ} [ Σ_{t=0}^{T} log πθ(at | st) A^{πθ}(st, at) ].
Although it has the advantage of being simple, this algorithm has the drawback of
making excessively large updates, which can cause the updated policy to deviate
significantly from the current policy.
Proximal Policy Optimization (PPO) algorithms seek to make these updates
more robust by limiting the extent to which the policy π can change in a single
update [Sch+17]. PPO implements this through an objective function that includes
a clipped probability ratio between the new policy πθ and the old policy πold, thus
constraining the updates within a predefined range.
L^{CLIP}(θ) = Et [ min( rt(θ)At, clip(rt(θ), 1 − ϵ, 1 + ϵ)At ) ].
Here rt(θ) = πθ(at | st) / πold(at | st) represents the probability ratio, and ϵ is a small positive
constant (commonly set around 0.1 or 0.2) that controls the clipping value. Taking the minimum
of the raw and the clipped terms ensures that excessively large policy updates are curtailed,
making the optimization process more stable.
The advantage function is estimated during the optimization process using the
Generalized Advantage Estimation method of [Sch+18]. This method requires an
estimation of the on-policy value function (c.f. footnote 6). In PPO, the actor-critic
framework is used for this purpose. We train two neural networks in parallel, where
the “actor” learns the policy πθ and the “critic” learns the value function.[7] The
value loss L[V] is the mean squared error between the values for a state as estimated
by the critic network before and after a gradient update. Lastly, PPO adds entropy
_S of the action-space distribution to the full loss function to prevent premature_
convergence to a suboptimal policy. The full loss function is as follows,
_L = Et_ _L[CLIP]_ (θ) − _c1L[V]_ (ϕ) + c2S(πθ)(st)
where c1 and c2 are tunable hyperparameters.
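For concreteness, a minimal PyTorch-style sketch of this combined objective is given below, written as a loss to be minimized (i.e. with the overall sign flipped). The coefficient values are illustrative, not the ones used in our runs, and the tensors are assumed to be flattened over a mini-batch of (st, at) pairs.

```python
import torch

def ppo_loss(logp_new, logp_old, advantages, values, value_targets, entropy,
             clip_eps=0.2, c1=0.5, c2=0.01):
    """Clipped PPO objective in the form described above, negated for minimization."""
    ratio = torch.exp(logp_new - logp_old)                       # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_term = torch.min(unclipped, clipped).mean()           # L^CLIP
    value_term = ((values - value_targets) ** 2).mean()          # L^V
    entropy_term = entropy.mean()                                # S
    return -(policy_term - c1 * value_term + c2 * entropy_term)
```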
6Mathematically, the advantage function is the difference between the “on-policy action-value function” Q^π(s, a) = E_{τ∼πθ}[R(τ) | s0 = s, a0 = a] and the “on-policy value function” V^π(s) = E_{τ∼πθ}[R(τ) | s0 = s]. These functions give an estimate of the cumulative return given that the current state-action pair is (s, a) in the case of Q^π and the current state is s in the case of V^π.
7Sometimes, these neural networks are made to share their parameters, which helps in stability
during training. We experimented with shared as well as unshared parameters and observed better
performance by keeping the parameters independent.
In policy gradient algorithms such as PPO, we alternate between collecting
data through the current policy in the “rollout phase” and updating the policy
weights through gradient descent in the “optimization phase”. There are various
hyperparameters associated to these phases, which we briefly describe now. The
rollout phase involves generating multiple trajectories of the underlying Markov
Decision Process in parallel using the current policy function. The hyperparameters
of this phase include the number of parallel actors N, the rollout length T′, i.e.
the number of steps each actor takes, the discount factor γ (c.f. Subsection 4.1),
and the bootstrapping hyperparameter λ_GAE of Generalized Advantage Estimation.
The dataset of N × T′ examples collected in this phase is then used to update the
policy. In each epoch, the dataset is shuffled and split into mini-batches before
performing gradient descent. In addition to the optimizer hyperparameters (such
as learning rate schedule), the number of epochs and the size of mini-batches are
hyperparameters of the optimization process. Further details of hyperparameters
are summarized in Appendix A.
4.3. Application to the Andrews–Curtis Conjecture. The set of all balanced
presentations of the trivial group with a fixed number of generators and the set of
AC-transformations play the roles of sets S and A, respectively. (Recall notations
introduced in Subsection 4.1). Once we choose a reward function R and an initial
state distribution ρ, we may use the Proximal Policy Optimization algorithm to
learn the policy function π. We tested a few different reward functions in our
experiments, observing that the following candidate led to the best performance
and stability in training.
R(st, at, st+1) = −min(10, length(st+1)) if length(st+1) > 2, and 1000 otherwise.
Here, length(st+1) is the length of the presentation at timestep t + 1. The reward
function assigns −min(10, length(st+1)) to a non-terminal state and 1000 to a
terminal state. We found that clipping the reward to −10 led to less variance in
gradients of the loss function with respect to weights.
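In code, this reward is a direct transcription of the formula above (a presentation of length at most 2 is treated as terminal, matching the enhanced notion of trivial presentations from Section 2); the function name is ours.

```python
def reward(next_length: int) -> float:
    # -min(10, length) for non-terminal states; 1000 once a length <= 2 state is reached.
    if next_length > 2:
        return -float(min(10, next_length))
    return 1000.0
```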
We define the initial state distribution as a distribution over the presentations of
the Miller–Schupp series with n ≤ 7 and length(w) ≤ 7. Initially, each presentation
was selected exactly once in ascending order by n and length(w). Following this
initial sequence, we maintained an ongoing record of presentations that were either
solved or unsolved at any given time. During each rollout phase, a presentation was
randomly chosen from the set of solved or unsolved presentations with probabilities
of 1/4 and 3/4, respectively. This method was designed to allow the policy network to
refine its strategies on presentations that were already solved, potentially discovering
shorter sequences of AC-moves, while also tackling presentations that remained
unsolved.
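A sketch of this sampling scheme for the rollout phase is shown below; the function and variable names are illustrative, not those of the training code.

```python
import random

def sample_initial_presentation(solved, unsolved):
    """Pick a rollout's starting presentation: solved with probability 1/4,
    unsolved with probability 3/4, falling back to whichever set is non-empty."""
    if solved and (not unsolved or random.random() < 0.25):
        return random.choice(list(solved))
    return random.choice(list(unsolved))
```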
We also need to choose a horizon length T over which the cumulative return
is calculated (c.f. Subsection 4.1). In reinforcement learning, training on tasks
with long horizons is usually harder than on those with short horizons. This is
because in long horizon tasks it is often more difficult to figure out which actions are
responsible for the eventual outcomes, a problem known as the credit assignment
problem. There is also more uncertainty in the returns (high variance), which
-----
16 A. SHEHPER ET AL.
can slow down convergence and destabilize the learning process. The size of the
exploration space also becomes a critical issue. With longer horizons, the agent
needs to explore a more diverse set of actions to learn about long-term consequences
without getting stuck in suboptimal regions. However, it does not mean that we
cannot train well-performing agents for long-horizon tasks; rather, it indicates that
with longer horizons we may need significantly stronger computational power and
extended training periods.[8]
The horizon length T is an important hyperparameter in the context of our
problem as any presentation that requires a sequence of AC-moves of length greater
than the horizon length will necessarily remain unsolved by PPO. On the other hand,
choosing a horizon length that is too long can significantly slow down the training
process. Due to the limited amount of hardware available, we mostly experimented
with considerably smaller values for T, i.e. T = 200 and T = 400. With these
values, PPO could solve, respectively, 431 and 402 presentations out of the 1190
Miller–Schupp presentations of the initial state distribution.[9] In each case, these
presentations formed a subset of the presentations solved by the greedy search.
In the rest of this section, we will describe some observations made with the
value T = 200.[10] In Figure 1 above, the performance of this experiment is compared
with the results of the greedy search and the breadth-first search. While PPO
consistently outperformed BFS for all values of n and for all lengths of presentations,
it consistently underperformed compared to the greedy search.[11] In Figure 4a, we
plot the distribution of path lengths discovered by greedy search in the two cases
of presentations that could / could not be solved by PPO. It is clear that the
presentations PPO solved had, in general, smaller path lengths. In particular, all of
these had greedy search path lengths less than 200.
In Figure 4b, we give a scatterplot of the path lengths discovered by PPO and
greedy search for all of the presentations solved by PPO. We note that in many
cases, PPO found shorter paths compared to the greedy search. This is expected as
PPO learns to improve its strategy during the training, discovering shorter paths
for presentations it may have already solved. The scatterplot shows the shortest
paths discovered by PPO for each presentation. We also note that in many cases,
PPO found longer paths than greedy search. This shows that our specific run
exhibits a suboptimal policy. It could perhaps be improved by performing more
hyperparameter tuning on the training process.
8Another option for improving performance in long-horizon tasks is to provide good intermediate
rewards; unfortunately, this is rather hard in the present context of the AC conjecture.
9We also explored the value T = 2000. However, we found it much slower to train due to the
reasons described above. We could only solve 219 presentations of the initial state distribution
in this case. This training run had not converged, and we expect that with more computational
power and extended training periods, it will be worthwhile to experiment with larger values of T,
perhaps helping us solve even more presentations than greedy search.
10The complete list of hyperparameters for this experiment is summarized in Appendix A.
11The results in this section were obtained using relatively small neural networks for both the
actor and the critic networks, each consisting of two layers with 512 hidden units. It is natural to
assume that increasing the size of these networks could enable a PPO agent to outperform the
greedy search. However, such experiments were not performed due to computational limitations.
Future work could explore this direction to assess the impact of larger network architectures.
[Figure 4a: comparison of greedy search path lengths for presentations solved / unsolved by PPO. Figure 4b: scatter plot between path lengths discovered by greedy search and by PPO.]
Figure 4. A comparison of path lengths discovered by the greedy
search and a PPO agent. The left panel shows the distribution of
lengths of AC trivialization paths discovered by the greedy search
in the cases solved / unsolved by PPO. The right panel shows the
scatter plot of path lengths discovered by PPO vs path lengths
discovered by the greedy search.
5. The Cure: New Algorithms
In previous sections we explained from a variety of different perspectives that the
Andrews–Curtis conjecture is a good example of a mathematical problem where the
length of a solution can be much greater than the length of the initial presentation,
in some cases with purely analytical lower bounds that are hyperexponential in
the size of the input. In particular, we saw that small increases in presentation
length under 20 quickly lead to solution lengths in the range of hundreds and higher,
exceeding the number of moves in the longest game of chess.
If solving a mathematical problem required finding a path of length L, say with
L = 10^6, an RL agent would be pretty much out of luck under circumstances of a
typical hard search problem, where the number of successful paths is exponentially
suppressed by L. The good news is that in mathematics —and in many other
domains— such hard search problems never come in isolation. Rather, there is a
distribution of problems such that generic instances are “easy” and a small fraction
is “hard.” Learning this distribution for smaller values of L contains the crucial
information for solving new cases at the next increment of L.
5.1. Supermoves. In automated reasoning or search problems where the minimal
length solution has a theoretical lower bound that by far exceeds computational
capabilities, it is clear that a direct approach with fixed-size steps is not going to
succeed, unless the problem is easy and a large fraction of long paths meets the
desired criteria. In order to reach extraordinary path lengths, one must allow
progressively longer sequences of elementary moves to be added to the action space.
Although this general strategy seems unavoidable in problems like the AC conjecture,
it leads to many practical questions. For example, what should be the selection
-----
18 A. SHEHPER ET AL.
criteria for such “supermoves”? And, how often should they be added to the action
space?
In the context of the AC conjecture, good examples of such supermoves are the
“elementary M-transformations” [BM93; Bur+99]. These transformations trivialize
AK(2) in just two steps, even though this presentation is known to admit the shortest
AC trivialization path of length 14. A downside of elementary M-transformations,
though, is that they are infinite in number, which complicates their application in
classical search techniques.
In our study, we explored the idea of identifying AC supermoves by selecting some
frequently occurring subsequences of AC-moves in the paths discovered by Proximal
Policy Optimization (PPO). By extending the action space A of the Markov Decision
Process (MDP) with these subsequences and checking whether this enhanced action
space helps our agent discover shorter paths of trivialization, we learned a few useful
lessons:
_• First, it helps to augment the action space with subsequences of different_
kind that include frequently occurring compositions of elementary moves as
well as very rare ones.
_• Also, in the early stage it helps to introduce several supermoves at once._
_• And, at later stages it helps to allow removing actions from the action space,_
not only adding them.
Not following these empirical rules, e.g. introducing too few supermoves initially
or too many over the entire length of the training process, leads to considerable
reduction in performance of the RL agent. Even in the most optimal regimes that
we were able to find, the improvement of the performance due to supermoves was
rather modest, leading us to explore other alternatives.
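One simple way to mine such candidate supermoves, sketched below, is to count contiguous subsequences (n-grams) of moves across the trivialization paths found by the agent; the length cutoffs and the function name are illustrative.

```python
from collections import Counter

def candidate_supermoves(paths, min_len=2, max_len=6, top_k=10):
    """Most common contiguous subsequences of moves across solved paths;
    each path is a list of move indices (e.g. 1..12 for the moves of Section 3)."""
    counts = Counter()
    for path in paths:
        for n in range(min_len, max_len + 1):
            for i in range(len(path) - n + 1):
                counts[tuple(path[i:i + n])] += 1
    return counts.most_common(top_k)
```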
5.2. The anatomy of success. While supermoves clearly need to be a part of the
solution in hard problems like the AC conjecture, much of the success depends on
the criteria for selecting them. Here, we advocate for a dynamic approach where the
network itself learns the criteria for selecting supermoves, in addition to the best
ways to implement them. One realization of this approach could be a multi-agent
model, where one network is learning to play the game and the other is learning the
rules for changing the action space (adding and removing supermoves). We hope
that future iterations of this strategy can lead to AI systems that can ‘learn how to
learn’ dynamically by making both algorithmic and architectural changes through
collecting the information about hard instances.[12]
Specifically, suppose N is one of the characteristics of either the algorithm or the
architecture that has a non-trivial impact on performance. In practice, there can be
several such parameters, but for simplicity we explain the idea as if there is only
12Here, by self-improving AI systems we mean algorithms that have the ability to “interpolate”
between off-the-shelf algorithms such as A2C and TRPO, as well as a myriad of custom algorithms
that do not even have a name. Clearly, this level of technology is not presently available, and one
of the key points of this section is that developing such systems should be based on the hardest
instances the agent encounters.
[Figure 5: number of AC presentations solved by PPO versus the number of environment interactions (curves: GS-solved, All).]
Figure 5. Number of AC presentations solved by our PPO agent
as a function of the number of training steps. Here, GS-solved
refers to a subset of the Miller–Schupp dataset of Section 3 that
was solved by the greedy search algorithm.
one.[13] Then, from the practical standpoint, a natural notion of hardness is such
that hard instances are defined to be those which the model can solve at the current
setting of N but not with a lower value of the resource N. In addition, in search
problems we include the length of the path in the notion of hardness, i.e. select a
subset of the instances that the model could solve through especially long paths.
Note that, by the very nature of the search problem we are interested in, there cannot be
too many such hard instances at each step of increasing N, for otherwise the problem
would be easy, not hard. Collecting the information about the hardest instances
at each increment in N can be used to select supermoves, e.g. as subsequences of
the sequences of moves that solve the hard instances. Algorithm 3 provides one
particular realization of this idea.
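A minimal sketch of this bookkeeping, with `train_and_solve` standing in for a full training-and-evaluation run at resource level N, might look as follows; it is an illustration of the idea, not the exact procedure of Algorithm 3.

```python
def collect_hard_instances(settings, validation_set, train_and_solve):
    """For each increment of the resource N, keep the instances first solved at N,
    ranked by the length of the discovered path (the longest being the 'hardest')."""
    solved_before = set()
    hard = {}
    for N in settings:                                    # e.g. increasing horizons
        solutions = train_and_solve(N, validation_set)    # {instance: path of moves}
        new = {v: p for v, p in solutions.items() if v not in solved_before}
        hard[N] = sorted(new.items(), key=lambda vp: len(vp[1]), reverse=True)
        solved_before |= set(solutions)
    return hard
```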
In the context of the AC conjecture, examples of the metric N can be the horizon
length or the number of interactions with the environment. As Figure 5 illustrates,
increasing the number of environment interactions leads to a larger number of nontrivial presentations from the Miller–Schupp series being solved (i.e. AC-trivialized)
by our RL agent. Moreover, the length of the AC trivialization path also grows for
some of the solutions (but not all). Therefore, in order to implement the program
13Analyzing a multi-dimensional landscape is generally a good idea, as it can yield better
performance improvements, but the price to pay is that changes of different characteristics become
correlated, as illustrated in Figure 6.
[Figure 6: number of environment interactions versus horizon length at fixed performance levels (16, 32, 64, 96, 128, 160, 192, 224, and 256 presentations solved).]
Figure 6. To maintain consistent performance, increasing the
horizon length requires a roughly linear increase in the number of
training steps (environment interactions).
outlined above in the specific context of the AC conjecture, we can focus on the
longest AC trivialization paths that the model is able to find at each value of N .
(a) all (b) GS-solved
Figure 7. Path length distributions for AC presentations solved
by the RL agent at N = 8 × 10^7 from all and GS-solved datasets
are nearly identical. In both cases, hard instances are shown in red.
Collecting this information in the process of training can be used to introduce
(and remove) supermoves to the action space in a dynamic fashion. There can
be many different implementations of this idea that we plan to explore more fully
elsewhere. For example, one can consider selecting all or a subset of the longest AC
**Algorithm 3 Adaptive AI Model Training and Path Discovery**
1: Input:
   Family of AI models π(N) with common state space S and action space A0
   Initial setting N0 and ordered range {N1, N2, . . ., Nmax}
   Number of epochs for training
   Validation set V ⊂ S
   Distinguished state s0 ∈ S
   Positive integer n
2: Output:
   For each setting Ni: set of pairs {v, P} where v ∈ V and P connects v to s0
3: Initialize A(N1) ← A0
4: for each Ni in {N1, N2, . . ., Nmax} do
5:   Train model π(Ni) on S for the given number of epochs
6:   Evaluate π(Ni) on V to discover paths connecting V to s0 using A(Ni)
7:   V(Ni) ← {v ∈ V | v can be connected to s0 using A(Ni), but not by any π(Nj) with j < i}
8:   W(Ni) ← {v ∈ V(Ni) | the longest path connecting v to s0 using A0}
9:   **if** i ≥ n **then**
10:    Compare W(Ni−n+1) to W(Ni)
11:    Adjust A(Ni+1) based on the comparison
12:  **else**
13:    A(Ni+1) ← A(Ni)
14:  **end if**
15: end for
trivialization paths that the model finds at each N . Out of those, in each case,
one can consider selecting the entire trivialization path or a subset, randomly or
non-randomly. Alternatively, one can compare the longest trivialization paths at
several (consecutive) values of N and choose subsequences of moves that are shared
by several long trivialization paths at different N .
For example, if N denotes the number of interactions with the environment, we
did a few preliminary experiments with the dataset of Section 4 and several different
seed values. To illustrate the effects of stochasticity, let us consider N = 8 × 10^7.
The agents with five different seed values were able to solve 354, 337, 330, 328,
and 323 presentations, respectively. And their average, 334.4, is shown in Figure 5.
Many of these AC presentations can be solved at the earlier stage, with N = 4 × 10^7
or less. If in the definition of hard instances we require that they are solved by all
five agents, there are only 5 presentations total. On the other hand, if we require
that they are solved by any of the 5 agents, the number goes up to 36.
Moreover, not surprisingly, the 5 presentations solved by all 5 agents have
considerably shorter path lengths (47, 31, 22, 14, and 13) compared to path lengths
of the 36 presentations illustrated on the left panel of Figure 7 that go up to 200.
Both the 5 presentations solved by all agents and the 36 presentations solved by at least
one of the agents provide viable options for defining hard instances and, in turn,
selecting supermoves. However, they lead to qualitatively different results. For
Figure 8. Types of AC-moves that appear in trivialization paths
of 5 presentations solved by all 5 agents at N = 8 × 10^7. The
move #2 occurs disproportionately more frequently. There are 12
different types of basic AC-moves described in Section 3.
example, all 5 presentations solved by all 5 agents are solved at a smaller value of
_N when required to be solved by only one of the agents. More importantly, they_
have very different anatomy, illustrated in Figure 8 and in Figure 9, respectively. By
examining the longest trivialization paths of the 36 presentations solved by at least
one agent at N = 8 × 10^7, we often see long strings of moves #5 and #11, interlaced
with moves #3, #7, and #9. These are our top candidates for the supermoves to
be added at N = 8 × 10^7. Note that moves #4 and #8 are least common in the
examples presented in both Figure 8 and Figure 9.
Figure 9. Types of AC-moves in the 5 longest trivialization paths
(of length 200, 190, 189, 184, and 179) found by at least one agent
at N = 8 × 10^7. The most frequent moves are #5, #7, #9, and
#11. There are 12 different types of basic AC-moves described in
Section 3.
As in other parts of this paper, we performed the analysis on two datasets of
sizes 1190 and 533 that, respectively, contain all members of the Miller–Schupp
family with n ≤ 7 and length(w) ≤ 7 and only those solved by the greedy search. The
results are qualitatively similar, as we already saw in Figure 7 that illustrates length
distributions of the successful AC paths in the two cases. Similarly, a closer look
at the anatomy of the successful paths —successful for the RL agent— reveals no
qualitative differences between the two datasets and, importantly, consistency of
our notion of hardness based on the path length. Perhaps the largest level of stochasticity that one may expect can be illustrated by the example of the presentation
⟨x, y | x^{-1}y^2xy^{-3} = 1, x^{-2}y^{-1}xy^{-4} = 1⟩
that an RL agent was able to solve at N = 4 × 10^7 with 62 moves in one case and at N = 8 × 10^7 with 200 moves in the other case. Despite considerable variance, in both
cases successful AC paths are dominated by the move #11, interlaced with moves
#3, #5, #7, and #9 according to patterns described above (cf. Figure 10). This
can serve as evidence for robustness of the supermove selection process proposed
here, and provides further support to Algorithm 3 and its variations.
Figure 10. An example illustrating anatomy of successful AC
paths found by an RL agent in different runs working on different
datasets, both of which contain the same hard instance. Even
though the total number of moves can vary, e.g. in this example we
had to rescale one of the distributions by a factor of 3, curiously,
the proportion of moves and combinations used are quite similar.
In both cases, RL agents used the same opening move.
In the following sections we explore the anatomy of successful AC paths further
using the tools of unsupervised learning and topological data analysis. These tools
offer a complementary perspective on hard-to-find AC paths and ways to find them.
6. Isolated components and neighborhood sizes
6.1. Isolated components. In Subsection 3.2, we explored a greedy approach to
finding AC-trivializations of a presentation π. Specifically, the goal was to construct
a sequence of presentations (π0, . . ., π), where π0 is the trivial presentation, such
that each consecutive pair in the sequence is related by an AC-move. Furthermore,
at each step k, the presentation πk was chosen to have the shortest possible length
among all presentations connected to πk+1 via an AC-move. In general, the length
of a presentation in an AC-trivialization tends to exceed the length of the original
presentation being trivialized. The minimum increase in length across all possible
AC-trivializations serves as an invariant of the presentation. We will explore this
invariant using concepts from persistent homology.
6.1.1. Formalization. A based graph is a pair (Γ, v0) consisting of a graph Γ and a
preferred vertex v0 in it. We will often drop v0 from the notation. A based subgraph
Γn of Γ, written Γn ≤ Γ, is a subgraph Γn of Γ with the same preferred vertex. We
say that Γn is full in Γ if for any two vertices in Γn joined by an edge in Γ, the edge
is also in Γn. A filtration of a based graph Γ is a collection
Γ0 ≤ Γ1 ≤ Γ2 ≤ · · ·
of based subgraphs of Γ for which each vertex and edge of Γ is in Γn for some n. We
refer to Γ as a filtered based graph. If each Γn is full in Γ we refer to the filtration as
full and notice that full filtrations are equivalent to ℕ-valued functions from the set
of vertices of Γ sending v0 to 0.
Let Γ^{AC(k)} be the graph whose vertices are k-balanced presentations, based at
the trivial presentation, having an edge between two vertices if there is an AC-move
between them. Additionally, Γ^{AC(k)} is equipped with a full filtration obtained from
the function sending a vertex to the length of its presentation minus k.
Given a filtered based graph (Γ, v0), the filtration value Filt(v) of a vertex v is
the smallest n ∈ ℕ such that v is a vertex in Γn. Similarly, its connectivity value
Conn(v) is the smallest n ∈ ℕ such that v and v0 can be joined by a path in Γn; it
is set to ∞ if no such path exists in Γ.
The isolation value of a vertex v in a filtered based graph is defined as
Isol(v) = Conn(v) − Filt(v),
a number in ℕ ∪ {∞}. A vertex is said to be isolated if its isolation value is positive.
We introduce an equivalence relation on isolated vertices. Two belong to the same
_isolated component if they have the same connectivity value, say n, and they can be_
joined by a path in Γn−1. The isolation value of a component is the maximum of
the isolation values of its elements.
We can interpret these invariants using the framework of topological data analysis,
see for example [CV22]. Specifically, the set of isolated components of a based
filtered graph Γ corresponds to the multiset of bars in the barcode of its reduced
persistent 0-homology. Additionally, the isolation value of an isolated component
corresponds to the length of its associated bar.
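These quantities can also be computed directly on a finite filtered based graph, without persistent-homology machinery, by growing the component of the base vertex level by level. The sketch below assumes the filtration is given as a function on vertices, with an edge entering as soon as both endpoints are present (a full filtration); for the graphs studied next one would restrict to the finite subgraphs Γ^ℓ, and the computations reported in Figures 11 and 12 were in fact done with the TDA libraries cited in footnote 14.

```python
from collections import defaultdict

def isolation_values(filt, edges, base):
    """Isol(v) = Conn(v) - Filt(v) for every vertex of a filtered based graph.
    `filt` maps each vertex to its filtration value (the base vertex must map to 0);
    `edges` is an iterable of vertex pairs."""
    adjacency = defaultdict(list)
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    conn = {base: 0}                       # connectivity values discovered so far
    for n in sorted(set(filt.values())):   # grow the base component level by level
        frontier = list(conn)
        seen = set(frontier)
        while frontier:
            u = frontier.pop()
            for w in adjacency[u]:
                if w not in seen and filt[w] <= n:
                    seen.add(w)
                    conn[w] = n            # first level at which w reaches the base
                    frontier.append(w)
    return {v: conn.get(v, float("inf")) - filt[v] for v in filt}

# Toy example: vertices base, a, b form a path; c has no edges, so Isol(c) = infinity.
print(isolation_values({"base": 0, "a": 1, "b": 2, "c": 1},
                       [("base", "a"), ("a", "b")], "base"))
```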
6.1.2. Experimental results. Let Γ^ℓ be the full subgraph of Γ^{AC(2)}, with the induced
filtration, consisting of all presentations with a connectivity value less than or equal
to ℓ. Explicitly, Γ^ℓ includes all vertices that can be connected to the trivial vertex
via paths containing only presentations of length at most ℓ.
We will denote by v(ℓ) and e(ℓ) the number of vertices and edges of Γ^ℓ. Let us
denote by ic(ℓ, k) the number of isolated components with isolation value k. Figure 11
summarizes our results for the classic AC-moves, whereas Figure 12 does so for their
prime version.[14]
6.2. Neighborhoods. Let us return to our data set of 1190 presentations in the
Miller–Schupp series for n ≤ 7 and length(w) ≤ 7. Using the methods described
in Subsection 4.2, we trained a PPO agent that successfully solved 417 of these
presentations. We will refer to the set of these 417 presentations as PPO-solved
and the remaining 773 presentations as PPO-unsolved. Our goal is to analyze the
relationship between these labels and the sizes of their respective AC neighborhoods.
A presentation is considered to be in the k-step neighborhood of another if they can
be connected by applying at most k AC-moves.
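Neighborhood sizes of this kind can be computed with a depth-bounded breadth-first search; a sketch (reusing a `neighbors` function enumerating single AC-moves, as in the earlier sketches) is given below. Whether the starting presentation itself is counted is a convention; here it is included.

```python
def neighborhood_size(start, neighbors, k=5):
    """Number of presentations reachable from `start` by at most k AC-moves
    (breadth-first search truncated at depth k, counting `start` itself)."""
    seen = {start}
    frontier = [start]
    for _ in range(k):
        next_frontier = []
        for u in frontier:
            for v in neighbors(u):
                if v not in seen:
                    seen.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return len(seen)
```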
14For this task we used giotto-TDA version 5.1 [Tau+21]. Specifically, its binding of the `SimplexTree` data structure introduced in [BM14] and implemented in GUDHI [Mar+14].
|ℓ|v(ℓ)|e(ℓ)|ic(ℓ, 1)|ic(ℓ, 2)|ic(ℓ, 3)|
|---|---|---|---|---|---|
|3|36|72|0|0|0|
|4|100|248|0|0|0|
|5|388|1072|0|0|0|
|6|884|2376|0|0|0|
|7|3892|10775|0|0|0|
|8|9172|25675|0|0|0|
|9|37428|106513|0|0|0|
|10|84996|239733|0|0|0|
|11|350356|1002439|4|0|0|
|12|791140|2251375|16|0|0|
|13|3238052|9321629|72|4|0|
|14|7199908|20573343|144|4|0|
|15|29243812|84391763|508|52|8|
|16|64623652|185162236|1034|88|20|
Figure 11. Classical AC-moves
|ℓ|v(ℓ)|e(ℓ)|ic(ℓ, 1)|ic(ℓ, 2)|ic(ℓ, 3)|
|---|---|---|---|---|---|
|3|36|40|3|0|0|
|4|100|152|3|0|0|
|5|388|712|3|0|0|
|6|884|1528|3|0|0|
|7|3892|6984|3|0|0|
|8|9172|16728|3|0|0|
|9|37428|69752|3|0|0|
|10|84996|155752|3|0|0|
|11|350356|655928|19|0|0|
|12|791140|1467080|67|0|0|
|13|3238052|6107112|243|16|0|
|14|7199908|13414744|483|16|0|
|15|29243812|55306744|1819|136|32|
|16|64623652|120824232|3923|208|80|
Figure 12. Prime AC-moves
6.2.1. Experimental results. There are 131 distinct neighborhood sizes in our data.
Their basic statistics are
|Min|Max|Mean|Median|
|---|---|---|---|
|72,964|89,872|89,532|89,859|
A more detailed description of the frequency of values is presented in Figure 13.
The largest neighborhood size accounts for nearly a third of all considered
presentations. However, it represents only 2.4% of PPO-solved presentations, while
constituting almost half (48.3%) of the PPO-unsolved presentations. For more
details, please refer to Figure 14.

Figure 13. Sizes of the 5-step neighborhood of all considered presentations in the
Miller–Schupp series. We group neighborhood sizes whose representation is below 2.5%.
In contrast, using BFS, these proportions are 7.1% and 52.5%, respectively.
Another notable feature visible in Figure 14 is that just three neighborhood
sizes account for over three-quarters of all PPO-unsolved presentations. When
considering six neighborhood sizes, this proportion rises to 96.9%. In fact, only twelve
neighborhood sizes are present among PPO-unsolved presentations, whereas all 131
sizes appear among PPO-solved presentations. The most common neighborhood size
for PPO-solved presentations is 89,560, representing only 17.3% of them. Moreover,
54.2% of all PPO-solved presentations have a neighborhood size shared by less than
2.5% of other PPO-solved presentations.
As we observed, having a maximal neighborhood size provides significant insight
into whether a presentation is labeled as PPO-solved or PPO-unsolved. Additionally,
the minimum neighborhood size among PPO-unsolved presentations—89,573—is
also quite telling, as 54% of PPO-solved presentations have neighborhood sizes
smaller than this value. This percentage can be further improved by considering
that the neighborhood sizes of PPO-unsolved presentations are concentrated within
three specific bands. Please refer to Figure 15 for more details. We find that
64.3% of PPO-solved presentations fall outside the three bands [89,575, 89,575],
[89,715, 89,831], and [89,844, 89,872], which together contain over 99% of PPO-unsolved
presentations. By narrowing the last band to [89,859, 89,872], these
three bands now encompass the neighborhood sizes of over 90% of PPO-unsolved
presentations, while their complement includes 77.2% of PPO-solved presentations.
One might expect that enhancing the discriminatory power of n-neighborhoods
could be achieved by incorporating features beyond their size. We explored two
additional types of features, but surprisingly, they only marginally improved the
accuracy of PPO-solved/unsolved predictions. The first type was based on node
centrality, while the second focused on spectral features of the neighborhood graphs.
The latter was particularly intriguing, given the emphasis on Markov processes
and the well-known relationship between random walks on graphs and the graph
Laplacian.

Figure 14. Pie charts for the neighborhood size of PPO-solved and PPO-unsolved
presentations. We grouped sizes with representation below 2.5%.

Figure 15. Histograms with 6 and 26 bins respectively of the neighborhood sizes of the
417 PPO-solved and 773 PPO-unsolved presentations.
7. Language Modeling
In this section, we discuss a model for the “language” of balanced presentations.
Each presentation with two relators is a sequence made of six letters, also known
as “tokens” in the nomenclature of Natural Language Processing, i.e. x, y, x^{-1},
and y^{-1}, and two “stop tokens”: one that separates two relators of a presentation
and another that marks the end of a presentation. Given this vocabulary V of six
tokens, we can ask what is the probability p(t_1, . . ., t_N), for t_i ∈ V, of the occurrence
of a specific presentation in the space of all balanced presentations. Using the chain
rule of probability theory,
$$p(t_1 \cdots t_N) = \prod_{i=1}^{N} p(t_i \mid t_1 \cdots t_{i-1}).$$

Here p(t_N | t_1 · · · t_{N−1}), often called the N-gram probability distribution, is the
probability of a token t_N following a sequence of tokens t_1 · · · t_{N−1}. To model
the language of balanced presentations, we can alternatively estimate the N-gram
probability distributions for all N.
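As a small illustration of the chain rule above, the probability of a presentation can be accumulated token by token in log space; here `cond_prob` is a stand-in for any estimate of p(t_i | t_1 · · · t_{i−1}) and is not part of the paper's code.

```python
import math

def sequence_log_prob(tokens, cond_prob):
    """Chain rule: log p(t_1 ... t_N) = sum_i log p(t_i | t_1 ... t_{i-1})."""
    total = 0.0
    for i, tok in enumerate(tokens):
        total += math.log(cond_prob(tokens[:i], tok))
    return total

# Example with a (fake) uniform model over the six-token vocabulary:
uniform = lambda prefix, tok: 1.0 / 6.0
print(sequence_log_prob(["x", "y", "X", "<end>"], uniform))  # = 4 * ln(1/6)
```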
Over the last few years, Transformer models have shown great success in modeling
human-understandable languages to the extent that these models can create text
almost indistinguishable from that of a human expert. Specifically, the architecture
used for modeling language is the auto-regressive “decoder-only” Transformer, which
we review in detail in Subsection 7.1. In Subsection 7.2, we discuss the method
with which we generate the dataset required for training the model. Finally, in
Subsection 7.3, we share details of some insights we learned from this process.
7.1. Transformers: a review. Here, we give a short review of the architecture of
a decoder-only transformer. For more details, see [Vas+23; Elh+21; Dou23].
Given an input sequence t_1, t_2, . . ., t_N, a decoder-only transformer predicts the
probability distribution p(t | t_1, t_2, . . ., t_N) over the set V of tokens of size n_vocab.
The probability is computed by applying the softmax function to the logits T(t),
which are estimated by applying the following sequence of operations.[15] First,
assign to each token in the vocabulary a distinct label in the range 1, 2, . . ., n_vocab,
re-writing the original sequence as a sequence of integers. We will label these integers
also as t_i. Next, write the sequence in terms of “one-hot encoded vectors”, i.e. a
matrix t ∈ R^{N × n_vocab} such that

$$t_{ij} = \delta_{j t_i},$$

and embed the sequence in a d_model-dimensional vector space,[16]

$$x_0 = (W_P \otimes 1 + 1 \otimes W_E)\, t.$$

Here, W_P ∈ R^{d_model × N} and W_E ∈ R^{d_model × n_vocab} are matrices of learnable parameters,
known as the “positional embedding” and “token embedding” matrices.

An L-layer transformer alternates between applying a “multi-head attention layer”
(∑_{h∈H} h) and an “MLP-layer” (m) L times. For i = 0, . . ., L − 1,

$$x_{2i+1} = x_{2i} + \sum_{h \in H} h(\mathrm{LN}(x_{2i})), \qquad
x_{2i+2} = x_{2i+1} + m(\mathrm{LN}(x_{2i+1})).$$

[15] The softmax function, Softmax: R^n → (0, 1)^n, is defined as Softmax(x)_i = e^{x_i} / ∑_{j=1}^{n} e^{x_j}.

[16] Here, t and all x_j are two-dimensional tensors. Hence, it is appropriate to apply tensors of
linear transformations to them. Often in a transformer architecture, these operations are of the
form 1 ⊗ · · ·; in these cases, we drop the identity transformation and simply write the operation as
· · ·. For example, 1 ⊗ W_U, 1 ⊗ W_I^m, 1 ⊗ W_O^m, etc. In this case, we will sometimes write W_U, W_I^m,
W_O^m respectively, assuming it is clear from the context and the dimensionality of these matrices
that they are tensored with identity transformations.
Each x_j is an element of R^{N × d_model}, with the interpretation that its i-th row is
the embedding of the sequence t_1, . . ., t_i in the embedding space R^{d_model} as learned
by the preceding j + 1 operations. Finally, one applies an “unembedding layer”,
W_U ∈ R^{n_vocab × d_model}, to convert the output of the final layer to an n_vocab-dimensional
vector of logits that estimate the sought-after probability distribution:

$$T(t) = W_U\, x_{2L-1}, \qquad p(t) = \mathrm{Softmax}(T(t)).$$

The functions LN, m and h are defined as follows. LN is the LayerNorm operation
that normalizes the input of each layer to make the optimization process more
stable ([BKH16]):

$$\mathrm{LN}(x) = (1 \otimes \mathrm{diag}(\gamma))\, \frac{x - \bar{x}}{\sqrt{\mathrm{var}(x)}} + 1 \otimes \beta.$$

Here, x̄ and var(x) are the mean and variance of each row of x, and γ, β ∈ R^{d_model} are
learnable parameters. The MLP-layer m is a non-linear operation,

$$m(x) = W_O^m \max(W_I^m x,\, 0),$$

with learnable parameters W_I^m ∈ R^{d_MLP × d_model} and W_O^m ∈ R^{d_model × d_MLP}. It is standard
to set d_MLP = 4 d_model.

Finally, the multi-headed attention layer ∑_{h∈H} h is a sum of n_heads “attention-head”
operations h, where

$$h(x) = (A^h(x) \otimes W_O^h W_V^h)\, x.$$

Here, W_V^h ∈ R^{d_head × d_model} and W_O^h ∈ R^{d_model × d_head} are matrices of learnable parameters;
d_head is the “attention-head dimension”, which satisfies d_head × n_heads = d_model; and
the attention matrix A^h is computed with the help of learnable matrices W_Q^h, W_K^h ∈ R^{d_head × d_model},

$$A^h(x) = \mathrm{Softmax}^{\star}\!\left( \frac{x^T (W_Q^h)^T W_K^h\, x}{\sqrt{d_{head}}} \right).$$

The attention matrix A^h(x) is an N × N matrix, with the interpretation that A^h(x)_{ij} is
the “attention” paid to the token t_j in estimating p(t_{i+1} | t_1, . . ., t_i). Softmax^⋆ is a
variant of the Softmax function suitable for auto-regressive tasks: it sets the upper
triangular part of its input to zeros before applying the Softmax operation. That is,
future tokens, t_k for k > i, play no role in the prediction of p(t_{i+1} | t_1, . . ., t_i).
We train the transformer model by minimizing the cross-entropy loss between the
distributions of predicted and correct labels for the next tokens in a sequence. The
parallelism offered by the processing of all tokens in a sequence at once is extremely
beneficial for efficient training of the model for the language modeling task.
In practice, the embedding matrix WE and the unembedding matrix WU are
often “tied” together, i.e. W_E = W_U^T [PW17; IKS17]. The rows of W_E = W_U^T are
interpreted as the embeddings of words/sentences, to which one may apply the
usual operations of a vector space [Ben+03; MYZ13]. For example, the cosine of
the angle between two embedding vectors, also known as the “cosine similarity”, is
often used to measure the similarity between two texts. Two semantically similar
texts have higher cosine similarity between them, while semantically different texts
correspond to (almost) orthogonal vectors in the embedding space.
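For concreteness, the following PyTorch sketch mirrors the decoder-only architecture reviewed above, instantiated with the hyperparameters reported in Appendix A (8 layers, d_model = 512, 4 heads, context length 1024). It is an illustrative reimplementation under our own assumptions (learned positional embeddings, pre-LayerNorm residual blocks, and tied embedding/unembedding matrices), not the exact training code used for the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(                       # m(x) = W_O max(W_I x, 0), d_MLP = 4 d_model
            nn.Linear(d_model, 4 * d_model), nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Causal mask: token i may only attend to tokens j <= i.
        n = x.size(1)
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out                                # residual after the attention layer
        x = x + self.mlp(self.ln2(x))                   # residual after the MLP layer
        return x

class DecoderOnlyTransformer(nn.Module):
    def __init__(self, n_vocab=6, d_model=512, n_heads=4, n_layers=8, n_ctx=1024):
        super().__init__()
        self.tok_emb = nn.Embedding(n_vocab, d_model)   # W_E
        self.pos_emb = nn.Embedding(n_ctx, d_model)     # W_P
        self.blocks = nn.ModuleList(Block(d_model, n_heads) for _ in range(n_layers))
        self.unembed = nn.Linear(d_model, n_vocab, bias=False)   # W_U
        self.unembed.weight = self.tok_emb.weight       # weight tying: W_E = W_U^T

    def forward(self, tokens):                          # tokens: (batch, N) integer labels
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        for block in self.blocks:
            x = block(x)
        return self.unembed(x)                          # logits T(t), shape (batch, N, n_vocab)

# Training minimizes cross-entropy between predicted and actual next tokens:
model = DecoderOnlyTransformer()
tokens = torch.randint(0, 6, (2, 16))
logits = model(tokens)
loss = F.cross_entropy(logits[:, :-1].reshape(-1, 6), tokens[:, 1:].reshape(-1))
```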
7.2. Training and Evaluation Datasets. We now discuss the training and
validation datasets used to train and evaluate our Transformer model. As our main
interest in this paper has been in the presentations of the Miller–Schupp series,
we generated a dataset of balanced presentations that are AC-equivalent to the
Miller–Schupp presentations. Specifically, we apply sequences of AC-moves to the
1190 presentations with n ≤ 7 and length(w) ≤ 7 discussed in Section 2, creating
a dataset of about 1.8 million presentations. Approximately 1 million of these
presentations are AC-equivalent to the presentations that remained unsolved by
greedy search (cf. Section 3). Only a small fraction (roughly 15 percent) of the
original Miller–Schupp presentations was part of this dataset.
The dataset is tokenized using six tokens: two stop tokens and one token each for
the two generators and their inverses. The tokenized dataset had about 2.17 × 10^8
tokens. As our goal is to get insights into properties that distinguish GS-solved
and GS-unsolved presentations, we performed an exploratory data analysis of the
two subsets of data associated to these presentations. We plot the percentage of
appearance of each token for these subsets in Figure 16. The ratio of frequency of
y^{±1} to the frequency of x^{±1} is higher in the GS-unsolved dataset. This is likely
because the GS-unsolved presentations have larger n, and larger n corresponds to a
higher number of occurrences of y^{±1} in the Miller–Schupp presentation. Interestingly,
this effect remains in the dataset even after applying thousands of AC-moves to the
original presentations.
We paid special attention to ensure that our dataset contains presentations of a
wide range of lengths so as not to bias our model towards learning trends specific to
any fixed length. To this end, we devised an algorithm (Algorithm 6 in Appendix C)
that creates an almost uniform distribution over the lengths of the presentations.
(See Figure 17.) We set aside 10% of our entire data for validation.
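A minimal sketch of the kind of tokenization described above; the particular characters and integer ids are our own illustrative choices rather than the exact encoding used to build the dataset.

```python
# Six-token vocabulary: generators, inverses, and two stop tokens.
VOCAB = {"x": 0, "X": 1, "y": 2, "Y": 3, "<sep>": 4, "<end>": 5}   # X = x^{-1}, Y = y^{-1}

def tokenize(relator1, relator2):
    """Encode a two-relator balanced presentation as a sequence of integer token ids."""
    tokens = list(relator1) + ["<sep>"] + list(relator2) + ["<end>"]
    return [VOCAB[t] for t in tokens]

# A toy two-relator input:
ids = tokenize("xxxYY", "xyxXYX")
print(ids)
```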
7.3. Results. A randomly initialized model with the initialization scheme given
in [Rad+19] has a cross-entropy loss of −ln(1/n_vocab) ≈ 1.7918. With training, we
could achieve a validation loss of 0.7337.[17] We used the untrained and the trained
model to get the embeddings of all 1190 presentations of the Miller–Schupp series
with n ≤ 7 and length(w) ≤ 7. We used t-SNE to project these embedding vectors
to a plane [MH08]. The plots are shown in grid in Figure 18.
Each row of Figure 18 corresponds to a fixed value of n. The left (resp. right)
column depicts t-SNE projections of embeddings obtained by an untrained (resp.
trained) model. t-SNE dependence on a distance measure: it learns to map vectors
that are closer together in the higher-dimensional space, with respect to this distance
measure, close together in the plane [MH08]. We used cosine simiarity between
embedding vectors as the distance measure for our plots. We note that the GS-solved
and GS-unsolved presentations seem to cluster much more in the plots in the right
column. This indicates that a trained Transformer model is able to distinguish
17We tuned the hyperparameters a little but it is quite likely that one can achieve a better
performing model with more hyperparameter tuning. Similarly, more training data will necessarily
help with the performance. We trained a Transformer model with hyperparameters given in
Appendix A.
Figure 16. Percentage of appearance of each token in the two subsets of the training
dataset that are equivalent to GS-solved and GS-unsolved presentations. To be clear,
we computed the percentages separately for each subset of the training data, i.e. the
heights of all blue (and orange) bars add separately to 100.
between GS-solved and GS-unsolved presentations to a good extent, albeit not
perfectly.[18]
Note that the training dataset contained no information about the ease of
solvability of a presentation. It also did not contain many presentations of the
Miller–Schupp series itself. Instead, it contained presentations that are AC-equivalent
to the Miller–Schupp series presentations. Our observation that a Transformer model
trained on this dataset can distinguish between the GS-solved and GS-unsolved
presentations indicates that:
a) There likely exists an invariant at the level of the “language“ of the balanced
presentations that distinguishes GS-solved vs GS-unsolved presentations.
b) This invariant survives application of thousands of AC-moves we used to
generate the training examples in our dataset.
Appendix A. Hyperparameters
Here we discuss the hyperparameters used to train the Proximal Policy Optimization (PPO) and Transformer models of Section 4 and Section 7 respectively.
The hyperparameters of PPO are given in Table 1. These hyperparameters were
18Note also that t-SNE admits a hyperparameter known as “perplexity”, and the projections
learned by t-SNE depend on the hyperparameter [WVJ16]. Thus, in general, t-SNE plots must be
interpreted with care. The plots shown in Figure 18 were all made with the perplexity value of
30. We checked however that the clusters of GS-solved and GS-unsolved presentations continue to
exist at a broad range of perplexity values.
Figure 17. Percentage of presentations in various ranges of lengths. Percentages were
computed independently for the two subsets of the dataset, corresponding to presentations
that are AC-equivalent to GS-solved and GS-unsolved presentations. We used Algorithm 6
from Appendix C to ensure the almost-uniform distribution depicted here.
defined in the main text in Subsection 4.2. Each of the two networks —the actor
and the critic— was a 2-layer feed forward neural network with 512 neurons and
tanh non-linearities. We used the Adam optimizer for training.
The performance of PPO is known to be highly sensitive to various implementation
details in addition to the choice of hyperparameters [Hua+22b; Eng+20]. We used
the single-file implementation of PPO in CleanRL [Hua+22a], which has been well-benchmarked against the results of the original paper [Sch+17]. Following [Eng+20],
we used advantage normalization and clipped value loss. If the KL divergence
between the old and the updated policy exceeded the target KL divergence in a
mini-batch, we skipped the remaining mini-batches in the optimization phase to
avoid a large jump in policy. We did not ablate all of the choices of hyperparameters
to investigate the importance of each choice.
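The two implementation choices mentioned above can be sketched schematically as follows; this is an illustration in the spirit of the CleanRL implementation, with function and variable names of our own, not the exact code that was run.

```python
import torch

def clipped_value_loss(values, old_values, returns, clip_eps=0.2):
    """Value loss with the predicted values clipped around the old predictions."""
    v_clipped = old_values + torch.clamp(values - old_values, -clip_eps, clip_eps)
    return 0.5 * torch.max((values - returns) ** 2, (v_clipped - returns) ** 2).mean()

def optimization_phase(minibatches, update_fn, target_kl=0.01):
    """Skip the remaining mini-batches once the approximate KL exceeds the target."""
    for mb in minibatches:
        approx_kl = update_fn(mb)          # one gradient step; returns approximate KL
        if approx_kl > target_kl:          # avoid a large jump in policy
            break
```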
The Transformer model studied in Section 7 is an 8-layer transformer model
with the embedding space dimension of 512 and 4 attention heads. The context
window of the Transformer had length 1024. We used a batch size of 12 and a constant
learning rate of 6 × 10^{-5}. We trained for a total of 25000 iterations. We used
AdamW optimizer for training with hyperparameters β1 = 0.9 and β2 = 0.99. We
did not use any dropout during training.
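For reference, the stated Transformer training configuration can be collected in one place as follows; the key names and the placeholder module are our own.

```python
import torch

transformer_config = dict(
    n_layers=8, d_model=512, n_heads=4, n_ctx=1024,
    batch_size=12, learning_rate=6e-5, lr_schedule="constant",
    total_iterations=25_000, dropout=0.0,
)

# AdamW with the stated betas (a placeholder module stands in for the full model):
model = torch.nn.Linear(512, 512)
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5, betas=(0.9, 0.99))
```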
Figure 18. t-SNE plots depicting embeddings of the Miller–Schupp presentations MS(n, w).
The left (right) column shows embeddings learned by an untrained (trained) transformer
model; each row corresponds to a fixed value of n = 1, . . ., 7, and points are labeled as
GS-solved or GS-unsolved.
|Hyperparameter|Value|
|---|---|
|Horizon (T)|200|
|Rollout Length (T′)|200|
|Number of parallel actors|28|
|Total Number of Rollouts|∼ 2 × 10^5|
|Maximum Learning Rate|1.0 × 10^{-4}|
|Minimum Learning Rate|0|
|Learning Rate Schedule|Linear Decay|
|Number of epochs|1|
|Number of mini-batches|4|
|Optimization mini-batch size|1400|
|Discount (γ)|0.999|
|GAE parameter (λ)|0.95|
|Clipping parameter (ϵ)|0.2|
|Value Loss coefficient (c1)|0.5|
|Entropy Loss coefficient (c2)|0.01|
|Adam epsilon parameter|10^{-5}|
|Target KL divergence|0.01|

Table 1. Table of Hyperparameters
Appendix B. Neighborhood constructions
B.1. Neighborhoods of the identity. For any ℓ _∈{3, . . ., 16}, we constructed a_
neighborhood of the identity using an algorithm based on BFS search (Algorithm 4).
This neighborhood contains all presentations that can be connected to the identity
via a path of AC-moves, where each presentation in the path has a length less than
or equal to ℓ, that is, the full based subgraph containing vertices with connectivity
value less than or equal to ℓ. We consider the relators of a presentation as a set
(meaning that the order of relators is not important; implemented as a tuple of
relators in lexicographic order).
B.2. Neighborhoods for MS series. We define the n-neighborhood of a balanced
presentation π as the set of all balanced presentations that can be obtained by
applying at most n AC-moves to π. We used Algorithm 5, a variation of BFS, to
generate 5-neighborhoods of presentations in the Miller–Schupp series. As before,
we disregard the order of the relators.
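A minimal Python sketch of the step-bounded BFS of Algorithm 5 below, with relators treated as an unordered pair; the `apply_move` routine enumerating AC-moves is assumed to be provided elsewhere.

```python
from collections import deque

def n_neighborhood(start, moves, apply_move, n):
    """Return all presentations reachable from `start` with at most n AC-moves."""
    canon = lambda pres: tuple(sorted(pres))        # ignore the order of the relators
    dist = {canon(start): 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[canon(u)] == n:                     # do not expand beyond n steps
            continue
        for m in moves:
            child = apply_move(u, m)
            key = canon(child)
            if key not in dist:
                dist[key] = dist[canon(u)] + 1
                queue.append(child)
    return dist
```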
Appendix C. Language Modeling Dataset Generation
This appendix describes the method, Algorithm 6, used to generate the training
and evaluation datasets for the Transformer model, as referenced in Section 7.
Our aim was to create datasets featuring presentations of varying lengths. We
began with a presentation P0 from the Miller–Schupp series, where n ≤ 7 and
length(w) ≤ 7, setting a maximum relator length lmax = 128. Presentations were
generated in n = 128 phases, each phase allowing a maximum relator length
l_i ∼ U(l + i · l_inc, l + (i + 1) · l_inc). Here, l represents the longest relator length in
P_0 and l_inc = (l_max − l)/n is the incremental increase per phase. In each phase, we
**Algorithm 4 Breadth-First Search Algorithm Bounded by Size**
1: Input: A balanced presentation π, maximal size of presentation n
2: Output: Set of enumerated presentations connected to the starting presentation
that are achievable without exceeding the size limit, and set of edges with
filtrations
3: Initialize a queue Q, set of visited nodes visited, and numerical map name that
will enumerate presentations
4: Mark π as visited, put it into queue Q, and assign it the number 0
5: while Q is not empty do
6: _u ←_ top of Q _▷_ Remove the front node of Q
7: **for every AC-move m do**
8: _child ←_ _m(u)_
9: **if child’s size ≤** _n and child is not visited then_
10: Put child in Q and mark it as visited
11: Assign child the next available number
12: **end if**
13: **if child’s size ≤** _n and u’s number is smaller than child’s number then_
14: Return edge (u, child) with proper filtration
15: **end if**
16: **end for**
17: end while
**Algorithm 5 Breadth-First Search Algorithm Bounded by Number of Steps**
1: Input: A balanced presentation π, positive integer n
2: Output: n-neighborhood of π
3: Initialize a queue Q, set of visited nodes visited, and numerical map dist that
represents the minimal number of AC-moves needed to transform π into a given
presentation
4: Mark π as visited, put it into queue Q, and set its distance to 0
5: while Q is not empty do
6: _u ←_ top of Q _▷_ Remove the front node of Q
7: **for every AC-move m do**
8: _child ←_ _m(u)_
9: **if dist[u] < n and child is not in visited then**
10: Put child in Q and mark it as visited
11: Set dist[child] = dist[u] + 1
12: **end if**
13: **end for**
14: end while
15: return set visited
selected a presentation P from the previous phase and applied N = 1000 AC′ moves.
Any AC′ move that exceeded the length l_i resulted in no change.
We repeated this for all 1190 presentations in the Miller–Schupp series, ultimately
producing approximately 1.8 million balanced presentations. The length distribution
of these presentations is detailed in Figure 17.
**Algorithm 6 Transformer Dataset Generation**
1: Input:
_P0 – an initial presentation with l as the length of the longest relator_
_n – number of phases_
_m – number of presentations in each phase_
_N – number of AC[′]_ moves to apply in each phase
_lmax – upper bound on presentation lengths in the dataset_
2: Output:
Dataset (the final collection of presentations)
3: Dataset ←∅ _▷_ Initialize the dataset of all presentations
4: l_inc ← (l_max − l)/n _▷_ Increment for the maximum relator length per phase
5: **for i = 0 to n − 1 do** _▷_ Loop over each phase
6: **for j = 1 to m do** _▷_ Generate m presentations for each phase
7: l_i ∼ U(l + i · l_inc, l + (i + 1) · l_inc) _▷_ Sample maximum relator length
8: P ← (i = 1) ? P_0 : Dataset[(i − 1) · m + j − 1]
9: **for k = 1 to N do** _▷_ Apply N AC′ moves with relator length l_i
10: A ∼ AC′ Moves
11: P ← A · P
12: **end for**
13: Dataset ← Dataset ∪{P _}_ _▷_ Add the presentation P to the Dataset
14: **end for**
15: end for
**Funding. The work of A.S. is supported by the US Department of Energy grant**
DE-SC0010008 to Rutgers University. The authors acknowledge the contributions
of Office of Advanced Research Computing (OARC) at Rutgers University for
providing access to the Amarel cluster and other computing resources. A.M.’s work
is supported by NSERC grants RES000678 and R7444A03. A.M. also gratefully
acknowledges the excellent working conditions provided by the Max Planck Institute
for Mathematics in Bonn. The work of P.K. and B.L. is supported by the SONATA
grant no. 2022/47/D/ST2/02058 funded by the Polish National Science Centre.
This research was carried out with the support of the Interdisciplinary Centre for
Mathematical and Computational Modelling at the University of Warsaw (ICM
UW). The work of S.G. is supported in part by a Simons Collaboration Grant on
New Structures in Low-Dimensional Topology, by the NSF grant DMS-2245099, and
by the U.S. Department of Energy, Office of Science, Office of High Energy Physics,
under Award No. DE-SC0011632.
References
[AC65] James J. Andrews and Morton L. Curtis. “Free groups and handlebodies”. Proceedings of the American Mathematical Society 16.2 (1965)
(cit. on p. 5).
[Ach18] [Joshua Achiam. Spinning Up in Deep Reinforcement Learning. Online](https://spinningup.openai.com/en/latest/)
resource. 2018 (cit. on p. 14).
[AK85] Selman Akbulut and Robion Kirby. “A potential smooth counterexample in dimension 4 to the Poincare conjecture, the Schoenflies conjecture,
and the Andrews–Curtis conjecture”. Topology 24.4 (1985) (cit. on
pp. 4, 6).
[Bag21] [Neda Bagherifard. Three-manifolds with boundary and the Andrews-](http://arxiv.org/abs/2109.13844)
_[Curtis transformations. 2021 (cit. on p. 11).](http://arxiv.org/abs/2109.13844)_
[Ben+03] [Yoshua Bengio et al. “A Neural Probabilistic Language Model”. J.](http://dl.acm.org/citation.cfm?id=944919.944966)
_Mach. Learn. Res. 3 (Mar. 2003) (cit. on p. 29)._
[BKH16] [Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer](http://arxiv.org/abs/1607.06450)
_[Normalization. 2016 (cit. on p. 29).](http://arxiv.org/abs/1607.06450)_
[BM06] R. Sean Bowman and Stephen B. McCaul. “Fast searching for Andrews–
Curtis trivializations”. Experimental Mathematics 15.2 (2006) (cit. on
p. 6).
[BM14] Jean-Daniel Boissonnat and Cl´ement Maria. “The simplex tree: An
efficient data structure for general simplicial complexes”. Algorithmica
70 (2014) (cit. on p. 24).
[BM93] Robert G. Burns and Olga Macedonska. “Balanced Presentations of
the Trivial Group”. Bulletin of the London Mathematical Society 25
(1993) (cit. on p. 18).
[Bri15] Martin R. Bridson. “The complexity of balanced presentations and the
Andrews–Curtis conjecture”. arXiv preprint arXiv:1504.04187 (2015)
(cit. on p. 6).
[Bur+99] R. Burns et al. “Recalcitrance in groups”. Bulletin of the Australian
_Mathematical Society 60.2 (1999) (cit. on p. 18)._
[Cob+20] Karl Cobbe et al. “Leveraging procedural generation to benchmark reinforcement learning”. Proceedings of the 37th International Conference
_on Machine Learning. ICML’20. JMLR.org, 2020 (cit. on p. 3)._
[CV22] [Gunnar Carlsson and Mikael Vejdemo-Johansson. Topological data](https://doi.org/10.1017/9781108975704)
_[analysis with applications. Cambridge University Press, Cambridge,](https://doi.org/10.1017/9781108975704)_
2022 (cit. on p. 24).
[Dou23] [Michael R. Douglas. Large Language Models. 2023 (cit. on p. 28).](http://arxiv.org/abs/2307.05782)
[DU24] [Lennart Dabelow and Masahito Ueda. Symbolic Equation Solving via](https://arxiv.org/abs/2401.13447)
_[Reinforcement Learning. 2024 (cit. on p. 3).](https://arxiv.org/abs/2401.13447)_
[Elh+21] Nelson Elhage et al. “A Mathematical Framework for Transformer
Circuits”. Transformer Circuits Thread (2021). https://transformercircuits.pub/2021/framework/index.html (cit. on p. 28).
[Eng+20] [Logan Engstrom et al. Implementation Matters in Deep Policy Gradi-](http://arxiv.org/abs/2005.12729)
_[ents: A Case Study on PPO and TRPO. 2020 (cit. on p. 32).](http://arxiv.org/abs/2005.12729)_
[HR] George Havas and Colin Ramsay. Breadth-first search and Andrews–
_Curtis conjecture (cit. on p. 6)._
[Hua+22a] [Shengyi Huang et al. “CleanRL: High-quality Single-file Implementa-](http://jmlr.org/papers/v23/21-1342.html)
[tions of Deep Reinforcement Learning Algorithms”. Journal of Machine](http://jmlr.org/papers/v23/21-1342.html)
_Learning Research 23.274 (2022) (cit. on p. 32)._
[Hua+22b] [Shengyi Huang et al. “The 37 Implementation Details of Prox-](https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/)
[imal Policy Optimization”. ICLR Blog Track. https://iclr-blog-](https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/)
track.github.io/2022/03/25/ppo-implementation-details/. 2022 (cit. on
p. 32).
[IKS17] [Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying Word](http://arxiv.org/abs/1611.01462)
_[Vectors and Word Classifiers: A Loss Framework for Language Model-](http://arxiv.org/abs/1611.01462)_
_[ing. 2017 (cit. on p. 29).](http://arxiv.org/abs/1611.01462)_
[KS16] [Krzysztof Krawiec and Jerry Swan. Distance Metric Ensemble Learning](http://arxiv.org/abs/1606.01412)
_[and the Andrews–Curtis Conjecture. 2016 (cit. on p. 6).](http://arxiv.org/abs/1606.01412)_
[Lis17] Boris Lishak. “Balanced finite presentations of the trivial group”.
_Journal of Topology and Analysis 9.02 (2017) (cit. on p. 6)._
[Mar+14] Cl´ement Maria et al. “The gudhi library: Simplicial complexes and
persistent homology”. Mathematical Software–ICMS 2014: 4th Inter_national Congress, Seoul, South Korea, August 5-9, 2014. Proceedings_
_4. Springer. 2014 (cit. on p. 24)._
[MH08] [Laurens van der Maaten and Geoffrey Hinton. “Visualizing Data using](http://jmlr.org/papers/v9/vandermaaten08a.html)
[t-SNE”. Journal of Machine Learning Research 9.86 (2008) (cit. on](http://jmlr.org/papers/v9/vandermaaten08a.html)
p. 30).
[Mia03] Alexei D. Miasnikov. “Genetic algorithms and the Andrews–Curtis
conjecture”. arXiv preprint math/0304306 (2003) (cit. on p. 6).
[MMS02] Alexei D. Myasnikov, Alexei G. Myasnikov, and Vladimir Shpilrain.
“On the Andrews–Curtis equivalence”. Contemporary Mathematics 296
(2002) (cit. on pp. 7, 11).
[MS99] Charles Miller and Paul Schupp. Some presentations of the trivial
_group. 10.1090/conm/250/03848. 1999 (cit. on p. 6)._
[MSZ16] [Jeffrey Meier, Trent Schirmer, and Alexander Zupan. “Classification](https://doi.org/10.1090/proc/13105)
[of trisections and the generalized property R conjecture”. Proc. Amer.](https://doi.org/10.1090/proc/13105)
_Math. Soc. 144.11 (2016) (cit. on p. 11)._
[MYZ13] [Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. “Linguistic Regu-](https://aclanthology.org/N13-1090)
[larities in Continuous Space Word Representations”. Proceedings of](https://aclanthology.org/N13-1090)
_the 2013 Conference of the North American Chapter of the Association_
_for Computational Linguistics: Human Language Technologies. Ed. by_
Lucy Vanderwende, Hal Daum´e III, and Katrin Kirchhoff. Atlanta,
Georgia: Association for Computational Linguistics, June 2013 (cit. on
p. 29).
[OAC23] [OpenAI, Anthropic, and Cohere. Needle In A Haystack — Pressure](https://github.com/gkamradt/LLMTest_NeedleInAHaystack)
_[Testing LLMs. Online resource. 2023 (cit. on p. 3).](https://github.com/gkamradt/LLMTest_NeedleInAHaystack)_
[PG23] [Gabriel Poesia and Noah D. Goodman. “Peano: Learning Formal](http://arxiv.org/abs/2211.15864)
[Mathematical Reasoning”. Phil. Trans. R. Soc. A.381.2251 (2023) (cit.](http://arxiv.org/abs/2211.15864)
on p. 3).
[PU19] Dmitry Panteleev and Alexander Ushakov. “Conjugacy search problem
and the Andrews–Curtis conjecture”. Groups Complexity Cryptology
11.1 (2019) (cit. on p. 6).
[PW17] [Ofir Press and Lior Wolf. Using the Output Embedding to Improve](http://arxiv.org/abs/1608.05859)
_[Language Models. 2017 (cit. on p. 29).](http://arxiv.org/abs/1608.05859)_
[Rad+19] [Alec Radford et al. “Language Models are Unsupervised Multitask](https://api.semanticscholar.org/CorpusID:160025533)
[Learners”. 2019 (cit. on p. 30).](https://api.semanticscholar.org/CorpusID:160025533)
[Sch+17] [John Schulman et al. Proximal Policy Optimization Algorithms. 2017](http://arxiv.org/abs/1707.06347)
(cit. on pp. 4, 14, 32).
[Sch+18] [John Schulman et al. High-Dimensional Continuous Control Using](http://arxiv.org/abs/1506.02438)
_[Generalized Advantage Estimation. 2018 (cit. on p. 14).](http://arxiv.org/abs/1506.02438)_
[Tau+21] [Guillaume Tauzin et al. “giotto-tda: A Topological Data Analysis](http://jmlr.org/papers/v22/20-325.html)
[Toolkit for Machine Learning and Data Exploration”. Journal of Ma-](http://jmlr.org/papers/v22/20-325.html)
_chine Learning Research 22.39 (2021) (cit. on p. 24)._
[Tri+24] Trieu H Trinh et al. “Solving olympiad geometry without human
demonstrations”. Nature 625.7995 (2024) (cit. on p. 3).
[Vas+23] [Ashish Vaswani et al. Attention Is All You Need](http://arxiv.org/abs/1706.03762) . 2023 (cit. on p. 28).
[Wad94] [Masaaki Wada. “Twisted Alexander polynomial for finitely presentable](https://www.sciencedirect.com/science/article/pii/0040938394900132)
[groups”. Topology 33.2 (1994) (cit. on p. 12).](https://www.sciencedirect.com/science/article/pii/0040938394900132)
[WVJ16] [Martin Wattenberg, Fernanda Viegas, and Ian Johnson. “How to Use](http://distill.pub/2016/misread-tsne)
[t-SNE Effectively”. Distill (2016) (cit. on p. 31).](http://distill.pub/2016/misread-tsne)
A.S. — NHETC, Department of Physics and Astronomy, Rutgers University, Piscataway, New Jersey 08854, USA
A.M. — Department of Mathematics, Western University, ON, Canada
B.L. — Institute of Mathematics, University of Warsaw, ul. Banacha 2, 02-097
Warsaw, Poland
A.G. — Polygon Zero
P.K. — Institute of Mathematics, University of Warsaw, ul. Banacha 2, 02-097
Warsaw, Poland
S.G. — Richard N. Merkin Center for Pure and Applied Mathematics, California
Institute of Technology, Pasadena, CA 91125, USA
## When Not to Answer: Evaluating Prompts on GPT Models for Effective Abstention in Unanswerable Math Word Problems
**Asir Saadat[1]** **, Tasmia Binte Sogir[1]** **, Md Taukir Azam Chowdhury[2]** **, Syem Aziz[1]**
1Islamic University of Technology
2University of California, Riverside
{asirsaadat, tasmia, syemaziz}@iut-dhaka.edu
[email protected]
**Abstract**
Large language models (LLMs) are increasingly relied upon to solve complex mathematical word problems. However, being susceptible to hallucination, they may generate inaccurate results when presented with unanswerable
questions, raising concerns about their potential harm. While GPT models are now widely
used and trusted, the exploration of how they
can effectively abstain from answering unanswerable math problems and the enhancement
of their abstention capabilities has not been
rigorously investigated. In this paper, we investigate whether GPTs can appropriately respond
to unanswerable math word problems by applying prompts typically used in solvable mathematical scenarios. Our experiments utilize the
Unanswerable Word Math Problem (UWMP)
dataset, directly leveraging GPT model APIs.
Evaluation metrics are introduced, which integrate three key factors: abstention, correctness
and confidence. Our findings reveal critical
gaps in GPT models and the hallucination it suffers from for unsolvable problems, highlighting
the need for improved models capable of better
managing uncertainty and complex reasoning
in math word problem-solving contexts.
Answerable Question: “Jerry had some action figures on a shelf in his room. Later he
added 2 more action figures to the shelf and removed 7 of the old ones. If there are 10
action figures on his shelf now. How many action figures did he have initially on the
shelf?” (GPT-4 answer: 15.0)

Unanswerable Question: “Jerry had some action figures on a shelf in his room. Later he
added 2 more action figures to the shelf and removed 7 of the old ones. If there are 10
action figures on his shelf now. How many anime figures did he have initially on the
shelf?” (GPT-4 answer: 15.0)

Figure 1: Answerable and unanswerable question given to GPT-4. Red highlights the
modifications made to the original question, making it unanswerable and resulting in an
incorrect response.
**1** **Introduction**

Large Language Models (LLMs) have become an integral part of various real-world
applications, ranging from content generation to code completion, and even medical
advice (Brown, 2020; Bommasani et al., 2021; Wang et al., 2024a; Biswas, 2023). Among
these applications, LLMs are increasingly employed to solve mathematical word problems
(Hendrycks et al., 2021; Austin et al., 2021; Xu et al., 2024a; Wei et al., 2022),
assisting users in both academic and practical scenarios. The rise of LLMs, particularly
models like GPT-3 and GPT-4, has democratized access to computational tools that were
once the domain of experts (Chen et al., 2024b; Hariri, 2023; Kalyan, 2023; Lingo, 2023;
Huang and Tan, 2023). Their ability to understand, process, and respond to queries has
revolutionized problem-solving in everyday tasks, especially in education and
professional environments (Wardat et al., 2023; Xiao et al., 2023; Liu et al., 2023a).

Despite these advancements, a critical issue persists: LLMs are prone to hallucination
(Alkaissi and McFarlane, 2023; Li, 2023; Ahmad et al., 2023), often producing incorrect
or misleading information when faced with unanswerable questions (Deng et al., 2024a;
Sun et al., 2024; Madhusudhan et al., 2024a; Balepur et al., 2024). Studies have
demonstrated that they tend to generate responses even in cases where no valid solution
exists, often presenting them with unwarranted confidence (Xiong et al., 2023; Tao et
al., 2024). Such behavior raises concerns as these hallucinations may result in harmful
or misleading conclusions (Pan et al., 2023; Farquhar et al., 2024; Deng et al., 2024a).
Figure 2: Architecture of the abstention evaluation. GPT Repository: hosts multiple GPT
models ready for inference. UMWP Dataset: consists of answerable and unanswerable
questions. Inference Module: performs inference on the UMWP dataset using models from
the model repository. Evaluation Metrics: Confidence-Weighted Accuracy Metric, Cautious
Response Indicator and Abstention Rate for evaluating the abstention of ChatGPT.
While several studies have focused on improving
accuracy in solving complex math problems (Liu
et al., 2023b; Xu et al., 2024b; Liu et al., 2023c;
Ahn et al., 2024), little attention has been given
to understanding and improving abstention from
answering when no solution exists for the Math
Word Problem (MWP).
To address this issue, it is crucial to assess how
GPT, a widely used and trusted model, handles
abstention in unanswerable math word problems
and if prompting plays a crucial role in unlocking the full potential of these models (Chen et al.,
2024a; Chang et al., 2024; Cain, 2024). In our
research, we conducted experiments using a variety of prompts frequently used in mathematical
contexts to evaluate their effectiveness in guiding
GPT models. Our primary objective was to identify
the optimal combination of model and prompt that
would encourage the model to abstain from answering unanswerable questions, rather than attempting
to generate an incorrect or irrelevant response. For
evaluation, we developed an evaluation metric to
assess the model’s ability to appropriately abstain
from answering unanswerable questions, while correctly solving those that are answerable.
In summary, our major contributions are:
1. A comparative analysis highlighting how significant prompts can alter model outputs.
2. Analyze the tendency of models to answer
unanswerable questions and the generation of
hallucinations in detail.
3. Introduce metrics to evaluate model performance in terms of accuracy, abstention, and
confidence.
**2** **Related Work**
**2.1** **GPTs in Mathematical Reasoning**
Early work by Brown (2020) with GPT-3 revealed
that LLMs, trained on vast amounts of text data, can
successfully perform few-shot learning for a variety
of mathematical reasoning tasks. The application
of ChatGPT in mathematical reasoning has garnered significant attention in recent research. One
notable study by Long et al. (2024) explores the
potential of ChatGPT in generating pre-university
math questions. Similarly, Frieder et al. (2024)
evaluated the mathematical capabilities of GPT-4, noting that it handles complex mathematical
reasoning without relying on external tools, providing answers in fields ranging from algebra to
calculus. Additionally, Shakarian et al. (2023)
evaluated ChatGPT’s performance on mathematical word problems from the DRAW-1K dataset.
These advancements show that these models are
not only solving word problems but also challenging domain-specific expert tools in mathematical
problem-solving.
**2.2** **Unanswerable Question Answering**
Madhusudhan et al. (2024b) explore the Abstention Ability (AA) of LLMs, i.e. their capacity to refrain
from answering when uncertain or when a question is unanswerable. The challenge of handling
unanswerable questions has been a significant area
of research in the development of GPT models.
One notable study by Deng et al. (2024b) introduces a self-alignment method to enhance LLMs
ability to respond to unanswerable questions. Guo
et al. (2024) established the UNK-VQA dataset,
designed to evaluate how well multi-modal large
models can abstain from answering unanswerable
questions. The dataset includes deliberately perturbed images and questions to challenge the models. Lastly, Sun et al. (2024) introduced a novel
dataset called UMWP, which includes both answerable and unanswerable math word problems.
**2.3** **Influence of Prompting**
GPT models can be significantly influenced by the
type of prompting used. One notable approach
is Chain-of-Thought (CoT) prompting (Wei et al.,
2022), which encourages the model to generate
intermediate reasoning steps before arriving at a
final answer. Another effective technique is the
Role-Play (Kong et al., 2023), where the model is
instructed to adopt a specific persona or role. Zhou
et al. (2024) introduced self-verification to get better performance on GPT-4 on mathematical reasoning. Ma et al. (2024) introduced a strategic prompt
called Critical Calculation and Conclusion (CCC).
This template is designed to enhance the error detection and correction abilities when faced with
unreasonable math problems. Chen et al. (2022)
separates the reasoning process from computation
by having the language model generate a structured
program to represent the reasoning steps. The actual computation is then performed by an external
computer executing the generated program.
**3** **Methodology**
**3.1** **Construction of Dataset**
We utilized the Unanswerable Math Word Problem (UMWP) (Sun et al., 2024) dataset, which
includes both answerable and unanswerable questions. From this dataset, we selected 1000 pairs
of questions. Each pair consists of an answerable
question and a corresponding variant that has been
perturbed to become unanswerable. This results
in a total of 2000 questions—half of which are
answerable and the other half unanswerable. The
unanswerable questions are categorized into five
distinct categories: (A) Missing Key Information,
**(B) Ambiguous Key Information, (C) Unrealistic Conditions, (D) Unrelated Objects,**
**and (E) Incomplete Questions**. The dataset includes only the question and its
corresponding answer. Fig. 3 illustrates the variety of questions.

Figure 3: Sunburst Distribution of the first two words of the UWMP dataset.
To evaluate performance, we developed a
multiple-choice question (MCQ) system. For each
question, we generated four alternative answers
that are close to the correct one, along with a fifth
option: "I Don’t Know/NOTA of the above." The
system will prompt GPT models to identify the
correct answer from the given options.
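A hypothetical illustration of how four distractor options close to the correct answer, plus the abstention option, could be assembled; the perturbation scheme shown here is our own assumption, not the exact procedure used to build the MCQ version of the dataset.

```python
import random

def build_options(correct, n_distractors=4, spread=0.3, seed=0):
    """Return shuffled numeric options close to `correct`, plus an abstention option."""
    rng = random.Random(seed)
    distractors = set()
    while len(distractors) < n_distractors:
        delta = rng.uniform(-spread, spread) * max(abs(correct), 1.0)
        candidate = round(correct + delta, 2)
        if candidate != correct:
            distractors.add(candidate)
    options = list(distractors) + [correct]
    rng.shuffle(options)
    return options + ["I Don't Know/None of the above"]

print(build_options(8.0))
```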
**3.2** **Prompts for GPT**
The use of prompts has been extensively studied to
enhance performance and reliability. We did not
use any prompts that would require external tools
(Chen et al., 2022; Imani et al., 2023). We start with
a fundamental prompt, referred to as a Zero-shot
Prompt that notifies the model to answer with only
one of the five options along with a confidence
score, and then incorporate additional prompts
to improve performance to answer the question
demonstrated by Fig. 4. Chain-of-Thought (CoT)
prompting, as shown by Wei et al. (2022), improves
multi-step reasoning by guiding models through
step-by-step processes. Abstention techniques, explored by Madhusudhan et al. (2024a), allow models to withhold responses when uncertain, reducing errors. Role-Based prompting, as explored by
Kong et al. (2023); Anagnostidis and Bulian (2024),
involves assigning a Math expert role to the model,
thereby improving its contextual comprehension
of mathematical questions. Few-Shot prompting,
as described by Brown (2020), allows models to
generalize by providing two examples alongside
the actual question. Each example includes the question, possible options, and the
correct answer.
| | |Answerable|Unanswerable|
|---|---|---|---|
|Answered|Correct|TP|FP|
|Answered|Incorrect|FP|FP|
|Abstained (IDK/NOTA)| |FN|TN|

Figure 5: Confusion Matrix illustrating the definition of TP, FP, FN and TN for
answerable and unanswerable questions.
**Zero Shot/Basic Prompt:** "I am going to give you a file containing a bunch of math
word problem in MCQ format and you have to answer them all. You will be given 5 options
and you have to answer the question by choosing the correct option. Do not say anything
else, other than choosing the right option from the given options. Do not give any
explanation for your answer. If the question lacks sufficient information to be solved
or if it is ambiguous or unclear in phrasing, choose 'I Don't Know/None of the above'.
Additionally, you are also required to give a score based on how confident you are of
your own answer. The score should be in the range of 1 to 5 where 1 being 'Least
Confident' and 5 being 'Extremely Confident'."

Additional prompts appended to the basic prompt:

**Chain of Thought:** "Solve the problems step by step. Then choose the correct answer."

**Extreme Abstention:** "PAY ATTENTION: Any wrong answer, I repeat again, this is very
critical, that you generate can have EXTREME negative impacts."

**Role-based (Math Expert):** "You are a math expert. You are extremely knowledgeable and
have excellent critical analysis ability."

**Few Shot:** "Here are two sample example questions with answers: 'question': 'Adam has
10 apples. Jackie has 2 apples. How many more apples does Adam have than Jackie?',
'options': [9.46, 8.0, 10.93, 4.43, 'I Don't Know/None of the above'], 'answer': 8.0 ..."

Figure 4: Diverse Prompts for enhancing performance that are additionally added to the
basic prompt.
**3.3** **Evaluation Metrics**
**3.3.1** **Answerable-Unanswerable Confusion**
**Matrix**
According to Madhusudhan et al. (2024a), a confusion matrix was created to demonstrate that for
Answerable MCQs, True Positives occur when the
LLM selects the correct option, while False Positives occur when an incorrect non-IDK option is
chosen. Abstentions on answerable questions lead
to False Negatives. Unanswerable MCQs are classified as the negative class. Correctly abstaining
on these questions results in True Negatives, while
failing to abstain leads to False Positives, as shown in Fig. 5.
**3.3.2** **Accuracy**

Accuracy is the primary choice for model evaluation. It is defined as the proportion of
correct predictions made by the model out of the total number of predictions:

$$\mathrm{Accuracy} = \frac{\sum_{q \in Q} \mathbb{I}(a(q) = t(q))}{|Q|}$$

where $a(q)$ represents the generated answer for question $q$, $t(q)$ denotes the ground
truth answer for the same question, $|Q|$ indicates the total number of questions, and
$\mathbb{I}[\cdot]$ is the indicator function.

**3.3.3** **Confidence-Balance Metric**

Our metric reflects the overall performance of the models, where each prediction is
associated with a confidence score:

$$\Phi_{CB} = \frac{1}{N} \sum_{i=1}^{N} \big( \mathbb{I}(\delta_i = 1) \cdot \mathrm{conf}_i - \mathbb{I}(\delta_i = 0) \cdot \mathrm{conf}_i \big)$$

In this context, $N$ represents the total number of instances, $\delta_i$ equals 1 if the
prediction is correct (true positive or true negative) and 0 otherwise, $\mathrm{conf}_i$
denotes the confidence score of the prediction, and $\mathbb{I}[\cdot]$ is the indicator
function emphasizing the binary nature of $\delta_i$. This metric rewards instances where
the model is both accurate and confident, while penalizing cases where it provides
incorrect answers with high confidence. It effectively balances the model's ability to
assert correct answers with its confidence levels, providing a comprehensive measure of
performance.

**3.3.4** **Cautious Response Indicator**

We introduced a metric to assess the performance of GPT models in handling unanswerable
questions. This metric is mathematically defined as:

$$C = \frac{TN \cdot w}{UQ}$$

where $TN$ denotes the count of true negatives, $w$ represents the confidence factor to
emphasize the importance of correctly identifying unanswerable questions, and $UQ$
indicates the total number of unanswerable questions in the dataset. As abstention
without hallucination is the key goal, this allows the evaluation of correctly
identifying the unanswerable ones with confidence.

**3.3.5** **False Resistance**

Inspired by Madhusudhan et al. (2024a), we developed a weighted abstention rate defined as:

$$\mathcal{FR} = \frac{FN \cdot w}{AQ}$$

where $FN$ denotes the count of false negatives and $AQ$ indicates the total number of
answerable questions. This metric illustrates the extent to which the model wrongly
abstains, potentially not finding the actual answer and opting for IDK/NOTA.
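A Python sketch of the metrics defined in this section, under the confusion-matrix conventions of Figure 5. Each record is assumed to carry flags for answerability, abstention and correctness together with a confidence score in [1, 5]; the field names, the weight w, and the treatment of abstention on unanswerable questions as a correct prediction are our own reading of the definitions above.

```python
def evaluate(records, w=1.0):
    tp = fp = fn = tn = 0
    cb = 0.0
    for r in records:
        if r["abstained"]:
            tn += (not r["answerable"])          # correct abstention on unanswerable
            fn += r["answerable"]                # wrong abstention on answerable
        else:
            tp += r["answerable"] and r["correct"]
            fp += (not r["answerable"]) or (not r["correct"])
        good = (r["answerable"] and r["correct"] and not r["abstained"]) or \
               ((not r["answerable"]) and r["abstained"])
        cb += r["conf"] if good else -r["conf"]  # reward confident correct, penalize confident wrong
    n = len(records)
    uq = sum(not r["answerable"] for r in records)
    aq = n - uq
    return dict(
        accuracy=(tp + tn) / n,
        phi_cb=cb / n,                           # Confidence-Balance metric
        C=tn * w / uq if uq else 0.0,            # Cautious Response Indicator
        FR=fn * w / aq if aq else 0.0,           # False Resistance
    )
```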
**5** **Experimental Results**
**5.1** **Impact of Prompt Variations on Accuracy**
Our experimental results showed that these advanced prompting methods did not consistently
outperform the baseline zero-shot prompt in terms
of accuracy as it was shown in Table 1. In fact,
in several cases, the results were either similar to
or worse than the zero-shot prompt. Wang et al.
(2024b) highlighted the limitations of LLMs with
multiple problems, noting that "few-shot" prompting can actually hinder performance. As demonstrated in Fig. 1, while GPT-4 Turbo showed
marginally improved performance across all metrics, other models exhibited a slight decline. The
inclusion of two examples in the prompt did not
provide the expected benefit, indicating that fewshot prompting was not consistently helpful for the
models. When considering only accuracy, GPT-4o
emerges as the optimal model, particularly with the
zero-shot prompt. It achieved an impressive 97.7%
accuracy across both answerable and unanswerable questions, outperforming all other models and
prompting strategies. This result highlights GPT-4o
as the most effective solution among the evaluated
configurations.
**5.2** **Abstention in Cautious Response and**
**False Indicator**
Abstention did not appear to have a significant impact, despite the model being explicitly warned
about negative consequences in the prompt, as
highlighted by Madhusudhan et al. (2024a). The
expected improvements in metrics like Cautious
Response and False Resistance were not observed,
as the model did not respond cautiously when uncertain, contrary to our initial assumptions. Instead,
abstention led to results that were largely unremarkable. The accuracy remained comparable to
the baseline, and Confidence Balance showed minimal improvement, or in some instances, performed
worse than the zero-shot setup, as seen in Tab. 6.
**5.3** **Confidence Balance Variability**
While the overall accuracy did not improve significantly, we observed some fluctuations compared
to the zero-shot approach for CB (Confidence Balance). This metric integrates both confidence and
accuracy, providing insight into how each model
evaluates the answers it generates. Even though the
accuracy for the few-shot setting was lower, the CB score of the models improved
compared to zero-shot, which is depicted in Fig. 6.
**4** **Experimental Setup**
**4.1** **Hardware and Implementation Details**
In our experiments, we employed an NVIDIA
GeForce GTX 1650 GPU with 4GB of VRAM
to assess the models. The models were primarily
accessed and integrated through the use of OpenAI
(OpenAI, 2023) APIs.
**4.2** **Evaluated Models**
To evaluate the performance of our approach, we
utilized a variety of large language models from the
GPT-4 family (OpenAI, 2024). These models offer
varying degrees of computational efficiency and
reasoning capabilities, allowing for a comprehensive assessment across different scenarios. GPT-4
is known for its state-of-the-art reasoning abilities
and broad generalization across a wide range of
tasks. GPT-4 Turbo, a more computationally efficient version of GPT-4, retains much of the original model’s accuracy while offering faster response
times. GPT-4o is a further optimized version that
is designed for ultra-fast inference with minimal
reduction in performance accuracy. Lastly, **GPT-4o Mini**, a scaled-down version of GPT-4o, sacrifices some of the model's capacity in exchange
for lower computational cost. For all models, we
configured the temperature to 0 and the top_p
to 0.00001. We chose not to include GPT-3.5 in
our evaluations due to its noticeably inferior performance compared to the GPT-4 models. During
initial testing, the quality of inferences generated
by GPT-3.5 was consistently subpar, and its accuracy fell significantly short of the levels achieved
by any variant of GPT-4.
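For reference, a minimal sketch of how a query with these decoding settings might be issued through the OpenAI Python client is shown below; only temperature 0 and top_p 0.00001 come from the setup above, while the client interface, model name, and prompt are illustrative assumptions.

```python
# Sketch of a single evaluation query with the paper's decoding settings.
# The client interface and model name are assumptions; only temperature=0
# and top_p=0.00001 are taken from the setup described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        temperature=0,      # deterministic decoding
        top_p=0.00001,      # near-greedy nucleus sampling
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```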
|Model|Metric|Zero Shot|Few Shot|Role Based|Abstention|CoT|
|---|---|---|---|---|---|---|
|GPT-4|Φ CB ↑|2.22|2.37|2.50|2.16|3.47|
|GPT-4|C ↑|2.19|2.41|2.37|1.98|3.79|
|GPT-4|FR ↓|0.011|0.012|0.013|0.005|0.145|
|GPT-4|Accuracy ↑|0.756|0.751|0.779|0.753|0.856|
|GPT-4o mini|Φ CB ↑|0.87|1.85|0.52|0.81|1.47|
|GPT-4o mini|C ↑|0.91|3.04|0.53|0.64|3.18|
|GPT-4o mini|FR ↓|0.006|0.01|0.004|0.004|0.01|
|GPT-4o mini|Accuracy ↑|0.70|0.685|0.653|0.703|0.653|
|GPT-4 turbo|Φ CB ↑|3.12|3.88|3.32|2.57|3.66|
|GPT-4 turbo|C ↑|3.01|4.02|3.26|2.69|3.81|
|GPT-4 turbo|FR ↓|0.018|0.05|0.044|0.017|0.15|
|GPT-4 turbo|Accuracy ↑|0.833|0.888|0.844|0.757|0.866|
|GPT-4o|Φ CB ↑|4.09|4.18|4.25|3.93|3.80|
|GPT-4o|C ↑|3.53|4.12|4.08|4.21|4.07|
|GPT-4o|FR ↓|0.01|0.06|0.011|0.068|0.23|
|GPT-4o|Accuracy ↑|0.977|0.944|0.949|0.924|0.879|
Table 1: Performance of the models under diverse prompts, quantified through the Confidence-Weighted Accuracy Metric, Cautious Response Indicator, False Resistance, and Accuracy. ↑ indicates higher is better, while ↓ indicates the opposite. Green indicates the best score for each metric across all of the prompts.
phrases, leading to incorrect answers. Our step-by-step analysis during CoT prompting revealed
that the models sometimes fabricate questions that
were not originally present when breaking down
the problem to solve it. While GPT-4’s performance with CoT prompting surpasses that of other
prompts, it still struggles with generating incorrect
answers for unanswerable questions. Table 2 illustrates the model’s reasoning process for these
unanswerable questions, showcasing the step-by-step approach taken by the model across different
question categories in the dataset.
The Missing Key Info category, as the name
suggests, contains questions with missing key information. In the given question, since the number
of completed problems is not provided, it becomes
impossible to determine how many math problems
remain. The model incorrectly assumed that all
remaining problems were math-related, overlooking
the possibility of science problems. The **Unrealistic** and **Unrelated** categories exhibited similar
issues, where the model failed to comprehend the
changes or perturbations in the questions but attempted to answer them regardless. In **Ambiguous
Key Info**, additional information introduced ambiguity, yet the model failed to identify this ambiguity.
Finally, in Incomplete Ques, the model proceeded
to answer without a properly framed question. We
can draw several key conclusions from our observations. Remarkably, the model attempts to generate
Figure 6: Heatmap visualization of model performance across different prompts on Confidence Balance, relative to the zero-shot prompt.
which is depicted in Fig. 6. The two examples
provided in the prompt helped guide the models toward generating responses with greater confidence
in the correct answers. This suggests that when
models are confident in their responses, they are
less prone to confusion or generating conflicting
answers. Similarly, for CoT prompting, breaking
down problems into smaller subproblems and analyzing them individually enhances the model’s
ability to provide confident and accurate answers.
**5.4** **Hallucination for Unanswerable Questions**
As illustrated in Fig. 1, GPT models often exhibit hallucinations when they fail to fully comprehend the question or misinterpret certain words or
|Category|Unanswerable Question|CoT Explanation|
|---|---|---|
|Missing Key Info|Paige had 43 math problems and 12 science problems for homework. If she finished 44 of the math and science problems at school, how many problems did she have to do for math homework?|Paige had 43 math problems and 12 science problems, making a total of 43 + 12 = 55 problems. If she finished 44 of the math and science problems at school, she has 55 - 44 = 11 problems left to do for math homework.|
|Ambiguous Key Info|For the school bake sale Bianca made more than 14 cupcakes. If she sold 6 of them and then made 17 more, how many cupcakes would she have?|Bianca made more than 14 cupcakes. She sold 6 of them and then made 17 more. So, the total number of cupcakes she would have is (14+17)-6.|
|Unrealistic|Jerry had some action figures on a shelf in his room. Later he added 2 more action figures to the shelf and removed 7 of the old ones. If there are 10 action figures on his shelf now.How many anime figures did he have initially on the shelf?|Jerry added 2 more action figures to the shelf and removed 7 of the old ones. If there are 10 action figures on his shelf now, to find the initial number of action figures, we add the number of action figures removed (7) to the final number of action figures (10) and subtract the number of action figures added (2).|
|Unrelated|Rachel had to complete 2 pages of reading homework and 4 pages of math homework. How many more pages of speaking homework than reading homework did she have?|Rachel had to complete 2 pages of reading homework and 4 pages of math homework. So, we subtract the number of pages of reading homework from the number of pages of math homework to find out how many more pages of speaking homework than reading homework she had. 4 (math homework) - 2 (reading homework) = 2|
|Incomplete Ques|The grasshopper and the frog had a jumping contest. The grasshopper jumped 31 inches and the frog jumped 35 inches.|The grasshopper jumped 31 inches and the frog jumped 35 inches. So, we add the distances jumped by the grasshopper and the frog to find out the total distance jumped. 31 (grasshopper’s jump) + 35 (frog’s jump) = 66|
Table 2: The table displays CoT analysis of GPT-4 for the unanswerable questions for each category. The sections
highlighted in red denote the elements that contribute to the question being unanswerable.
answers even when posed with unanswerable questions. What is surprising is that it often provides
the correct response from the set of answerable
questions. Specifically, for GPT-4 with Chain of
Thought (CoT), 77% of the answers given for unanswerable questions corresponded to the correct answers from answerable ones. This suggests that
the model tries to make sense of the question, despite its ambiguity or unanswerable nature. It either
disregards the uncertainty or reimagines the question logically based on the provided information.
This behavior indicates that the model might be
reconstructing or correcting the question to fit the
scenario and generate a plausible answer, as seen in Tab. 2, where it corrected itself to "action figures" instead of "anime figures" and proceeded with its analysis.
**5.5** **Analysis of Unanswerable Question**
**Categories**
Fig. 7 illustrates the impressive performance of
GPT-4o, which exhibits near-perfect results across
various question types, with the exception of CoT
prompts, where there is a significant performance
drop. Interestingly, while CoT negatively impacts
GPT-4o, it enhances the performance of all other
models, particularly GPT-4, which struggles with
alternative prompt types.
Notably, CoT tends to perform poorly with incomplete questions, as it attempts to address problems incrementally, failing to recognize their unanswerable nature. The data for GPT-4o mini reveals
that it ranks lowest among the models.
Conversely, the zero-shot approach shows a
commendable ability to identify when to select
NOTA/IDK responses, effectively indicating when
a question is unsolvable. Furthermore, the few-shot, role-based (math expert), and abstention strategies yield results comparable to those of the zero-shot model.
In summary, while CoT can be detrimental in
scenarios involving incomplete questions, it generally improves performance in other contexts by
aiding models in discerning when to refrain from
answering. Overall, GPT-4o stands out as the most
effective model, though GPT-4 turbo also demonstrates a similar proficiency in recognizing unanswerable questions under certain prompts.
Figure 7: Radar chart depicting the performance of each model across various categories of unanswerable questions,
evaluated under different prompt strategies. This visualization highlights the optimal model-prompt combinations
for handling specific types of unanswerable scenarios.
**6** **Discussion**
This study aims to analyze which GPT model
performs best on unanswerable questions when
used in combination with different prompts. The
prompts neither significantly improved overall accuracy nor influenced the model’s abstention behavior when faced with unanswerable questions.
CoT reasoning improved GPT-4's performance in certain question categories to a level similar to GPT-4o. Interestingly, a zero-shot or base prompt often performed well, while few-shot, role-based, or abstention-specific prompts were less effective.
This can be attributed to GPT’s tendency to hallucinate and force an answer, attempting to provide
a plausible response even for unanswerable questions. In essence, GPT models prioritize making a
question answerable rather than selecting options
like IDK or NOTA.
**7** **Conclusion**
Given the potential for GPT models to be used as
tools for solving mathematical problems in the near
future, the ability to distinguish unanswerable questions becomes a critical feature. Our objective was
to evaluate the performance of GPT models with
various prompts using novel metrics we developed.
We demonstrated how these models often hallucinate on unanswerable questions, even when the
option to abstain is available. Our findings show
that advanced prompts do not significantly improve
this behavior, highlighting the need for models to
better recognize when to abstain from answering
or accurately identify issues in the question.
**8** **Limitations**
We were unable to evaluate models from OpenAI's o1 series, which are among the most recent and highest-performing versions, due to access restrictions. Additionally, we did not explore the niche prompts commonly employed in other studies on large language models (LLMs). Another limitation lies in the dataset itself: the math word problems we used were not highly complex, and we did not assess model performance across varying levels of difficulty. Our evaluation focused solely on word problems, without extending to other mathematical categories such as algebra or geometry.
**References**
Zakia Ahmad, Wahid Kaiser, and Sifatur Rahim. 2023.
Hallucinations in chatgpt: An unreliable tool for
learning. Rupkatha Journal on Interdisciplinary Stud_ies in Humanities, 15(4):12._
Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui
Zhang, and Wenpeng Yin. 2024. Large language
models for mathematical reasoning: Progresses and
challenges. arXiv preprint arXiv:2402.00157.
Hussam Alkaissi and Samy I McFarlane. 2023. Artificial hallucinations in chatgpt: implications in scientific writing. Cureus, 15(2).
Sotiris Anagnostidis and Jannis Bulian. 2024. How
susceptible are llms to influence in prompts? arXiv
_preprint arXiv:2408.11865._
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten
Bosma, Henryk Michalewski, David Dohan, Ellen
Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021.
Program synthesis with large language models. arXiv
_preprint arXiv:2108.07732._
Nishant Balepur, Abhilasha Ravichander, and Rachel
Rudinger. 2024. Artifacts or abduction: How do
llms answer multiple-choice questions without the
question? arXiv preprint arXiv:2402.12483.
Som S Biswas. 2023. Role of chat gpt in public health.
_Annals of biomedical engineering, 51(5):868–869._
Rishi Bommasani, Drew A Hudson, Ehsan Adeli,
Russ Altman, Simran Arora, Sydney von Arx,
Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint
_arXiv:2108.07258._
Tom B Brown. 2020. Language models are few-shot
learners. arXiv preprint arXiv:2005.14165.
William Cain. 2024. Prompting change: exploring
prompt engineering in large language model ai and
its potential to transform education. _TechTrends,_
68(1):47–57.
Kaiyan Chang, Songcheng Xu, Chenglong Wang,
Yingfeng Luo, Tong Xiao, and Jingbo Zhu. 2024.
[Efficient prompting methods for large language mod-](https://arxiv.org/abs/2404.01077)
[els: A survey. Preprint, arXiv:2404.01077.](https://arxiv.org/abs/2404.01077)
Banghao Chen, Zhaofeng Zhang, Nicolas Langrené,
[and Shengxin Zhu. 2024a. Unleashing the potential](https://arxiv.org/abs/2310.14735)
[of prompt engineering in large language models: a](https://arxiv.org/abs/2310.14735)
[comprehensive review. Preprint, arXiv:2310.14735.](https://arxiv.org/abs/2310.14735)
Kaiping Chen, Anqi Shao, Jirayu Burapacheep, and Yixuan Li. 2024b. Conversational ai and equity through
assessing gpt-3’s communication with diverse social groups on contentious topics. Scientific Reports,
14(1):1561.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588._
Yang Deng, Yong Zhao, Moxin Li, See-Kiong Ng, and
[Tat-Seng Chua. 2024a. Don’t just say "i don’t know"!](https://arxiv.org/abs/2402.15062)
[self-aligning large language models for responding](https://arxiv.org/abs/2402.15062)
[to unknown questions with explanations. Preprint,](https://arxiv.org/abs/2402.15062)
arXiv:2402.15062.
Yang Deng, Yong Zhao, Moxin Li, See-Kiong Ng, and
Tat-Seng Chua. 2024b. Gotcha! don’t trick me with
unanswerable questions! self-aligning large language
models for responding to unknown questions. arXiv
_preprint arXiv:2402.15062._
Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and
Yarin Gal. 2024. Detecting hallucinations in large
language models using semantic entropy. Nature,
630(8017):625–630.
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths,
Tommaso Salvatori, Thomas Lukasiewicz, Philipp
Petersen, and Julius Berner. 2024. Mathematical capabilities of chatgpt. Advances in neural information
_processing systems, 36._
Yangyang Guo, Fangkai Jiao, Zhiqi Shen, Liqiang Nie,
[and Mohan Kankanhalli. 2024. Unk-vqa: A dataset](https://arxiv.org/abs/2310.10942)
[and a probe into the abstention ability of multi-modal](https://arxiv.org/abs/2310.10942)
[large models. Preprint, arXiv:2310.10942.](https://arxiv.org/abs/2310.10942)
Walid Hariri. 2023. Unlocking the potential of chatgpt: A comprehensive exploration of its applications, advantages, limitations, and future directions
in natural language processing. _arXiv preprint_
_arXiv:2304.02017._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874._
Jingshan Huang and Ming Tan. 2023. The role of chatgpt in scientific communication: writing better scientific review articles. American journal of cancer
_research, 13(4):1148._
Shima Imani, Liang Du, and Harsh Shrivastava. 2023.
Mathprompter: Mathematical reasoning using large
language models. arXiv preprint arXiv:2303.05398.
Katikapalli Subramanyam Kalyan. 2023. A survey of
gpt-3 family large language models including chatgpt
and gpt-4. Natural Language Processing Journal,
page 100048.
Aobo Kong, Shiwan Zhao, Hao Chen, Qicheng Li,
Yong Qin, Ruiqi Sun, Xin Zhou, Enzhi Wang,
and Xiaohang Dong. 2023. Better zero-shot reasoning with role-play prompting. _arXiv preprint_
_arXiv:2308.07702._
Zihao Li. 2023. The dark side of chatgpt: Legal and
ethical challenges from stochastic parrots and hallucination. arXiv preprint arXiv:2304.14347.
Ryan Lingo. 2023. The role of chatgpt in democratizing
data science: an exploration of ai-facilitated data analysis in telematics. arXiv preprint arXiv:2308.02045.
Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang,
Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li,
Mengshen He, Zhengliang Liu, et al. 2023a. Summary of chatgpt-related research and perspective towards the future of large language models. Meta_Radiology, page 100017._
Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co[Reyes, and Peter J. Liu. 2023b. Improving large lan-](https://arxiv.org/abs/2310.10047)
[guage model fine-tuning for solving math problems.](https://arxiv.org/abs/2310.10047)
_Preprint, arXiv:2310.10047._
Yixin Liu, Avi Singh, C Daniel Freeman, John D CoReyes, and Peter J Liu. 2023c. Improving large language model fine-tuning for solving math problems.
_arXiv preprint arXiv:2310.10047._
Phuoc Pham Van Long, Duc Anh Vu, Nhat M. Hoang,
[Xuan Long Do, and Anh Tuan Luu. 2024. Chat-](https://arxiv.org/abs/2312.01661)
[gpt as a math questioner? evaluating chatgpt on](https://arxiv.org/abs/2312.01661)
[generating pre-university math questions. Preprint,](https://arxiv.org/abs/2312.01661)
arXiv:2312.01661.
Jingyuan Ma, Damai Dai, and Zhifang Sui. 2024. Large
language models are unconscious of unreasonability
in math problems. arXiv preprint arXiv:2403.19346.
Nishanth Madhusudhan, Sathwik Tejaswi Madhusudhan, Vikas Yadav, and Masoud Hashemi. 2024a. Do
llms know when to not answer? investigating abstention abilities of large language models. arXiv
_preprint arXiv:2407.16221._
Nishanth Madhusudhan, Sathwik Tejaswi Madhusud[han, Vikas Yadav, and Masoud Hashemi. 2024b. Do](https://arxiv.org/abs/2407.16221)
[llms know when to not answer? investigating ab-](https://arxiv.org/abs/2407.16221)
[stention abilities of large language models. Preprint,](https://arxiv.org/abs/2407.16221)
arXiv:2407.16221.
[OpenAI. 2023. Chatgpt (mar 14 version) [large lan-](https://chat.openai.com/chat)
[guage model].](https://chat.openai.com/chat)
OpenAI. 2024. Openai api documentation. [https:](https://platform.openai.com/docs/)
[//platform.openai.com/docs/. Accessed: 2024-](https://platform.openai.com/docs/)
10-13.
Yikang Pan, Liangming Pan, Wenhu Chen, Preslav
Nakov, Min-Yen Kan, and William Yang Wang. 2023.
On the risk of misinformation pollution with large
language models. arXiv preprint arXiv:2305.13661.
Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu, and
Lakshmivihari Mareedu. 2023. An independent evaluation of chatgpt on mathematical word problems
(mwp). arXiv preprint arXiv:2302.13814.
Yuhong Sun, Zhangyue Yin, Qipeng Guo, Jiawen Wu,
Xipeng Qiu, and Hui Zhao. 2024. Benchmarking
hallucination in large language models based on
unanswerable math word problem. arXiv preprint
_arXiv:2403.03558._
Shuchang Tao, Liuyi Yao, Hanxing Ding, Yuexiang Xie,
Qi Cao, Fei Sun, Jinyang Gao, Huawei Shen, and
Bolin Ding. 2024. When to trust llms: Aligning
confidence with response quality. _arXiv preprint_
_arXiv:2404.17287._
Leyao Wang, Zhiyu Wan, Congning Ni, Qingyuan Song,
Yang Li, Ellen Wright Clayton, Bradley A Malin, and
Zhijun Yin. 2024a. A systematic review of chatgpt
and other conversational large language models in
healthcare. medRxiv.
Zhengxiang Wang, Jordan Kodner, and Owen Rambow.
2024b. Evaluating llms with multiple problems at
once: A new paradigm for probing llm capabilities.
_arXiv preprint arXiv:2406.10786._
Yousef Wardat, Mohammad A Tashtoush, Rommel
AlAli, and Adeeb M Jarrah. 2023. Chatgpt: A revolutionary tool for teaching and learning mathematics.
_Eurasia Journal of Mathematics, Science and Tech-_
_nology Education, 19(7):em2286._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural
_information processing systems, 35:24824–24837._
Changrong Xiao, Sean Xin Xu, Kunpeng Zhang, Yufang
Wang, and Lei Xia. 2023. Evaluating reading comprehension exercises generated by llms: A showcase
of chatgpt in education applications. In Proceed_ings of the 18th Workshop on Innovative Use of NLP_
_for Building Educational Applications (BEA 2023),_
pages 610–625.
Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie
Fu, Junxian He, and Bryan Hooi. 2023. Can llms
express their uncertainty? an empirical evaluation
of confidence elicitation in llms. _arXiv preprint_
_arXiv:2306.13063._
Xin Xu, Tong Xiao, Zitong Chao, Zhenya Huang, Can
Yang, and Yang Wang. 2024a. Can llms solve
longer math word problems better? arXiv preprint
_arXiv:2405.14804._
Yifan Xu, Xiao Liu, Xinghan Liu, Zhenyu Hou, Yueyan
Li, Xiaohan Zhang, Zihan Wang, Aohan Zeng,
Zhengxiao Du, Wenyi Zhao, Jie Tang, and Yux[iao Dong. 2024b. Chatglm-math: Improving math](https://arxiv.org/abs/2404.02893)
[problem-solving in large language models with a self-](https://arxiv.org/abs/2404.02893)
[critique pipeline. Preprint, arXiv:2404.02893.](https://arxiv.org/abs/2404.02893)
Zihao Zhou, Qiufeng Wang, Mingyu Jin, Jie Yao, Jianan
Ye, Wei Liu, Wei Wang, Xiaowei Huang, and Kaizhu
Huang. 2024. Mathattack: Attacking large language
models towards math solving ability. In Proceedings
_of the AAAI Conference on Artificial Intelligence,_
volume 38, pages 19750–19758.
| [
"Asir, Saadat",
"Tasmia Binte, Sogir",
"Md Taukir Azam, Chowdhury",
"Syem, Aziz"
] | 2024-10-16T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.13029 | https://arxiv.org/abs/2410.13029 | https://www.semanticscholar.org/paper/f74cfd134de41cbc5782f854b11433abbcf66d92 |
When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1 | In "Embers of Autoregression" (McCoy et al., 2023), we showed that several large language models (LLMs) have some important limitations that are attributable to their origins in next-word prediction. Here we investigate whether these issues persist with o1, a new system from OpenAI that differs from previous LLMs in that it is optimized for reasoning. We find that o1 substantially outperforms previous LLMs in many cases, with particularly large improvements on rare variants of common tasks (e.g., forming acronyms from the second letter of each word in a list, rather than the first letter). Despite these quantitative improvements, however, o1 still displays the same qualitative trends that we observed in previous systems. Specifically, o1 -- like previous LLMs -- is sensitive to the probability of examples and tasks, performing better and requiring fewer "thinking tokens" in high-probability settings than in low-probability ones. These results show that optimizing a language model for reasoning can mitigate but might not fully overcome the language model's probability sensitivity. | It is found that o1 substantially outperforms previous LLMs in many cases, with particularly large improvements on rare variants of common tasks, and shows that optimizing a language model for reasoning can mitigate but might not fully overcome the language model's probability sensitivity. | ## When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1
**R. Thomas McCoy,[1]** **Shunyu Yao,[2]** **Dan Friedman,[3]** **Mathew D. Hardy,[4]** **Thomas L. Griffiths[5,3]**
1Department of Linguistics and Wu Tsai Institute, Yale University 2OpenAI
3Department of Computer Science, Princeton University
4Roundtable 5Department of Psychology, Princeton University
[email protected], [email protected], [email protected],
[email protected], [email protected]
**Abstract**
In “Embers of Autoregression” (McCoy et al.,
2023), we showed that several large language
models (LLMs) have some important limitations that are attributable to their origins in next-word prediction. Here we investigate whether
these issues persist with o1, a new system from
OpenAI that differs from previous LLMs in
that it is optimized for reasoning. We find that
o1 substantially outperforms previous LLMs
in many cases, with particularly large improvements on rare variants of common tasks (e.g.,
forming acronyms from the second letter of
each word in a list, rather than the first letter). Despite these quantitative improvements,
however, o1 still displays the same qualitative
trends that we observed in previous systems.
Specifically, o1—like previous LLMs—is sensitive to the probability of examples and tasks,
performing better and requiring fewer “thinking tokens” in high-probability settings than
in low-probability ones. These results show
that optimizing a language model for reasoning
can mitigate but might not fully overcome the
language model’s probability sensitivity.
they are trained to take in the start of a piece of
text and probabilistically predict what word will
come next. By considering the probabilistic nature
of this objective, we predicted that LLMs would
be sensitive to both the probability of the text they
need to produce and the commonness of the task
they are being asked to perform. These hypotheses
were supported by a range of experiments. For example, LLMs performed better at reversing a list
of words when the output of the reversal was a
high-probability word sequence than when it was a
low-probability word sequence. Thus, even when
LLMs are being used for tasks that seem very different from next-word prediction, their performance
still shows embers of autoregression—behavioral
patterns that result from the influence of being optimized to perform next-word prediction.
In this work, we analyze a new system from
OpenAI called o1[1] to see whether it also displays
these embers of autoregression. Unlike previous
LLMs, o1 was explicitly optimized to perform reasoning. Thus, it is possible that this departure from
the next-word prediction objective would make o1
less susceptible to the limitations that arise from
next-word prediction. On the other hand, it is likely
that o1’s training involves next-word prediction as
well as reasoning optimization, meaning that o1
may still show the effects that arise from next-word
prediction.
We find that o1 improves substantially over previous LLMs in many of our evaluations, but it still
shows the same qualitative behavioral patterns that
we observed with other LLMs. On the front of example probability, o1 scores substantially better on
examples with high-probability outputs than ones
with low-probability outputs. On the front of task
probability, o1 sometimes scores better on common task variants than rare ones, though these task
frequency effects are less pronounced than in pre
[1https://openai.com/index/learning-to-reaso](https://openai.com/index/learning-to-reason-with-llms/)
[n-with-llms/](https://openai.com/index/learning-to-reason-with-llms/)
**1** **Introduction**
How can we reason about the strengths and limitations of AI systems? In McCoy et al. (2023), we
argue that one productive approach is to analyze the
system through the lens of the pressures that have
shaped it (Marr, 1982; Shepard, 1987; Anderson,
1990; Griffiths, 2020). By considering these pressures, we can make predictions about what strategies the AI system is likely to adopt. Reasoning
about these strategies can then provide hypotheses
about which types of examples the system will be
able to handle well or poorly.
In our prior work, we applied this approach—
which we call the teleological perspective—to
large language models (LLMs). Perhaps the most
significant pressure shaping these systems is their
primary training objective of autoregression (nextword prediction; Elman 1990; Radford et al. 2018):
Figure 1: Across the four tasks we considered (shift ciphers, Pig Latin, article swapping, and reversal), all six LLMs
evaluated here—including o1—show sensitivity to output probability, with higher accuracies on examples that have
a high output probability than on examples that have a low output probability. The results for all models except o1
are from McCoy et al. (2023). The intervals around the lines show one standard error.
vious LLMs. In addition to assessments based on
accuracy, o1 also provides another way to quantify
difficulty, namely via the number of tokens that it
produces while working toward an answer. This
metric corroborates the results based on accuracy:
o1 uses more tokens to produce its answers for low-probability examples and rare task variants than it
does for high-probability examples and common
task variants. Overall, then, o1 represents an impressive advance on the types of tasks we consider,
but it has not fully overcome the issues highlighted
in our previous work.
**2** **Background: o1**
et al. (2023). We evaluated on only a subset of the
tasks from McCoy et al. (2023), excluding those
whose datasets involved a large number of examples because o1 has a fairly high cost per example.
**3.1** **Output probability**
The first major effect that we tested for was sensitivity to output probability: Does o1 perform better on
examples for which the answer is a high-probability
string than on examples for which the answer is a
low-probability string? We investigated the effects
of output probability across four tasks: decoding
shift ciphers (a simple type of cipher), decoding
messages expressed in Pig Latin, article swapping
(swapping certain words in a sequence with the
words before them), and reversing a list of words.
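To make the tasks concrete, a couple of reference implementations are sketched below; they only illustrate what a correct output looks like, and the actual prompts and datasets are those of McCoy et al. (2023).

```python
# Reference implementations of two of the evaluated transformations
# (illustrative only; the evaluation itself queries the LLM, not this code).
def shift_decode(ciphertext: str, shift: int = 13) -> str:
    """Undo a shift cipher by moving each letter `shift` positions back."""
    out = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def reverse_words(sentence: str) -> str:
    """Reverse a list of words; the output can be a very low-probability sequence."""
    return " ".join(reversed(sentence.split()))

print(shift_decode("Uryyb jbeyq"))               # -> "Hello world"
print(reverse_words("the cat sat on the mat"))   # -> "mat the on sat cat the"
```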
As shown in Figure 1, o1—like the other LLMs
illustrated there—shows clear effects of output
probability. For example, in the shift cipher task, its
accuracy ranges from 47% in the lowest-probability
case to 92% in the highest-probability case. Although o1 shows the same qualitative trend as other
LLMs, it often outperforms them quantitatively,
with particularly strong results in the article swapping task.
In addition to evaluating accuracy, we also noted
how many tokens were used by o1 to answer its
queries (Figure 2). Across all four tasks, o1 tended
to use more tokens for low-probability examples
than high-probability ones, further corroborating
the conclusion that low-probability cases are harder
for o1 than high-probability cases.
**3.2** **Task frequency**
The exact details of how o1 works are not publicly
available, but a general description of its operation
[is available at https://openai.com/index/lea](https://openai.com/index/learning-to-reason-with-llms/)
[rning-to-reason-with-llms/. o1 is trained via](https://openai.com/index/learning-to-reason-with-llms/)
reinforcement learning to solve reasoning problems
using a chain of thought (Nye et al., 2021; Wei
et al., 2022; Kojima et al., 2022), in which it breaks
the problem down into steps before producing the
final answer. The user is only given the final answer
(not the chain of thought), but the number of tokens
inside the chain of thought is provided, so we can
tell how long the chain of thought was even though
we cannot tell what its contents were; below we
have some analyses based on these counts of so-called “thinking tokens.”
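As an illustration, the thinking-token count can be read from the API response alongside the final answer; the sketch below uses the OpenAI Python client, and the exact field names reflect the current API and are an assumption rather than something stated in this paper.

```python
# Sketch of retrieving an o1 answer together with its "thinking token" count.
# Field names reflect the current OpenAI client and are assumptions; the paper
# only states that the number of chain-of-thought tokens is reported.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="o1-preview",  # the paper uses the o1-preview-2024-09-12 snapshot
    messages=[{"role": "user", "content": "Decode this shift cipher: Uryyb jbeyq"}],
)
answer = response.choices[0].message.content
thinking_tokens = response.usage.completion_tokens_details.reasoning_tokens
print(answer, thinking_tokens)
```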
**3** **Results**
The version of o1 used for all results below is
o1-preview-2024-09-12, which we tested with
its default settings. For detailed descriptions of the
tasks and datasets that are evaluated on, see McCoy
The other major effect that we tested for was sensitivity to task frequency: Does o1 perform better
Figure 2: o1 tends to use more tokens when processing examples that have low-probability answers than examples
that have high-probability answers. The plots show the median number of tokens that o1 used for each group of
examples.
on task variants that occur frequently in training
data (e.g., sorting a list into alphabetical order)
than rarer variants of those tasks (e.g., sorting a
list into reverse alphabetical order)? For this set of
experiments, we considered five task types, with a
common and rare variant for each one: decoding
messages written in shift ciphers, encoding messages into Pig Latin, forming acronyms, applying
a linear function, and sorting a list.
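To illustrate what a common versus a rare variant means here, the snippet below shows reference outputs for two of the task types (sorting and acronyms); the word lists are made up for illustration, and the actual test items come from McCoy et al. (2023).

```python
# Common vs. rare variants of two task types (illustrative reference outputs).
words = ["iguana", "igloo", "ivory", "ink"]

# Sorting: alphabetical order (common) vs. reverse alphabetical order (rare).
common_sort = sorted(words)                 # ['igloo', 'iguana', 'ink', 'ivory']
rare_sort = sorted(words, reverse=True)     # ['ivory', 'ink', 'iguana', 'igloo']

# Acronyms: first letter of each word (common) vs. second letter (rare).
phrase = "global positioning system".split()
common_acronym = "".join(w[0] for w in phrase).upper()  # 'GPS'
rare_acronym = "".join(w[1] for w in phrase).upper()    # 'LOY'
```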
We find that o1 performs substantially better than
the other LLMs on the rare task variants (Figure 3,
left). Further, although all other LLMs show stark
differences between the rare and common versions
of at least some tasks, o1 achieves similar scores
between the two members of each pair. These results suggest that o1 might not be sensitive to task
frequency, but it is difficult to draw definitive conclusions because o1’s strong performance might
be producing ceiling effects. That is, even if o1 is
sensitive to task frequency, the datasets used here
might not be challenging enough for the effects to
be evidenced.
To address the possibility of ceiling effects, we
investigated more challenging versions of two of
the tasks. First, the sorting tasks involve sorting
a list of words into alphabetical order (the common variant) or reverse alphabetical order (the rare
variant). We made sorting more challenging by
having all words in the list start with the same
letter—namely, i—so that finding the right ordering requires considering at least the first two letters
of each word, whereas previously it was usually
sufficient to only consider the first letter. In this
harder version of sorting, o1 now performs substantially better on the common version of the task
than the rare one (Figure 3, top right). Second, the
shift cipher tasks involve decoding a message written in a simple cipher, where the cipher involves
shifting each letter forward in the alphabet either
13 positions (the common variant) or 12 positions
(the rare variant). To modulate difficulty in this
case, we used examples whose target outputs varied in probability, since we have established that
lower-probability cases tend to be harder for o1. Although o1 performs similarly on the common and
rare task variants in the highest-probability case, its
performance in the medium-probability and low-probability settings is higher for the common task
variant than the rare one (Figure 3, bottom right).
These additional experiments therefore show that
o1 is sensitive to task frequency in at least some
cases—but this trend may only be observable when
the examples are challenging enough to bring o1's performance substantially below 100% accuracy.
Finally, plotting the number of tokens that o1
uses for each task variant reveals additional evidence that rare task variants can be harder for o1
than common task variants (Figure 4). Specifically,
for both shift cipher decoding and acronyms, o1
uses far more tokens for the rare task variant than
the common one. Notably, for both of these tasks,
accuracy is nearly identical for both task variants;
e.g., for acronyms, o1 achieved 100% accuracy on
the common variant and 99.9% accuracy on the
rare variant. These cases therefore show that it is
possible for o1 to display a difference in difficulty
as quantified by the number of tokens that are used
even when the relevant accuracies show no variation. Although shift cipher decoding and acronyms
both showed large differences in token quantities
between the two task variants, the other three task
types had almost identical token usage between
Figure 3: Left: We evaluated LLMs on two variants of five tasks—a variant that is common in Internet text (e.g.,
forming acronyms from the first letter of each word in a sequence) and a variant that is rare (e.g., forming acronyms
from the second letter of each word in a sequence). On these datasets, the five LLMs other than o1 showed much
higher accuracy on the common variants than the rare ones, but o1 showed similar performance on common and
rare variants. The results for models other than o1 are from McCoy et al. (2023). Top right: On datasets based on
challenging sorting tasks, o1 performs better on the common type of sorting (i.e., sorting into alphabetical order)
than on the rare type of sorting (i.e., sorting into reverse alphabetical order). Bottom right: When decoding shift
ciphers, o1 shows roughly the same performance on the common cipher type and on the rare cipher type when the
examples are ones with a high output probability. However, when it is instead evaluated on examples with medium
or low probability, its accuracy is higher for the common cipher type than the rare one. The error intervals in all
plots show one standard error.
Figure 4: In some cases—namely, for shift ciphers and acronyms—o1 consumes more tokens when performing a rare task variant than a common task variant. For the other task pairs, the number of tokens it consumes is similar
across both task frequency levels. The bars show the median number of tokens used within each group of examples.
Note that the vertical axes have different scales in each plot.
variants, showing that differences in task frequency
are only sometimes associated with differences in
token usage.
Overall, o1 shows substantially less sensitivity to task frequency than the other LLMs we previously evaluated. However, there is still evidence of task frequency effects in some cases, namely when
the tasks are made more challenging and when we
consider the number of tokens consumed by o1.
We therefore conclude that o1 can be substantially
influenced by task frequency.
**4** **Conclusion**
On many of the tasks we considered, o1 performed
substantially better than the LLMs we had previously evaluated, with particularly strong results on
rare variants of common tasks. However, it still
qualitatively showed both of the central types of
probability sensitivity discussed in McCoy et al.
(2023): sensitivity to output probability and sensitivity to task frequency.
These results are consistent with the teleological perspective that we have argued for. On one
hand, o1 is explicitly optimized for reasoning, so
we should expect it to perform well on the sorts of
algorithmic tasks that we have considered—as it
indeed does. On the other hand, although this is
not explicitly stated in the o1 documentation as far
as we can tell, o1 also probably went through a substantial amount of training on next-word prediction,
such that we would expect it to display the behavioral signatures that go with being optimized for
next-word prediction—and we have indeed found
that it does so. These results support the view that
developing a complete teleological analysis of an
AI system requires consideration of all types of
optimization that have been applied to that system.
We see two potential aspects of o1 that might
give rise to the probability sensitivity we have observed. First, probability sensitivity might arise
during the process of generating text, for the same
reasons as it does in other types of LLMs—the generation process in any system optimized for statistical prediction is expected to be biased toward highprobability text. Indeed, Prabhakar et al. (2024)
showed that LLMs using chain-of-thought reasoning are susceptible to probability effects when generating text, so it would not be surprising if the
process of generation produces similar effects in
the hidden chains of thought produced by o1. Second, it might be that the process of developing a
chain of thought could also introduce biases toward
high-probability scenarios: if o1’s task is viewed
as considering multiple potential chains of thought
and deciding between them, this decision might
be at least partially influenced by probability (e.g.,
favoring chains that produce higher-probability answers because those answers are judged as more
plausible), which would introduce biases favoring
high-probability text (or would enhance those biases if they are already present).
It is not clear what modeling enhancements
would suffice to fully overcome the limitations that
we have highlighted. One potential solution would
be to incorporate model components that do not
involve probabilistic judgments in any way, such as
modules that execute Python code. For now at least,
the sparks of AGI (Bubeck et al., 2023) that LLMs
may be producing continue to be accompanied by
embers of autoregression.
**Competing interests**
S.Y. is employed by OpenAI, but this work is an
addendum to a project that was completed before
he started at OpenAI. Though this paper includes
some speculation about how o1 works, S.Y. did not
contribute to these parts of the paper, so the paper
should not be viewed as providing any information about how o1 works beyond what is publicly
available.
**References**
John R Anderson. 1990. The Adaptive Character of
_Thought. Erlbaum._
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg,
Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro,
and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv
_preprint arXiv:2303.12712._
Jeffrey L Elman. 1990. Finding structure in time. Cog_nitive Science, 14(2):179–211._
Thomas L Griffiths. 2020. Understanding human intelligence through human limitations. TiCS, 24(11):873–
883.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large
language models are zero-shot reasoners. NeurIPS,
35:22199–22213.
David Marr. 1982. Vision. W.H. Freeman.
R. Thomas McCoy, Shunyu Yao, Dan Friedman,
Mathew Hardy, and Thomas L. Griffiths. 2023. Embers of autoregression: Understanding large language
models through the problem they are trained to solve.
_arXiv preprint arXiv:2309.13638._
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, Charles Sutton, and Augustus Odena.
2021. Show your work: Scratchpads for intermediate
computation with language models. arXiv preprint
_arXiv:2112.00114._
Akshara Prabhakar, Thomas L. Griffiths, and R. Thomas
McCoy. 2024. Deciphering the factors influencing the efficacy of chain-of-thought: Probability,
memorization, and noisy reasoning. arXiv preprint
_arXiv:2407.01687._
Alec Radford, Karthik Narasimhan, Tim Salimans, and
Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Roger N. Shepard. 1987. Toward a universal law of
generalization for psychological science. Science,
237(4820):1317–1323.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS,
35:24824–24837.
| [
"Shunyu, Yao",
"Dan, Friedman",
"R. Thomas, McCoy",
"Mathew D., Hardy",
"Thomas L., Griffiths"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01792 | https://arxiv.org/abs/2410.01792 | https://www.semanticscholar.org/paper/5ad254fa8e53174635c3dd02adc408e581d29109 |
When and How Does Synthetic Data Improve Reasoning Capabilities of Language Models? | Training on model-generated synthetic data is a promising approach for finetuning LLMs, but it remains unclear when it helps or hurts. In this paper, we investigate this for reasoning problems via an empirical study, followed by a theoretical formalization of our observations. First, we find that while the typical approach of finetuning a model on synthetic correct or positive problem-solution pairs generated by capable models offers modest performance gains, sampling more correct solutions from the finetuned learner doubles the sample efficiency of synthetic data. At the same time, training on model-generated positives can amplify various spurious correlations, resulting in flat or even inverse scaling trends as the amount of data increases. Surprisingly, we find that several of these issues can be addressed if we also utilize negative responses, i.e. model-generated responses that are deemed incorrect via final answer checking. Crucially, these negatives must be constructed such that the training can appropriately recover the utility or credit of each intermediate step in the negative response. With this per-step scheme, we are able to attain consistent gains over only positive data, attaining performance similar to amplifying the amount of synthetic data by 8x. We show that training on per-step negatives can help to unlearn spurious correlations in the positive data, and is equivalent to advantage-weighted reinforcement learning (RL), implying that it inherits benefits of RL over imitating positive data alone. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/96295 | null | null |
Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information | Chain-of-Thought (CoT) has become a vital technique for enhancing the performance of Large Language Models (LLMs), attracting increasing attention from researchers. One stream of approaches focuses on the iterative enhancement of LLMs by continuously verifying and refining their reasoning outputs for desired quality. Despite its impressive results, this paradigm faces two critical issues: (1) Simple verification methods: The current paradigm relies solely on a single verification method. (2) Wrong Information Ignorance: Traditional paradigms directly ignore wrong information during reasoning and refine the logic paths from scratch each time. To address these challenges, we propose Wrong-of-Thought (WoT), which includes two core modules: (1) Multi-Perspective Verification: A multi-perspective verification method for accurately refining the reasoning process and result, and (2) Wrong Information Utilization: Utilizing wrong information to alert LLMs and reduce the probability of LLMs making same mistakes. Experiments on 8 popular datasets and 5 LLMs demonstrate that WoT surpasses all previous baselines. In addition, WoT exhibits powerful capabilities in difficult computation tasks. | Wrong-of-Thought (WoT) is proposed, which includes two core modules: Multi-Perspective Verification: A multi-perspective verification method for accurately refining the reasoning process and result, and Wrong Information Utilization: Utilizing wrong information to alert LLMs and reduce the probability of LLMs making same mistakes. | ## Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information
**Yongheng Zhang[1]** **Qiguang Chen[2]** **Jingxuan Zhou[1]** **Peng Wang[1]**
**Jiasheng Si[3]** **Jin Wang[4]** **Wenpeng Lu[3]** **Libo Qin[1][*]**
1School of Computer Science and Engineering, Central South University, China
2Research Center for SCIR, Harbin Institute of Technology, Harbin, China
3Key Laboratory of Computing Power Network and Information Security, Ministry of Education
Qilu University of Technology (Shandong Academy of Sciences), China
4Yunnan University, Kunming, China
Chain-of-Thought (CoT) has become a vital
technique for enhancing the performance of
Large Language Models (LLMs), attracting
increasing attention from researchers. One
stream of approaches focuses on the iterative
enhancement of LLMs by continuously verifying and refining their reasoning outputs for
desired quality. Despite its impressive results,
this paradigm faces two critical issues: (1) _Single verification method_: The current paradigm
relies solely on a single verification method. (2)
_Wrong Information Ignorance_: The traditional
paradigm directly ignores wrong information
during reasoning and refines the logic paths
from scratch each time. To address these challenges, we propose Wrong-of-Thought (WoT),
which includes two core modules: (1) _Multi-Perspective Verification_: A multi-perspective
verification method for accurately refining the
reasoning process and result, and (2) _Wrong Information Utilization_: Utilizing wrong information to alert LLMs and reduce the probability
of LLMs making the same mistakes. Experiments
on 8 popular datasets and 5 LLMs demonstrate
that WoT surpasses all previous baselines. In
addition, WoT exhibits powerful capabilities in
difficult computation tasks.
**1** **Introduction**
Figure 1: Previous multi-thought integration methods (a) vs. Wrong-of-Thought (b). Previous methods only used a Single Verification and did not utilize the wrong information. In contrast, WoT offers Multi-Perspective Verification and utilizes Wrong Information.
A common category of CoT research focuses
on iteratively enhancing LLMs by continuously reverifying and refining corresponding reasoning outputs to achieve better quality. Madaan et al. (2023)
demonstrate this approach by prompting the model
to self-verify the results and provide feedback on
previously generated drafts, producing better outputs. Similarly, Chen et al. (2023) improve code
debugging by leveraging external program execution results and model explanation code. When
examining methodologies to guide the rethinking
of models, Zheng et al. (2024) emphasize the reuse
of previously generated answers. Meanwhile, Qi
et al. (2023) introduce a problem-solving framework inspired by human divide-and-conquer strategies, which incorporates self-questioning and recursive thinking processes. Building upon this, Liu
et al. (2023b) propose XoT, shown as Figure 1 (a),
which integrates multiple reasoning paths with multiple logical modes. Specifically, they generate the
rationale in program format and apply a single verification method to check the correctness of the
reasoning. If errors are detected, the LLMs are
instructed to switch to another reasoning thought
_Failure is the mother of success._
_– Chinese Idiom_
In recent years, large language models (LLMs)
have made significant advancements in a series of
natural language processing tasks (Achiam et al.,
2023; Touvron et al., 2023). Additionally, with the
emergence of Chain-of-Thought (CoT) (Wei et al.,
2022), the performance of LLMs has been further
unlocked by guiding them through step-by-step
reasoning (Liu et al., 2023a; Qin et al., 2023).
- Corresponding Author.
and start the reasoning process from scratch. Despite achieving impressive results, they still face
two significant challenges:
(1) Single verification method: They rely solely on a single verification method, such as basic syntax assertions, which fails to fully evaluate and validate the model's reasoning. This approach leads to suboptimal verification accuracy, significantly impeding overall reasoning performance.
(2) Wrong Information Ignorance: Once an error is detected, they typically disregard the wrong information and re-generate the reasoning from scratch. However, this discards a large amount of the feedback signal carried by error information, which is often considered very important for model understanding (Zhang et al., 2024a; Tong et al., 2024; Chen et al., 2024b).
Motivated by this, we introduce the Wrong-of-Thought (WoT) framework, as illustrated in Figure 1 (b). To address the first challenge, we introduce Multi-Perspective Verification, which incorporates two additional explicit verification methods,
mirroring human problem-solving processes. First,
it ensures the variables in equations or code match
the information provided in the question. Second,
it re-solves the question to check for consistency in the results. We instruct LLMs to integrate these two perspectives to enhance solution verification. To address the second challenge, we introduce _Wrong Information Utilization_, which utilizes previous wrong reasoning information to guide LLMs
in avoiding similar mistakes. By referencing past
mistakes, LLMs can enhance their reasoning performance and minimize repetitive errors.
Experiments are conducted on 8 datasets and 5
LLMs. The results indicate that WoT performs exceptionally well across all benchmark tests, surpassing all existing baselines. Furthermore, in-depth analytical experiments demonstrate that WoT
excels at difficult computational tasks.
The key contributions of this work are:
(1) We first point out two main drawbacks of iterative reasoning, which lie in the monotonous verification perspective and the ignorance of wrong-information feedback, ultimately limiting logical improvement.
(2) We introduce Wrong-of-Thought (WoT) to
solve these drawbacks, which mainly con
Figure 2: XoT Framework. First, select a reasoning
method, either PoT or EoT and then apply assertion verification to make a judgment. If the reasoning is found to
be incorrect, switch to the alternative method and restart
the reasoning. Verify again, and if the verification is
correct, return the answer. If the reasoning reaches the
third step, utilize CoT reasoning as the answer.
sists of two modules: Multi-Perspective Ver_ification and Wrong Information Utilization._
These modules enable accurate verification
and effective utilization of wrong information.
(3) Our experiments on 8 datasets and 5 LLMs
have shown that WoT achieves superior performance. In addition, WoT demonstrates strong
problem-solving abilities in questions involving difficult mathematical reasoning.
All code will be open-sourced and publicly available at [https://github.com/](https://github.com/BRZ911/Wrong-of-Thought)
[BRZ911/Wrong-of-Thought.](https://github.com/BRZ911/Wrong-of-Thought)
**2** **Preliminary**
This section introduces the mainstream framework that integrates multiple reasoning thoughts, iteratively enhancing LLMs by continuously re-verifying and refining the corresponding reasoning.
XoT (Liu et al., 2023b), as shown in Figure 2, is
an integrated reasoning framework that combines
three reasoning modes: Program-of-Thought (PoT)
(Chen et al., 2022), Equation-of-Thought (EoT)
(Liu et al., 2023b), and Chain-of-Thought (CoT)
(Wei et al., 2022). PoT enables LLMs to generate Python code, which is then run by an external Python executor to obtain the result. EoT involves LLMs
generating mathematical equations, which are then
solved using an external calculator. CoT is a technique that guides LLMs to reason step-by-step.
-----
[Figure 3 example: Question "Fred has 5 cards. Lisa bought 3 of Fred's cards. How many cards does Fred have now?"; the framework's final answer is 2.]
Figure 3: Overview of the Wrong-of-Thought (WoT) framework, incorporating three core modules: Planning and
_Solving (§3.1), Multi-Perspective Verification (§3.2), and Wrong Information Utilization (§3.3)._
In the XoT framework, the First step involves
initiating the reasoning plan and selecting a reasoning method from either EoT or PoT to perform
the reasoning. Once the reasoning process is completed, the result is computed through an external
executor. The answer is then verified using an
assertion verification. If the reasoning result is determined to be correct, the answer is returned. If
the initial reasoning is deemed incorrect and abandoned, the Second step is to switch to an alternative
reasoning mode and restart the process. After obtaining and verifying the new reasoning answer, if
it is still incorrect, the Third step is to directly use
CoT reasoning as the final answer.
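A minimal sketch of this three-step control flow, assuming hypothetical `plan`, `solve`, and `assertion_verify` callables that wrap the planner LLM, the solver plus external executor, and the assertion check, is shown below; it illustrates the switching logic rather than the released XoT code.

```python
def xot_pipeline(question, plan, solve, assertion_verify):
    """XoT-style loop: try PoT/EoT with assertion verification, then fall back to CoT."""
    first = plan(question)                                 # planner picks "PoT" or "EoT"
    second = "EoT" if first == "PoT" else "PoT"
    for method in (first, second):                         # first and second steps
        reasoning, answer = solve(question, method)        # generate + execute externally
        if assertion_verify(question, reasoning, answer):  # single verification in XoT
            return answer
    _, answer = solve(question, "CoT")                     # third step: CoT as final answer
    return answer
```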
**3** **Wrong-of-Thought**
This section introduces Wrong-of-Thought (WoT).
The content is divided into three parts: Planning
_and Solving (§3.1), Multi-Perspective Verification_
(§3.2), and Wrong Information Utilization (§3.3).
**3.1** **Planning and Solving**
Following XoT (Liu et al., 2023b), as shown in Figure 3 (§3.1), initially, a planner selects a reasoning
method from either EoT or PoT based on the input question. After the Solver module generates
the reasoning process, an external executor computes the result, yielding a preliminary solution.
The next step is to validate the current solution.
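A minimal sketch of such a planner call is given below; the prompt wording and the `llm` callable are assumptions made for illustration and are not the prompts used by XoT or WoT.

```python
def plan_reasoning_method(question: str, llm) -> str:
    """Ask an LLM to choose the first reasoning method (PoT or EoT) for a question."""
    prompt = (
        "Choose the better first reasoning method for the math question below.\n"
        "Answer with exactly one word: PoT (Python program) or EoT (equation system).\n\n"
        f"Question: {question}"
    )
    choice = llm(prompt).strip()
    return choice if choice in {"PoT", "EoT"} else "PoT"  # conservative fallback
```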
**3.2** **Multi-Perspective Verification**
To address the challenge of verification methods
being singular and significantly hindering the overall performance, we propose a Multi-Perspective
_Verification (MPV), as shown in Figure 3 (§3.2)._
Specifically, Multi-Perspective Verification is applicable to the reasoning verification of EoT and PoT,
which includes the following three aspects:
(1) Assertion Verification: We adopt the verification method from the XoT (Liu et al., 2023b).
We use LLMs to identify the intermediate
variables in the solution and format them as
Assertion Statements . These assertion statements are then executed using external tools
to obtain the verification results.
(2) Process Verification: For process verification, we provide the LLMs only with the
Current Process, excluding the computed results. We ask the LLMs to recheck each step
of the current reasoning process to ensure that
the variables in the solution equations or code
correspond one-to-one with the question information, explicitly demonstrating the verification reasoning process.
(3) Result Verification: In the results verification
phase, we provide the LLMs with both the
current reasoning process and the computed
results. We instruct the LLMs to recheck the
Current Result by re-solving the problem. If
the result passes re-verification, the LLMs output “right”; otherwise, they output “error”.
This explicitly demonstrates the verification
reasoning results.
To enhance the robustness of our verification, we employ a voting mechanism to select the judgment that exhibits the highest consistency across the different verification perspectives $V_t$. This consistent judgment is then used as the final MPV result $\hat{V}$ for the output $R$ of the reasoning method $M_i$. The verification can be formalized as follows:

$$\hat{V} = \operatorname*{argmax}_{V_t \in V} \sum_{R \in M_i} \mathbb{1}(V_t = R), \tag{1}$$

where $V_t$ represents a verification method, $V$ represents the set of the three verification methods, $R$ represents the output using the reasoning method $M_i$, and $\mathbb{1}(V_t = R)$ returns 1 if the verification method $V_t$ matches the output $R$, and 0 otherwise.
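The following sketch illustrates this majority vote; only the assertion check is implemented concretely (by executing LLM-generated assertion statements), while the process and result judgments are assumed to be strings returned by the corresponding LLM prompts. It is an illustration of Eq. (1), not the WoT implementation.

```python
from collections import Counter

def assertion_verify(assertion_code: str) -> str:
    """Execute LLM-generated assertion statements with an external executor."""
    try:
        exec(assertion_code, {})
        return "right"
    except AssertionError:
        return "error"

def multi_perspective_verify(judgments):
    """Return the most consistent judgment across the verification perspectives (Eq. 1)."""
    return Counter(judgments).most_common(1)[0][0]

# Hypothetical judgments for a reasoning output whose answer is 8:
votes = [
    assertion_verify("fred_cards = 5\nlisa_cards = 3\nassert fred_cards - lisa_cards == 8"),
    "error",  # judgment assumed to come from the Process Verification prompt
    "error",  # judgment assumed to come from the Result Verification prompt
]
print(multi_perspective_verify(votes))  # -> "error"
```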
**3.3** **Wrong Information Utilization**
To address the issue of previous methods ignoring
wrong information, we propose Wrong Informa_tion Utilization (WIU), as shown in Figure 3 (§3.3)._
Specifically, after the previous solution is validated
and determined to be wrong, we incorporate the
prior Wrong Information within the context of the
current solution method. This guides the LLMs to
avoid repeating the same mistakes. Formally, the
reasoning for the question Q after utilizing wrong
reasoning information can be expressed by the following formula:
$$\hat{R} = \operatorname*{argmax}_{R \in M_i} P(R \mid Q, I, WI), \tag{2}$$

where $\hat{R}$ represents the final reasoning result, $P(R \mid Q, I, WI)$ denotes the probability of generating the reasoning path $R$ under the conditions of question $Q$, prompt $I$, and wrong information $WI$, and $R$ is a reasoning output of the reasoning method $M_i$.
After obtaining the reasoning results, we use the
_Multi-Perspective Verification to make a judgment._
If the judgment is correct, the answer is returned
directly. If the judgment is wrong, following XoT,
we proceed to the third step, where the errors from
this step and the previous step will be used as wrong
examples for CoT reasoning.
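A minimal sketch of how the wrong-information context of Eq. (2) could be assembled is shown below; the instruction wording and the `llm` callable are assumptions for illustration, not the exact WoT prompts.

```python
def build_wiu_prompt(question, instruction, wrong_examples):
    """Build a prompt conditioned on question Q, prompt I, and wrong information WI."""
    wrong_block = "\n\n".join(
        f"Wrong Reasoning {i + 1}:\n{w}" for i, w in enumerate(wrong_examples)
    )
    return (
        f"{instruction}\n\n"
        f"Question: {question}\n\n"
        f"{wrong_block}\n\n"
        "The wrong information above has been provided; avoid the same mistake."
    )

def retry_with_wrong_info(question, instruction, wrong_examples, llm):
    """Re-generate reasoning guided by the accumulated wrong reasoning."""
    return llm(build_wiu_prompt(question, instruction, wrong_examples))
```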
**4** **Experiments**
**4.1** **Experimental Setting**
We conduct experiments on eight widely used comprehensive datasets, including GSM8K (Cobbe
et al., 2021), GSM-Hard (Gao et al., 2023), Algebra (He-Yueya et al., 2023), MultiArith (Roy and
Roth, 2015), SingleEQ (Koncel-Kedziorski et al.,
2015), SingleOP (Roy et al., 2015), AddSub (Hosseini et al., 2014), and SVAMP (Patel et al., 2021).
-----
| Method | GSM-hard | GSM8K | Algebra | MultiArith | SingleEQ | SingleOP | AddSub | SVAMP | Average |
|---|---|---|---|---|---|---|---|---|---|
| _Mistral-7B-Instruct (Jiang et al., 2023)_ | | | | | | | | | |
| CoT (Wei et al., 2022) | 16.6 | 47.5 | 36.0 | 68.8 | 78.3 | 81.1 | 73.9 | 60.8 | 57.9 |
| PoT (Chen et al., 2022) | 30.8 | 45.0 | 28.4 | 72.8 | 75.8 | 64.4 | 74.7 | 56.5 | 56.0 |
| EoT (Liu et al., 2023b) | 16.1 | 22.3 | 27.0 | 25.0 | 31.1 | 33.6 | 29.1 | 23.5 | 26.0 |
| XoT (Liu et al., 2023b) | 26.2 | 52.8 | 46.8 | 77.8 | 86.6 | 85.4 | 80.0 | 67.9 | 65.5 |
| Wrong-of-Thought | **36.7** | **54.6** | **50.5** | **80.8** | **88.0** | **87.9** | **88.9** | **70.0** | **69.7** |
| _Qwen-7B-Chat (Bai et al., 2023)_ | | | | | | | | | |
| CoT (Wei et al., 2022) | 18.6 | 52.8 | 43.7 | 83.2 | 87.4 | 83.1 | 80.5 | 70.7 | 65.0 |
| PoT (Chen et al., 2022) | 39.0 | 56.2 | 38.7 | 84.8 | 90.6 | 89.7 | 82.5 | 71.3 | 69.1 |
| EoT (Liu et al., 2023b) | 35.3 | 49.2 | 34.2 | 61.5 | 76.0 | 63.5 | 65.1 | 48.0 | 54.1 |
| XoT (Liu et al., 2023b) | 38.3 | 61.8 | 54.5 | 88.7 | 92.1 | 92.3 | 85.1 | 76.4 | 73.6 |
| Wrong-of-Thought | **42.0** | **63.7** | **57.2** | **91.3** | **94.1** | **93.6** | **86.3** | **79.3** | **75.9** |
| _Qwen-14B-Chat (Bai et al., 2023)_ | | | | | | | | | |
| CoT (Wei et al., 2022) | 31.0 | 63.4 | 56.8 | 89.8 | 88.0 | 85.4 | 85.3 | 80.8 | 72.6 |
| PoT (Chen et al., 2022) | 57.1 | 69.5 | 62.6 | 95.7 | 95.7 | 96.1 | 86.8 | 81.6 | 80.6 |
| EoT (Liu et al., 2023b) | 57.6 | 68.5 | 62.6 | 85.7 | 90.6 | 82.2 | 83.8 | 79.2 | 76.3 |
| XoT (Liu et al., 2023b) | 55.3 | 76.3 | 80.2 | 92.0 | 94.1 | 94.5 | 86.1 | 84.8 | 82.9 |
| Wrong-of-Thought | **60.6** | **77.5** | **81.5** | **98.3** | **96.7** | **95.4** | **88.1** | **86.3** | **85.5** |
| _Gemini-1.0-Pro (Team et al., 2023)_ | | | | | | | | | |
| CoT (Wei et al., 2022) | 45.6 | 81.9 | 81.5 | 94.8 | 96.1 | 94.7 | 92.9 | 83.0 | 83.8 |
| PoT (Chen et al., 2022) | 63.8 | 77.1 | 58.1 | 96.3 | 96.3 | 96.3 | 91.6 | 87.1 | 83.3 |
| EoT (Liu et al., 2023b) | 52.2 | 61.1 | 63.5 | 80.0 | 79.7 | 75.3 | 78.0 | 71.3 | 70.1 |
| XoT (Liu et al., 2023b) | 64.6 | 82.1 | 83.3 | 96.5 | 96.1 | 96.3 | 91.4 | 86.9 | 87.2 |
| Wrong-of-Thought | **69.1** | **84.4** | **85.6** | **97.3** | **97.4** | **97.3** | **93.4** | **89.2** | **89.2** |
| _GPT-3.5-Turbo (OpenAI, 2022)_ | | | | | | | | | |
| CoT (Wei et al., 2022) | 42.2 | 80.0 | 72.1 | 97.3 | 96.5 | 94.7 | 89.4 | 80.2 | 81.5 |
| PoT (Chen et al., 2022) | 70.3 | 77.4 | 81.5 | 97.8 | 98.6 | 94.3 | 88.9 | 79.2 | 86.0 |
| EoT (Liu et al., 2023b) | 53.4 | 64.0 | 70.3 | 84.8 | 61.4 | 68.5 | 70.1 | 58.9 | 66.4 |
| XoT (Liu et al., 2023b) | 71.3 | 83.6 | 84.7 | 97.8 | 97.6 | 94.5 | 89.4 | 83.0 | 87.7 |
| Wrong-of-Thought | **76.2** | **85.2** | **89.6** | **99.0** | **99.0** | **96.1** | **93.2** | **86.7** | **90.6** |
Table 1: Experimental results of Acc. (%) on eight datasets and five LLMs. Bold represents the best performance.
The effectiveness of the WoT framework was validated on these challenging benchmarks. Additionally, we select the single reasoning methods CoT (Wei et al., 2022), PoT (Chen et al., 2022), and EoT (Liu et al., 2023b), as well as the ensemble method XoT (Liu et al., 2023b), as baselines. The verification process was conducted on a comprehensive set of five LLMs. Among these, three are open-source LLMs: Mistral-7B-Instruct (Jiang et al., 2023), Qwen-7B-Chat (Bai et al., 2023), and Qwen-14B-Chat (Bai et al., 2023). The other two LLMs are closed-source: Gemini-1.0-Pro (Team et al., 2023) and GPT-3.5-Turbo (OpenAI, 2022). These models were selected to provide a diverse representation of current advanced LLMs, both open and closed-source, ensuring a robust and comprehensive verification.
Following XoT (Liu et al., 2023b), all experiments used 8-shot correct examples as prompts. The experimental results were evaluated using Accuracy as the evaluation metric. The top-p and temperature parameters for all experiments were set to LLMs default parameters in the official model configuration, which are within the range of [0,1].
**4.2** **Main Results**
The main experimental results are shown in Table 1. Based on the results, we can observe:
**(1) WoT reaches superior performance.** WoT surpasses all baselines, achieving superior performance on eight datasets, with an average improvement of 2.8% compared to XoT across five LLMs. This extensive experimental result demonstrates the effectiveness of the integration of _Multi-Perspective Verification_ and _Wrong Information Utilization_ in WoT, enhancing overall performance.
**(2) WoT can also work on LLMs with smaller parameters.** WoT achieves an average improvement of 4.2% and 2.3% on the smaller parameter open-source models, Mistral-7B-Instruct and Qwen1.5-7B-Chat, respectively, demonstrating robust performance. The ability of WoT to maintain high performance on models with fewer parameters highlights its potential for broad applicability in various practical scenarios, including those with limited computational resources.
**(3) WoT demonstrates a powerful ability to solve difficult reasoning questions.** WoT achieves an average performance on GSM-Hard that was 5.7% higher than the baselines on five LLMs, representing a significant improvement. The GSM-Hard
dataset, a mathematical reasoning dataset where
small numerical values are replaced with large ones
(average result: 7.3e9), demonstrates the strong
performance of WoT in difficult reasoning tasks.
**4.3** **WoT Analysis**
To gain a more profound understanding of WoT, we
propose the following research questions based on
experiments on GPT-3.5-Turbo (OpenAI, 2022):
(1) Can Wrong Information Utilization lead to
_performance improvement?_
(2) Can Multi-Perspective Verification lead to
_more accurate judgment results?_
(3) Can WoT reduce the number of required rea_soning steps?_
(4) Why does WoT have strong capabilities in dif_ficult mathematical reasoning?_
(5) What is the intuition behind WoT?
**4.3.1** **Answer 1: Wrong Information**
**_Utilization can boost performance_**
To intuitively verify the performance improvements
brought by using wrong information, we select PoT,
EoT, and CoT that utilized wrong information from
the GSM8K dataset for evaluation. We compare
their performance with and without wrong information. Additionally, we test the WoT performance
without the Wrong Information Utilization. Due
to the limitation within the WoT, EoT and PoT can
only collect incorrect information once, resulting
in a single wrong example. On the other hand, CoT
can collect incorrect information up to two times,
resulting in two wrong examples.
The results are shown in Figure 5. After
incorporating wrong information from the previous step, EoT and PoT improved by 8% and 8.9%,
respectively. We can observe that CoT, which utilized additional wrong information from the previous two steps, improved by 13.1%. Furthermore,
as shown in Table 2, the WoT framework without
-----
| Methods | GSM-hard | GSM8K | Algebra | MultiArith | SingleEQ | SingleOP | AddSub | SVAMP | AVG |
|---|---|---|---|---|---|---|---|---|---|
| Wrong-of-Thought | 76.2 | 85.2 | 89.6 | 99.0 | 99.0 | 96.1 | 93.2 | 86.7 | 90.6 |
| _w/o WIU_ | 73.9 | 84.0 | 87.8 | 98.8 | 98.4 | 95.9 | 92.6 | 85.5 | 89.6 (-1.0) |
| _w/o MPV_ | 73.1 | 82.4 | 87.4 | 98.3 | 98.6 | 94.5 | 90.4 | 85.6 | 88.8 (-1.8) |
| _w/o WIU & MPV_ | 71.3 | 83.6 | 84.7 | 97.8 | 97.6 | 94.5 | 89.4 | 83.0 | 87.7 (-2.9) |
Table 2: Ablation experiment on GPT-3.5-Turbo. “w/o WIU” refers to removing Wrong Information Utilization
(WIU). “w/o MPV” refers to removing Multi-Perspective Verification (MPV). “w/o WIU & MPV” refers to removing
both Wrong Information Utilization and Multi-Perspective Verification.
| Reasoning | Verification | Acc. | F1 |
|---|---|---|---|
| EoT | Assertion | 82.2 | 85.2 |
| EoT | Process | 86.3 | 89.7 |
| EoT | Result | 84.8 | 88.0 |
| EoT | Voting | 88.2 | 90.9 |
| PoT | Assertion | 79.8 | 87.6 |
| PoT | Process | 81.2 | 89.0 |
| PoT | Result | 84.2 | 91.1 |
| PoT | Voting | 84.8 | 91.4 |
Figure 4: Performance comparison results from various verification perspectives. “Voting” represents the final
judgment after voting from the three perspectives.
| Method | w/o wrong information | w/ wrong information |
|---|---|---|
| EoT | 38.0 | 46.0 |
| PoT | 64.3 | 73.2 |
| CoT | 62.3 | 75.4 |
| AVG | 54.9 | 64.9 |
Figure 5: Comparison of performance without utilizing
wrong reasoning information and with integrated wrong
reasoning information.
_Wrong Information Utilization exhibits a perfor-_
mance decrease across all datasets, with an average
reduction of 1.0%. This demonstrates that incorporating wrong information can boost the reasoning
performance of the LLMs, and more significant
improvements can be achieved by utilizing more
additional wrong reasoning information.
**4.3.2** **Answer 2: Multi-Perspective Verification**
**can lead to more accurate judgments**
To demonstrate that Multi-Perspective Verification
can accurately judge the results generated by EoT
and PoT, we directly evaluated the performance
of the three perspectives and the final voting re
sults of the three perspectives. For accurate assessment, we use accuracy (Acc.) and F1 score (F1)
as evaluation metrics. Additionally, we evaluate
the performance of the WoT framework without
_Multi-Perspective Verification to demonstrate the_
effectiveness of Multi-Perspective Verification.
The results are shown in Figure 4. We can directly observe that our proposed Process Verification and Result Verification outperform the Assertion Verification used in XoT with respect to accuracy and F1 score. Furthermore, the final Voting
Verification further improves the accuracy. For EoT,
Acc and F1 improved by 6% and 5.7%, respectively,
while for PoT, they improved by 5% and 3.8%, respectively. Additionally, as shown in Table 2, the
performance of WoT decreased by an average of
1.8% after the removal of Multi-Perspective Ver_ification. This demonstrates the effectiveness of_
_Multi-Perspective Verification, bringing significant_
benefits to overall performance improvement.
**4.3.3** **Answer 3: WoT can effectively minimize**
**the reasoning steps needed**
To compare the reasoning steps required by XoT
and WoT in solving mathematical questions, we
conduct experiments and record the average reasoning steps needed. As shown in Figure 6, the
results indicate that WoT significantly reduces the
-----
[Figure 7 data. Accuracy on GSM-hard: CoT 42.2, PoT 70.3, EoT 53.4, XoT 71.3, WoT 76.2. Proportion of reasoning methods ultimately used: XoT: CoT 21%, PoT 48%, EoT 31%; WoT: CoT 6%, PoT 63%, EoT 31%.]
Figure 7: The proportion of reasoning methods ultimately used to solve questions by XoT and WoT on the
GSM-hard dataset.
**4.3.5** **Answer 5: Qualitative analysis**
To better comprehend WoT, we introduce a real-world example for qualitative analysis. As illustrated in Figure 8 (a), upon receiving a question,
XoT selects EoT for reasoning. However, due to
the limited reasoning capability of EoT, an incorrect result of “8” was generated. During Assertion
Verification, this incorrect result was mistakenly
identified as correct. As XoT relied solely on Assertion Verification, it erroneously output “8” as
the final result. This example clearly illustrates the
limitations of the single verification method and its
adverse impact on reasoning accuracy.
In contrast, as shown in Figure 8 (b), WoT, when
presented with the same question, initially also arrives at the incorrect answer “8”. However, both
Process Verification and Result Verification identified “8” as incorrect. Consequently, the system
switches to PoT for the next reasoning step. In
PoT reasoning, after being warned with a wrong
example, PoT generates the correct reasoning and
arrives at the correct result, “2”. This result then
passed verification from all three perspectives, ultimately confirming the correct answer, “2”. This
case further demonstrates the effectiveness of WoT,
as combining three verification perspectives and
utilizing wrong reasoning information significantly
enhances reasoning capability.
**5** **Related Work**
The rapid advancement of LLMs in recent years has
introduced new opportunities in natural language
processing (OpenAI, 2022; Team et al., 2023; Qin
et al., 2024a,b). Particularly, the introduction of
Chain-of-Thought (CoT) (Wei et al., 2022) opens
a novel direction in this domain, attracting many
researchers (Zhang et al., 2022; Fei et al., 2023,
2024; Zhang et al., 2024b; Xu et al., 2024a; Chen
et al., 2024a). Specifically, Wei et al. (2022) propose using manually constructed CoT demonstra
Figure 6: Comparison of the average reasoning steps
required by XoT and WoT in solving questions.
reasoning steps on each dataset, with an average reduction of 8%. This indirectly demonstrates
the effectiveness of Multi-Perspective Verification,
and Wrong Information Utilization in WoT. Accurate verification and efficient reasoning can effectively reduce the number of required reasoning
steps, thereby enhancing reasoning efficiency.
**4.3.4** **Answer 4: Tips for solving difficult**
**mathematical questions with WoT**
To delve deeper into the reasons behind the significant performance improvement of WoT in solving
reasoning challenges, we conduct a detailed analysis in this section. In the GSM-hard dataset, we
extract the proportions of the methods ultimately
used for reasoning, as shown in Figure 7. Our
analysis reveals notable changes in the reasoning
method proportions between XoT and WoT: the proportion of CoT decreased from 21% → 6%, while
the proportion of PoT increased from 48% → 63%.
This change reflects the advantage of WoT in
reasoning strategies. The numerical values in the
GSM-hard dataset are usually large, often involving more than 10 digits. CoT reasoning has lower accuracy when handling such large-number calculations, with an accuracy rate of only 42.2%. Since XoT relies more on CoT for reasoning, it attains lower accuracy. In contrast, WoT introduces a multi-perspective verification
mechanism, enabling more accurate judgment of
reasoning results. Consequently, WoT more frequently adopts PoT for reasoning, thereby avoiding
errors associated with CoT, and achieving significant overall improvement.
-----
Figure 8: The case study. Figure (a) shows XoT reasoning, where it initially outputs an incorrect answer, “8”.
Assertion Verification mistakenly validated this as correct, resulting in the final wrong output of “8”. Figure (b)
shows WoT reasoning. EoT first outputs an incorrect answer, which was identified as wrong by Process and Result
Verification, switching to PoT. Using the wrong reasoning of EoT, PoT arrived at the correct answer, “2”. All three
verification methods then confirmed this result, leading to the correct output of “2”.
tions to enhance LLMs' performance. Additionally, Chen et al. (2022) introduce Program-of-Thoughts (PoT), enabling LLMs to generate
Python programs to solve mathematical problems.
Liu et al. (2023b) propose Equation-of-Thoughts
(EoT), allowing LLMs to generate mathematical
equations and then use external calculators to compute the results, offering a new perspective on
problem-solving with LLMs. Chen et al. (2024b)
propose a framework that iteratively explores and
self-evaluates trees of thoughts, allowing LLMs to
learn from trial and error and improve the quality
of final answers. Xu et al. (2024b) propose transitioning LLMs from passive to active learning,
thus enhancing their problem-solving capabilities.
Zhou et al. (2024) present a method for LLMs to
improve self-criticism and self-discovery, thereby
forming explicit structures to enhance reasoning
performance. Chen et al. (2023) propose using error code to implement code self-debug and improve
the code generation capability of LLMs.
In the realm of nonlinear problem solving, Yao
et al. (2023) introduce the Tree-of-Thoughts (ToT)
framework, enabling LLMs to generate multiple
reasoning paths to tackle mathematical reasoning
tasks. Sel et al. (2023) propose the Algorithm-of-Thoughts (AoT), which not only generates multiple paths but also selects the optimal nodes, allowing for the repeated utilization of reasoning pathways. Besta et al. (2024) introduce Graph-of-Thoughts (GoT), a framework that models the information generated by LLMs as arbitrary graphs, enabling the synergistic integration of all reasoning processes. Ning et al. (2024) propose Skeleton-of-Thought (SoT), which first generates the skeleton
of the answer and then utilizes LLMs for batched
resolution, enhancing inference efficiency. Liu et al.
(2023b) propose XoT, which integrates multiple
reasoning thoughts and utilizes single assertion
verification to decide whether to switch reasoning
methods, achieving impressive results.
Compared to previous research, WoT employs
multiple perspectives of verification while incorporating wrong information utilization. This effectively enhances overall reasoning performance. To our knowledge, this work is the first
to incorporate Multi-Perspective Verification and
_Wrong Information Utilization within the continu-_
ously verifying and iterative framework.
**6** **Conclusion**
In this work, we propose WoT, a framework that optimizes outputs by utilizing wrong information and
multi-perspective verification. WoT comprises two
core modules: Multi-Perspective Verification and
_Wrong Information Utilization. WoT achieves more_
accurate reasoning thought switching and utilizes
wrong reasoning information. Extensive evaluations on eight datasets and five models demonstrate
that WoT achieves superior performance. Furthermore, WoT exhibits powerful capabilities in difficult computation tasks.
-----
**Limitations**
This work proposes a WoT framework to enhance
verifying iteratively generated reasoning answers
by Multi-Perspective Verification and Wrong Infor_mation Utilization. However, in our work, since_
“Assertion Verification” requires reliance on external rule executors, how to verify natural language-based CoT through assertions remains a question
worthy of future research. Secondly, our verification method primarily validates the logical correctness of the model. Verifying the clarity and quality
of the logical expression might further enhance
the effectiveness of model reasoning. Finally, WoT
may spend more tokens due to the incorporation of
three verification perspectives and wrong reasoning
information. We hope future work develops more
efficient methods to address this challenge.
**Acknowledgments**
This work was supported by the National Natural Science Foundation of China (NSFC) via
grant 62306342. This work was also sponsored
by the Excellent Young Scientists Fund in Hunan
Province (2024JJ4070) and the Science and Technology Innovation Program of Hunan Province under Grant 2024RC3024. This work was supported
by the Key Laboratory of Computing Power Network and Information Security, Ministry of Education under Grant No.2023ZD032. We are grateful
for resources from the High Performance Computing Center of Central South University.
**References**
Josh Achiam, Steven Adler, Sandhini Agarwal, et al.
[2023. Gpt-4 technical report. abs/2303.08774.](https://arxiv.org/abs/2303.08774)
[Jinze Bai, Shuai Bai, Yunfei Chu, et al. 2023. Qwen](https://arxiv.org/abs/2309.16609)
[technical report. ArXiv preprint, abs/2309.16609.](https://arxiv.org/abs/2309.16609)
Maciej Besta, Nils Blach, Ales Kubicek, et al.
2024. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings
_of the AAAI Conference on Artificial Intelligence,_
volume 38, pages 17682–17690.
Qiguang Chen, Libo Qin, Jin Zhang, Zhi Chen, Xiao
[Xu, and Wanxiang Che. 2024a. M[3]CoT: A novel](https://doi.org/10.18653/v1/2024.acl-long.446)
[benchmark for multi-domain multi-step multi-modal](https://doi.org/10.18653/v1/2024.acl-long.446)
[chain-of-thought. In Proceedings of the 62nd Annual](https://doi.org/10.18653/v1/2024.acl-long.446)
_Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), pages 8199–8221._
[Sijia Chen, Baochun Li, and Di Niu. 2024b. Boosting](https://openreview.net/forum?id=qBL04XXex6)
[of thoughts: Trial-and-error problem solving with](https://openreview.net/forum?id=qBL04XXex6)
[large language models. In The Twelfth International](https://openreview.net/forum?id=qBL04XXex6)
_Conference on Learning Representations._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on
_Machine Learning Research._
Xinyun Chen, Maxwell Lin, Nathanael Schaerli, and
Denny Zhou. 2023. Teaching large language models
to self-debug. In The 61st Annual Meeting Of The
_Association For Computational Linguistics._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
[Nakano, et al. 2021. Training verifiers to solve math](https://arxiv.org/abs/2110.14168)
[word problems. ArXiv preprint, abs/2110.14168.](https://arxiv.org/abs/2110.14168)
Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, and
Tat-Seng Chua. 2023. Reasoning implicit sentiment
with chain-of-thought prompting. In Proceedings
_of the 61st Annual Meeting of the Association for_
_Computational Linguistics (Short Papers)._
Hao Fei, Shengqiong Wu, Wei Ji, Hanwang Zhang,
Meishan Zhang, Mong-Li Lee, and Wynne Hsu. 2024.
Video-of-thought: Step-by-step video reasoning from
perception to cognition. In Forty-first International
_Conference on Machine Learning._
Luyu Gao, Aman Madaan, Shuyan Zhou, et al.
2023. Pal: Program-aided language models. In In_ternational Conference on Machine Learning, pages_
10764–10799. PMLR.
Joy He-Yueya, Gabriel Poesia, Rose E Wang, and
[Noah D Goodman. 2023. Solving math word prob-](https://arxiv.org/abs/2304.09102)
[lems by combining language models with symbolic](https://arxiv.org/abs/2304.09102)
[solvers. ArXiv preprint, abs/2304.09102.](https://arxiv.org/abs/2304.09102)
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
[Etzioni, and Nate Kushman. 2014. Learning to solve](https://doi.org/10.3115/v1/D14-1058)
[arithmetic word problems with verb categorization.](https://doi.org/10.3115/v1/D14-1058)
In Proceedings of the 2014 Conference on Empirical
_Methods in Natural Language Processing (EMNLP),_
pages 523–533.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, et al. 2023. [Mistral 7b. ArXiv preprint,](https://arxiv.org/abs/2310.06825)
abs/2310.06825.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
[2015. Parsing algebraic word problems into equa-](https://doi.org/10.1162/tacl_a_00160)
[tions. Transactions of the Association for Computa-](https://doi.org/10.1162/tacl_a_00160)
_tional Linguistics, 3:585–597._
Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli
Zhang, Qiji Zhou, and Yue Zhang. 2023a. Logicot:
Logical chain-of-thought instruction tuning. In The
_2023 Conference on Empirical Methods in Natural_
_Language Processing._
-----
Tengxiao Liu, Qipeng Guo, Yuqing Yang, Xiangkun Hu,
Yue Zhang, Xipeng Qiu, and Zheng Zhang. 2023b.
[Plan, verify and switch: Integrated reasoning with](https://doi.org/10.18653/v1/2023.emnlp-main.169)
[diverse X-of-thoughts. In Proceedings of the 2023](https://doi.org/10.18653/v1/2023.emnlp-main.169)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 2807–2822._
Aman Madaan, Niket Tandon, Prakhar Gupta, et al.
2023. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36.
Xuefei Ning, Zinan Lin, Zixuan Zhou, et al. 2024.
Skeleton-of-thought: Prompting llms for efficient parallel generation. In The Twelfth International Confer_ence on Learning Representations._
[OpenAI. 2022. Chatgpt.](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094._
Jingyuan Qi, Zhiyang Xu, Ying Shen, et al. 2023.
The art of socratic questioning: Recursive thinking
with large language models. In Proceedings of the
_2023 Conference on Empirical Methods in Natural_
_Language Processing, pages 4177–4199._
Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu,
Yongheng Zhang, Yinghui Li, Min Li, Wanxiang
[Che, and Philip S Yu. 2024a. Large language models](https://arxiv.org/abs/2405.12819)
[meet nlp: A survey. ArXiv preprint, abs/2405.12819.](https://arxiv.org/abs/2405.12819)
Libo Qin, Qiguang Chen, Fuxuan Wei, Shijue Huang,
and Wanxiang Che. 2023. Cross-lingual prompting: Improving zero-shot chain-of-thought reasoning
across languages. In Proceedings of the 2023 Con_ference on Empirical Methods in Natural Language_
_Processing, pages 2695–2709._
Libo Qin, Qiguang Chen, Yuhang Zhou, Zhi Chen,
Yinghui Li, Lizi Liao, Min Li, Wanxiang Che, and
Philip S Yu. 2024b. [Multilingual large language](https://arxiv.org/abs/2404.04925)
[model: A survey of resources, taxonomy and fron-](https://arxiv.org/abs/2404.04925)
[tiers. ArXiv preprint, abs/2404.04925.](https://arxiv.org/abs/2404.04925)
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://doi.org/10.18653/v1/D15-1202)
[metic word problems. In Proceedings of the 2015](https://doi.org/10.18653/v1/D15-1202)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1743–1752._
[Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reason-](https://doi.org/10.1162/tacl_a_00118)
[ing about quantities in natural language. Transac-](https://doi.org/10.1162/tacl_a_00118)
_tions of the Association for Computational Linguis-_
_tics, Volume 3:1–13._
Bilgehan Sel, Ahmad Tawaha, Vanshaj Khattar, et al.
2023. Algorithm of thoughts: Enhancing exploration
of ideas in large language models. In Forty-first
_International Conference on Machine Learning._
Gemini Team, Rohan Anil, Sebastian Borgeaud,
Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai,
Anja Hauth, et al. 2023. [Gemini: a family of](https://arxiv.org/abs/2312.11805)
[highly capable multimodal models. ArXiv preprint,](https://arxiv.org/abs/2312.11805)
abs/2312.11805.
Yongqi Tong, Dawei Li, Sizhe Wang, et al. 2024.
[Can llms learn from previous mistakes? investigating](https://arxiv.org/abs/2403.20046)
[llms’ errors to boost for reasoning. ArXiv preprint,](https://arxiv.org/abs/2403.20046)
abs/2403.20046.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: [Open founda-](https://arxiv.org/abs/2307.09288)
[tion and fine-tuned chat models.](https://arxiv.org/abs/2307.09288) _ArXiv preprint,_
abs/2307.09288.
Jason Wei, Xuezhi Wang, Dale Schuurmans, et al.
2022. Chain-of-thought prompting elicits reasoning
in large language models. Advances in neural infor_mation processing systems, 35:24824–24837._
Jundong Xu, Hao Fei, Liangming Pan, Qian Liu, Mong[Li Lee, and Wynne Hsu. 2024a. Faithful logical](https://doi.org/10.18653/v1/2024.acl-long.720)
[reasoning via symbolic chain-of-thought. In Pro-](https://doi.org/10.18653/v1/2024.acl-long.720)
_ceedings of the 62nd Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 1: Long_
_Papers), pages 13326–13365._
Zhipeng Xu, Zhenghao Liu, Yibin Liu, et al. 2024b.
[ActiveRAG: Revealing the treasures of knowledge](https://arxiv.org/abs/2402.13547)
[via active learning. ArXiv preprint, abs/2402.13547.](https://arxiv.org/abs/2402.13547)
Shunyu Yao, Dian Yu, Jeffrey Zhao, et al. 2023.
[Tree of thoughts: Deliberate problem solving with](https://proceedings.neurips.cc/paper_files/paper/2023/file/271db9922b8d1f4dd7aaef84ed5ac703-Paper-Conference.pdf)
[large language models. In Advances in Neural Infor-](https://proceedings.neurips.cc/paper_files/paper/2023/file/271db9922b8d1f4dd7aaef84ed5ac703-Paper-Conference.pdf)
_mation Processing Systems, volume 36, pages 11809–_
11822. Curran Associates, Inc.
Tianjun Zhang, Aman Madaan, Luyu Gao, Steven
Zhang, Swaroop Mishra, Yiming Yang, Niket Tandon, and Uri Alon. 2024a. In-context principle learning from mistakes. In ICML 2024 Workshop on In_Context Learning._
Yongheng Zhang, Qiguang Chen, Min Li, Wanxiang
[Che, and Libo Qin. 2024b. Autocap: Towards auto-](https://doi.org/10.18653/v1/2024.findings-acl.546)
[matic cross-lingual alignment planning for zero-shot](https://doi.org/10.18653/v1/2024.findings-acl.546)
[chain-of-thought. pages 9191–9200.](https://doi.org/10.18653/v1/2024.findings-acl.546)
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
Smola. 2022. Automatic chain of thought prompting
in large language models. In The Eleventh Interna_tional Conference on Learning Representations._
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo
[Li, and Yu Li. 2024. Progressive-hint prompting](https://openreview.net/forum?id=UkFEs3ciz8)
[improves reasoning in large language models. In AI](https://openreview.net/forum?id=UkFEs3ciz8)
_for Math Workshop @ ICML 2024._
Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, HengTze Cheng, Quoc V Le, Ed H Chi, Denny Zhou, Swaroop Mishra, and Huaixiu Steven Zheng. 2024. Selfdiscover: Large language models self-compose reasoning structures. arXiv preprint arXiv:2402.03620.
-----
| [
"Yongheng, Zhang",
"Qiguang, Chen",
"Libo, Qin",
"Jingxuan, Zhou",
"Peng, Wang",
"Jiasheng, Si",
"Jin, Wang",
"Wenpeng, Lu"
] | 2024-10-06T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.04463v1 | https://arxiv.org/abs/2410.04463 | https://www.semanticscholar.org/paper/33f87600bcb0dee2422500e533ad59ebc26cae78 |
Wu’s Method Boosts Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry | Proving geometric theorems constitutes a hallmark of reasoning combining intuitive, visual, and logical skills, making automated theorem proving of Olympiad-level geometry problems a milestone for human-level automated reasoning. AlphaGeometry, a neuro-symbolic model trained with 100M synthetic samples, solved 25 of 30 International Mathematical Olympiad (IMO) problems. It marked a major breakthrough compared to the reported baseline using Wu's method which solved only 10. Revisiting the IMO-AG-30 benchmark, we find that Wu's method is surprisingly strong and solves 15 problems, including some unsolved by other methods. This leads to two key findings: (i) Combining Wu's method with the classic synthetic methods of deductive databases and angle, ratio & distance chasing solves 21 out of 30 problems on a CPU-only laptop limited to 5 minutes per problem. Essentially, this classic method solves just 4 fewer problems than AlphaGeometry and establishes the first *fully symbolic* baseline that rivals the performance of IMO silver medalists. (ii) Wu's method even solves 2 of the 5 problems that AlphaGeometry failed on. Combining both, we set a new state-of-the-art for automated theorem proving on IMO-AG-30 solving 27 out of 30 problems - the first AI method which outperforms an IMO gold medalist. | null | # Wu’s Method Boosts Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry
**Anonymous Author(s)**
Affiliation
Address
```
email
```
**Abstract**
1 Proving geometric theorems constitutes a hallmark of reasoning, combining in
2 tuitive, visual, and logical skills. This makes automated theorem proving of
3 Olympiad-level geometry problems a milestone for human-level automated rea
4 soning. AlphaGeometry, a neuro-symbolic model trained with 100M synthetic
5 samples, solved 25 of 30 International Mathematical Olympiad (IMO) problems.
6 It marked a major breakthrough compared to the reported baseline using Wu’s
7 method which solved only 10. Revisiting the IMO-AG-30 benchmark, we find
8 that Wu’s method is surprisingly strong and solves 15 problems, including some
9 unsolved by other methods. This leads to two key findings: (i) Combining Wu’s
10 method with the classic synthetic methods of deductive databases and angle, ratio
11 & distance chasing solves 21 out of 30 problems on a CPU-only laptop limited
12 to 5 minutes per problem. Essentially, this classic method solves just 4 fewer
13 problems than AlphaGeometry and establishes the first fully symbolic baseline that
14 rivals the performance of IMO silver medalists. (ii) Wu’s method even solves 2
15 of the 5 problems that AlphaGeometry failed on. Combining both, we set a new
16 state-of-the-art for automated theorem proving on IMO-AG-30 solving 27 out of
17 30 problems – the first AI method which outperforms an IMO gold medalist.
18 **1** **Introduction**
19 Automated theorem proving has been the long-term goal of developing computer programs that can
20 match the conjecturing and proving capabilities demanded by mathematical research [10]. This field
21 has recognized solving Olympiad-level geometry problems as a key milestone [2, 3], marking a
22 frontier of computers to perform complex mathematical reasoning. The International Mathematical
23 Olympiad (IMO) started in 1959 and hosts the most reputed theorem-proving competitions in the
24 world that play an important role in identifying exceptional talents in problem solving. In fact, half of
25 all Fields medalists participated in the IMO in their youth, and matching top human performances at
26 the olympiad level has become a notable milestone of AI research.
27 Euclidean geometry is well suited to testing the reasoning skills of AI systems. It is finitely ax
28 iomatized [14] and many proof systems for Euclidean geometry have been proposed over the years
29 which are amenable to automated theorem proving techniques [4, 5]. Furthermore, proof search can
30 be guided by diagrammatic representations [12, 17], probabilistic verification [11, 21], and a vast
31 array of possible deductions using human-designed heuristics for properties like angles, areas, and
32 distances, methods affectionately called “trig bashing” and “bary bashing” [22, 23] by International
33 Mathematical Olympiad (IMO) participants. In addition, this domain is challenging — specific
34 proof systems need to be defined for specifying the problem, there is a shortage of data to train
35 from, and problems typically contain ambiguities around degenerate cases [27, 19, 16] that are
Submitted to 38th Conference on Neural Information Processing Systems (NeurIPS 2024). Do not distribute.
-----
36 complex to resolve and have led to the humorous folklore that “geometry problems never take care of
37 _degeneracies”._
38 Automated reasoning in geometry can be categorized into algebraic [26, 25, 15] and synthetic methods
39 [12, 6, 20]. Recent focus has been on synthetic methods like Deductive Databases (DD) [6] that
40 mimic human-like proving techniques and produce intelligible proofs by treating the problem of
41 theorem proving as a step-by-step search problem using a set of geometry axioms. For instance, DD
42 uses a fixed set of expert-curated geometric rules which are applied repeatedly to an initial geometric
43 configuration. This is performed until the system reaches a fixpoint and no new facts can be deduced
44 using the available rules. AlphaGeometry [24], a novel neuro-symbolic prover, represents a recent
45 breakthrough advancement in this area. It adds additional rules to the prior work of DD to perform
46 angle, ratio, and distance chasing (AR), and the resulting symbolic engine (DD+AR) is further
47 enhanced using constructions suggested by a large language model (DD+AR+LLM-Constructions)
48 trained on 100 million synthetic proofs. It has outclassed algebraic methods by solving 25 of 30 IMO
49 problems, whereas the reported baseline based on Wu’s method [26, 8] solved only ten [24].
50 Algebraic methods, such as Wu’s method and the Gröbner basis method [15], transform geometric
51 hypotheses into systems of polynomials to verify conclusions. They offer powerful procedures that
52 are proven to decide statements in broad classes of geometry [8, 15]. More precisely, Wu’s method
53 possesses the capability to address any problem for which the hypotheses and conclusion can be
54 expressed using algebraic equations [7], while simultaneously generating non-degeneracy conditions
55 automatically [27, 16]. This remarkable feature implies that Wu’s method can handle problems not
56 only in plane geometry but also in solid and higher-dimensional geometries, i.e. in areas where
57 synthetic methods can be used only with great effort and additional considerations [9].
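The flavor of this algebraic translation can be illustrated with a small SymPy sketch for the statement that the diagonals of a parallelogram bisect each other: hypotheses and conclusion are written as polynomial equations in the point coordinates and the conclusion is checked symbolically. This is only a toy illustration of the algebraic viewpoint under the non-degeneracy assumptions noted in the comments, not an implementation of Wu's characteristic-set method.

```python
import sympy as sp

# Free parameters: A = (0, 0), B = (u1, 0), C = (u2, u3); D = A + C - B closes the parallelogram.
u1, u2, u3, x3, x4 = sp.symbols("u1 u2 u3 x3 x4")

# Hypotheses as polynomials (= 0): E = (x3, x4) lies on diagonal AC and on diagonal BD.
h_on_ac = x3 * u3 - x4 * u2
h_on_bd = (x3 - u1) * u3 - x4 * ((u2 - u1) - u1)

# Solve for the dependent coordinates of E; this implicitly assumes the
# non-degeneracy conditions u1 != 0 and u3 != 0 (a non-degenerate parallelogram).
sol = sp.solve([h_on_ac, h_on_bd], [x3, x4], dict=True)[0]

# Conclusion polynomials: E is the midpoint of AC, i.e. 2*x3 - u2 = 0 and 2*x4 - u3 = 0.
print(sp.simplify((2 * x3 - u2).subs(sol)))  # -> 0
print(sp.simplify((2 * x4 - u3).subs(sol)))  # -> 0
```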
58 Rather than indiscriminately tackling arbitrary problem instances, mathematicians concentrate their
59 efforts on statements exhibiting specific properties that render them interesting, meaningful, and
60 tractable within the broader context of mathematical inquiry [13]. In this work, we put the capabilities
61 of Wu’s method to the test on such structured problems and re-evaluate Wu’s Method on the IMO-AG
62 30 benchmark introduced by Trinh et al. [24]. We find that it performs surprisingly strong, solving 15
63 problems, some of which are not solved by any of the other methods. This leads to two key findings:
64 - Combining Wu’s method (Wu) with the classic synthetic methods of deductive databases
65 (DD) and angle, ratio, and distance chasing (AR) solves 21 out of 30 problems by just using a
66 CPU-only laptop with a time limit of 5 minutes per problem. Essentially, this classic method
67 (Wu&DD+AR) solves just 4 problems less than AlphaGeometry and establishes the first
68 _fully symbolic baseline, strong enough to rival the performance of an IMO silver medalist._
69 - Wu’s method even solves 2 of the 5 problems that AlphaGeometry (AG) failed to solve. Thus,
70 by combining AlphaGeometry with Wu’s method (Wu&AG) we set a new state-of-the-art
71 for automated theorem proving on IMO-AG-30, solving 27 out of 30 problems, the first AI
72 method which outperforms an IMO gold medalist.
73 **2** **Experiments & Results**
74 **2.1** **Dataset**
75 In January 2024, IMO-AG-30 was introduced as a new benchmark by Trinh et al. [24] to demonstrate
76 the skill level of AlphaGeometry. IMO-AG-30 is based on geometry problems collected from the IMO
77 competitions since 2000 and adapted to a narrower, specialized environment for classical geometry
78 used in interactive graphical proof assistants, resulting in a test set of 30 classical geometry problems.
79 The number of problems solved in this benchmark is related to the number of problems solved on
80 average by IMO contestants. As indicated by the gray horizontal lines in Figure 1 (A), bronze, silver
81 and gold medalists on average solved 19.3, 22.9 and 25.9 of these problems, and 15.2 represents the
82 average over all contestants. The specific set of problems that have been collected for IMO-AG-30
83 are listed in the left column of the diagram in Figure 1 (B).
84 **2.2** **Experimental Details**
85 We evaluated performance using the IMO-AG-30 benchmark, with baselines and dataset all adopted
86 from Trinh et al. [24]. We only re-implemented Wu’s Method through the JGEX software [18, 28] by
-----
Figure 1: A) Performance across symbolic and LLM-Augmented methods on the IMO-AG-30
problem set, along with human performance. We set a strong baseline among symbolic systems at the
standard of a silver medalist and outperform a gold medalist by a margin of one problem. B) Diagram
showing how the different methods overlap or complement each other on the IMO-AG-30 problems.
87 manual translation of the IMO-AG-30 problems into JGEX-compatible format[1]. We also successfully
88 reproduced the DD+AR baseline, necessary for our final proposed method from the AlphaGeometry
89 codebase. We manually verified that the hypothesis and conclusion equations generated by JGEX for
90 several problems translated by us were indeed correct.
91 **2.3** **Results and Analysis**
92 Our findings are displayed in Figure 1 in combination with previous results from [24]. Figure 1
93 (A) compares the number of problems solved, and (B) shows which problems are solved by which
94 method to visualize how the different methods overlap or complement each other. In Figure 1 (A),
95 the performance levels of IMO contestants are indicated by gray horizontal lines, showing gold,
96 silver, bronze, average, and honorable mention level. The performance levels of synthetic symbolic
97 methods are displayed with blue bars and of LLM-augmented neurosymbolic methods are shown with
98 green bars. Our own results obtained with Wu’s method fall into the category of algebraic synthetic
99 methods shown with orange bars. All results for synthetic symbolic methods (blue) or neurosymbolic
100 LLM-augmented methods (green) are adopted from Trinh et al. [24].
101 Our combination of Wu’s method with DD+AR sets a new symbolic baseline (Wu&DD+AR) that
102 outperforms all traditional methods by a margin of 6 problems. It solves 21 of the IMO-AG-30
103 problems, matching the level of AlphaGeometry without fine-tuning (FT-9M only) shown in the
104 Appendix (Figure 2). Wu’s method achieves this performance with remarkably low computational
105 requirements. On a laptop equipped with an AMD Ryzen 7 5800H processor and 16 GB of RAM,
106 we were able to solve 14 out of 15 problems within 5 seconds. One problem (2015 P4) required 3
107 minutes. In our experiments, Wu’s method either solves problems almost immediately or the laptop
108 runs out of memory within 5 minutes. Remarkably, two of the fifteen problems we were able to
109 solve with Wu’s method (2021 P3, 2008 P1B) were among the five problems that were too difficult
110 to solve for AlphaGeometry. Thus, by simple ensemble combination between Wu’s method and
111 AlphaGeometry, we obtain the new state-of-the-art solving 27 out of 30 problems on the IMO-AG-30
112 benchmark as visualized by the green/orange bar (Wu&AG) Figure 1.
1However, 4 out of 30 problems were untranslatable due to lack of appropriate constructions within the JGEX
framework, hence our reported result is out of 26 problems.
-----
113 **3** **Conclusion**
114 Overall, our note highlights the potential of algebraic methods in automated geometric reasoning
115 for solving International Mathematical Olympiad (IMO) geometry problems[2], raising the number
116 of problems solved with Wu’s method on IMO-AG-30 from ten to fifteen. Among those fifteen
117 problems are several that are difficult for synthetic methods and their LLM-augmented versions that
118 are currently most popular.
119 To the best of our knowledge, our symbolic baseline is the only symbolic baseline performing
120 above the average IMO contestant and approaching the performance of an IMO silver medalist on
121 geometry. Similarly, our combination of AlphaGeometry with Wu’s method is the first AI system
122 to outperform a human gold-medalist at IMO geometry problems. This achievement illustrates the
123 complementarity of algebraic and synthetic methods in this area (see Figure 1 B). The usefulness
124 of algebraic approaches is most obvious from the two problems 2008 P1B and 2021 P3 which are
125 currently solved by no automatic theorem prover other than Wu’s method.
126 While algebraic methods have always been recognized for their theoretical guarantees, their usefulness
127 has been previously questioned for being too slow and not human interpretable. Our observations
128 indicate that on several problems Wu’s Method performs more efficiently than previously recognized,
129 and we advocate against dismissing it solely on the basis of its inability to produce human-readable
130 proofs.
131 **4** **Limitations and Future Directions**
132 Despite the theoretical promise, our results are a work-in-progress, currently hindered by the scarce
133 availability of existing implementations, each with their significant inadequacies including limited
134 constructions and suboptimal performance. We believe it might be feasible to outperform AlphaGeom
135 etry’s proving capabilities through purely traditional methods and hope our note encourages improving
136 current software for classical computational approaches in this area. Exploring improvements in the
137 capabilities of other symbolic methods, including synthetic ones, in addition to extending the scope
138 of geometry-specific languages and proof systems might be exciting directions to investigate.
139 Our exploration highlighting the complementary strengths of synthetic methods, which mimic human
140 reasoning processes, and more abstract algebraic methods is motivated by the idea that the similarity
141 to human reasoning and the generality of intelligence are distinct concepts, each with its own merits
142 and applications. We believe that the strength of algebraic methods goes beyond solving Olympiad
143 geometry problems, promising significant advancements in areas as varied as compiler verification
144 and beyond. This potential underscores our belief in the necessity to broaden the scope of challenges
145 addressed by automated theorem proving. The development of future benchmarks should strive for
146 diversity and potentially open-ended testing. Embracing a wider array of problems will likely bring
147 new insights on the usefulness, limitations, and interplay of neural and symbolic methods for general
148 reasoning skills.
2Peter Novotný similarly proved 11 of the 17 IMO Geometry problems from 1984–2003 using the Gröbner
[basis method, although only after manually adding non-degeneracy conditions [1] as referenced here.](https://leanprover-community.github.io/archive/stream/208328-IMO-grand-challenge/topic/geometry.20problems.html)
-----
149 **References**
150 [[1] Peter Novotný’s Masters Thesis. https://skmo.sk/cvika/ukazpdf.php?pdf=diplomka.pdf.](https://skmo.sk/cvika/ukazpdf.php?pdf=diplomka.pdf)
151 [[2] IMO Grand Challenge. https://imo-grand-challenge.github.io/, 2019. Online; accessed 29](https://imo-grand-challenge.github.io/)
152 May 2024.
153 [[3] AIMO Prize. https://aimoprize.com/, 2023. Online; accessed 29 May 2024.](https://aimoprize.com/)
154 [4] Jeremy Avigad, Edward Dean, and John Mumma. A formal system for euclid’s elements. The Review of
155 _Symbolic Logic, 2009._
156 [5] Michael Beeson, Pierre Boutry, Gabriel Braun, Charly Gries, and Julien Narboux. Geocoq. 2018.
157 [6] S.C. Chou, X.S. Gao, and J.Z. Zhang. A deductive database approach to automated geometry theorem
158 proving and discovering. Journal of Automated Reasoning, 2000. doi: 10.1023/A:1006171315513.
159 [7] Shang-Ching Chou. Proving elementary geometry theorems using Wu’s algorithm. In Woodrow Wilson
160 Bledsoe and Donald W Loveland, editors, Automated Theorem Proving: After 25 Years, volume 89.
161 American Mathematical Soc., 1984.
162 [8] Shang-Ching Chou. An introduction to Wu’s method for mechanical theorem proving in geometry. Journal
163 _of Automated Reasoning, 1988._
164 [9] Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. Automated production of traditional proofs
165 in solid geometry. Journal of Automated Reasoning, 14(2):257–291, 1995.
166 [10] Nicolaas Govert de Bruijn. AUTOMATH, a language for mathematics. 1983.
167 [11] Giuseppa Carr‘a Ferro, Giovanni Gallo, and Rosario Gennaro. Probabilistic verification of elementary
168 geometry statements. In Automated Deduction in Geometry, 1997. doi: 10.1007/BFb0022721.
169 [12] H. Gelernter. Realization of a geometry-theorem proving machine. Computers & Thought, 1995. doi:
170 10.5555/207644.207647.
171 [[13] W. T. Gowers.](https://drive.google.com/file/d/1-FFa6nMVg18m1zPtoAQrFalwpx2YaGK4/view) How can it be feasible to find proofs? `https://drive.google.com/file/d/`
172 `1-FFa6nMVg18m1zPtoAQrFalwpx2YaGK4/view. Online; accessed 7 April 2024.`
173 [14] Thomas Little Heath et al. The thirteen books of Euclid’s Elements. 1956.
174 [15] Deepak Kapur. Using Gröbner bases to reason about geometry problems. Journal of Symbolic Computation,
175 1986.
176 [16] Deepak Kapur. A refutational approach to geometry theorem proving. Artificial Intelligence, 1988.
177 [17] Michelle Y. Kim. Visual reasoning in geometry theorem proving. In Proceedings of the Eleventh
178 _International Joint Conference on Artificial Intelligence, 1989._
179 [18] Zoltán Kovács and Alexander Vujic. Open source prover in the attic. arXiv preprint arXiv:2401.13702,
180 2024.
181 [19] Zoltán Kovács, Tomas Recio, Luis F Tabera, and M Pilar Vélez. Dealing with degeneracies in automated
182 theorem proving in geometry. Mathematics, 2021.
183 [20] Arthur J Nevins. Plane geometry theorem proving using forward chaining. Artificial Intelligence, 6(1):
184 1–23, 1975.
185 [21] Jürgen Richter-Gebert and Ulrich Kortenkamp. The Interactive Geometry Software Cinderella. 1999.
186 [22] Max Schindler and Evan Chen. Barycentric coordinates in olympiad geometry. Olympiad Articles, 2012.
187 [[23] Justin Stevens. Coordinate and trigonometry bashing. http://services.artofproblemsolving.com/](http://services.artofproblemsolving.com/download.php?id=YXR0YWNobWVudHMvYi9kLzRmMTA5OWJhNmI1MTg2YzM2ODdkZTVhYTJjMGU0NjdmYmViNGRk&rn=Q29vcmRpbmF0ZSBhbmQgVHJpZ29ub21ldHJ5IEJhc2hpbmcucGRm)
188 `download.php?id=YXR0YWNobWVudHMvYi9kLzRmMTA5OWJhNmI1MTg2YzM2ODdkZTVhYTJjMGU0NjdmYmViNGRk&`
189 `rn=Q29vcmRpbmF0ZSBhbmQgVHJpZ29ub21ldHJ5IEJhc2hpbmcucGRm. Accessed: 4 April 2024.`
190 [24] Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry without
191 human demonstrations. Nature, 2024.
192 [25] Dongming Wang. Reasoning about geometric problems using an elimination method. Automated practical
193 _reasoning: Algebraic approaches, pages 147–185, 1995._
194 [26] Wu Wen-Tsün. On the decision problem and the mechanization of theorem proving in elementary geometry.
195 _Scientia Sinica, 1978._
196 [27] Wenjun Wu. On zeros of algebraic equations–an application of ritt principle. Kexue Tongbao, 1986.
197 [28] Z. Ye, S.C. Chou, and X.S. Gao. An introduction to java geometry expert – (extended abstract). 2011. doi:
198 10.1007/978-3-642-21046-4_10.
199 **A** **Detailed Comparisons**
200 We compare with all human and automated methods on the IMO-AG-30 benchmark [24] in Figure
201 2. Our evaluation includes GPT4, Full-Angle method (FA), Gröbner Basis (Gröbner), Deductive
202 Databases (DD), Deductive Databases combined with Algebraic Rules and enhancements with GPT-4
203 for construction suggestions (DD+AR+GPT4). Additionally, we examined different configurations
204 of the AlphaGeometry model: one only pretrained on 100 million samples (PT-100M) and the other only
205 finetuned on 9 million constructions (FT-9M). Note that we construct the Wu&DD+AR baseline by
206 simply running both Wu's method and DD+AR in parallel and stopping when either method solves the
207 problem. Similarly, we construct the Wu&AlphaGeometry baseline. We see that our Wu&DD+AR
208 baseline matches the AG (FT-9M) baseline, while Wu's method alone matches the best DD+AR+GPT4
209 algorithm.
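For clarity, the following is a minimal sketch (ours, not the authors' code) of how such a union baseline can be run: both provers are launched on the same problem in parallel, and the problem counts as solved as soon as either returns a proof. Here `run_wu` and `run_dd_ar` are hypothetical callables standing in for invocations of Wu's method and DD+AR through JGEX, each returning True on success.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from concurrent.futures import TimeoutError as FuturesTimeout

def union_baseline(problem, run_wu, run_dd_ar, timeout_s=3600):
    """Return True if either prover solves `problem` within the time limit."""
    pool = ThreadPoolExecutor(max_workers=2)
    futures = [pool.submit(run_wu, problem), pool.submit(run_dd_ar, problem)]
    solved = False
    try:
        for fut in as_completed(futures, timeout=timeout_s):
            if fut.result():      # stop as soon as one method reports success
                solved = True
                break
    except FuturesTimeout:
        pass                      # neither prover finished within the time limit
    pool.shutdown(wait=False)     # do not block on the slower prover
    return solved
```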
Figure 2: Extended version of Figure 1A: Performance across symbolic and LLM-Augmented
methods on the IMO-AG-30 problem set, along with human performance. The performance of
additional models adopted from Table 1 in [24] are shown on the right.
210 **B** **Illustrations: 2008 P1B and 2021 P3**
211 We provide illustrations of the solutions produced by Wu's method for the two problems that AlphaGeometry
212 could not solve, allowing additional scrutiny without having to reproduce them in the JGEX solver.
Figure 3: Problem 2008-P1B JGEX (Above) and 2021-P3 (Below) with Input (Left) and Gener**ated Solution (Right) for Wu’s method. This illustration can be reproduced by opening the .gex**
files provided alongside on the HuggingFace repository and pressing Run.
213 **NeurIPS Paper Checklist**
214 1. Claims
215 Question: Do the main claims made in the abstract and introduction accurately reflect the
216 paper’s contributions and scope?
217 Answer: [Yes]
218 Justification: The abstract and/or introduction clearly state the claims made, including the
219 contributions made in the paper along with important assumptions and limitations.
220 Guidelines:
221 - The answer NA means that the abstract and introduction do not include the claims
222 made in the paper.
223 - The abstract and/or introduction should clearly state the claims made, including the
224 contributions made in the paper and important assumptions and limitations. A No or
225 NA answer to this question will not be perceived well by the reviewers.
226 - The claims made should match theoretical and experimental results, and reflect how
227 much the results can be expected to generalize to other settings.
228 - It is fine to include aspirational goals as motivation as long as it is clear that these goals
229 are not attained by the paper.
230 2. Limitations
231 Question: Does the paper discuss the limitations of the work performed by the authors?
232 Answer: [Yes]
233 Justification: We acknowledge limitations concerning a non-extensive set of constructions
234 to support the algebraic methods, underdeveloped software tools, and challenges regarding
235 human interpretability, among other details in Sections 1 and 4.
236 Guidelines:
237 - The answer NA means that the paper has no limitation while the answer No means that
238 the paper has limitations, but those are not discussed in the paper.
239 - The authors are encouraged to create a separate "Limitations" section in their paper.
240 - The paper should point out any strong assumptions and how robust the results are to
241 violations of these assumptions (e.g., independence assumptions, noiseless settings,
242 model well-specification, asymptotic approximations only holding locally). The authors
243 should reflect on how these assumptions might be violated in practice and what the
244 implications would be.
245 - The authors should reflect on the scope of the claims made, e.g., if the approach was
246 only tested on a few datasets or with a few runs. In general, empirical results often
247 depend on implicit assumptions, which should be articulated.
248 - The authors should reflect on the factors that influence the performance of the approach.
249 For example, a facial recognition algorithm may perform poorly when image resolution
250 is low or images are taken in low lighting. Or a speech-to-text system might not be
251 used reliably to provide closed captions for online lectures because it fails to handle
252 technical jargon.
253 - The authors should discuss the computational efficiency of the proposed algorithms
254 and how they scale with dataset size.
255 - If applicable, the authors should discuss possible limitations of their approach to
256 address problems of privacy and fairness.
257 - While the authors might fear that complete honesty about limitations might be used by
258 reviewers as grounds for rejection, a worse outcome might be that reviewers discover
259 limitations that aren’t acknowledged in the paper. The authors should use their best
260 judgment and recognize that individual actions in favor of transparency play an impor
261 tant role in developing norms that preserve the integrity of the community. Reviewers
262 will be specifically instructed to not penalize honesty concerning limitations.
263 3. Theory Assumptions and Proofs
264 Question: For each theoretical result, does the paper provide the full set of assumptions and
265 a complete (and correct) proof?
266 Answer: [NA]
267 Justification: The theoretical underpinnings of Euclidean geometry [14, 4], along with the
268 synthetic [12, 6, 20] and algebraic methods [26, 25, 15] of automated theorem proving
269 discussed in our work are covered extensively in past literature available in our references.
270 We do not claim any novel theoretical contributions.
271 Guidelines:
272 - The answer NA means that the paper does not include theoretical results.
273 - All the theorems, formulas, and proofs in the paper should be numbered and cross
274 referenced.
275 - All assumptions should be clearly stated or referenced in the statement of any theorems.
276 - The proofs can either appear in the main paper or the supplemental material, but if
277 they appear in the supplemental material, the authors are encouraged to provide a short
278 proof sketch to provide intuition.
279 - Inversely, any informal proof provided in the core of the paper should be complemented
280 by formal proofs provided in appendix or supplemental material.
281 - Theorems and Lemmas that the proof relies upon should be properly referenced.
282 4. Experimental Result Reproducibility
283 Question: Does the paper fully disclose all the information needed to reproduce the main ex
284 perimental results of the paper to the extent that it affects the main claims and/or conclusions
285 of the paper (regardless of whether the code and data are provided or not)?
286 Answer: [Yes]
287 Justification: We provide detailed descriptions of the dataset (derived from IMO-AG-30),
288 software (e.g., JGEX), and experimental setup, including the hardware and the methods used
289 for the tests on the benchmark. We also release the dataset containing manual translations
290 for each problem. This should be sufficient for reproducing the results.
291 Guidelines:
292 - The answer NA means that the paper does not include experiments.
293 - If the paper includes experiments, a No answer to this question will not be perceived
294 well by the reviewers: Making the paper reproducible is important, regardless of
295 whether the code and data are provided or not.
296 - If the contribution is a dataset and/or model, the authors should describe the steps taken
297 to make their results reproducible or verifiable.
298 - Depending on the contribution, reproducibility can be accomplished in various ways.
299 For example, if the contribution is a novel architecture, describing the architecture fully
300 might suffice, or if the contribution is a specific model and empirical evaluation, it may
301 be necessary to either make it possible for others to replicate the model with the same
302 dataset, or provide access to the model. In general. releasing code and data is often
303 one good way to accomplish this, but reproducibility can also be provided via detailed
304 instructions for how to replicate the results, access to a hosted model (e.g., in the case
305 of a large language model), releasing of a model checkpoint, or other means that are
306 appropriate to the research performed.
307 - While NeurIPS does not require releasing code, the conference does require all submis
308 sions to provide some reasonable avenue for reproducibility, which may depend on the
309 nature of the contribution. For example
310 (a) If the contribution is primarily a new algorithm, the paper should make it clear how
311 to reproduce that algorithm.
312 (b) If the contribution is primarily a new model architecture, the paper should describe
313 the architecture clearly and fully.
314 (c) If the contribution is a new model (e.g., a large language model), then there should
315 either be a way to access this model for reproducing the results or a way to reproduce
316 the model (e.g., with an open-source dataset or instructions for how to construct
317 the dataset).
318 (d) We recognize that reproducibility may be tricky in some cases, in which case
319 authors are welcome to describe the particular way they provide for reproducibility.
320 In the case of closed-source models, it may be that access to the model is limited in
321 some way (e.g., to registered users), but it should be possible for other researchers
322 to have some path to reproducing or verifying the results.
323 5. Open access to data and code
324 Question: Does the paper provide open access to the data and code, with sufficient instruc
325 tions to faithfully reproduce the main experimental results, as described in supplemental
326 material?
327 Answer: [Yes]
328 Justification: We provide access to the translated dataset of IMO problems, links to relevant
329 software, list of methods used, and the exact steps for performing the experiments.
330 Guidelines:
331 - The answer NA means that paper does not include experiments requiring code.
332 [• Please see the NeurIPS code and data submission guidelines (https://nips.cc/](https://nips.cc/public/guides/CodeSubmissionPolicy)
333 `public/guides/CodeSubmissionPolicy) for more details.`
334 - While we encourage the release of code and data, we understand that this might not be
335 possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not
336 including code, unless this is central to the contribution (e.g., for a new open-source
337 benchmark).
338 - The instructions should contain the exact command and environment needed to run to
339 [reproduce the results. See the NeurIPS code and data submission guidelines (https:](https://nips.cc/public/guides/CodeSubmissionPolicy)
340 `//nips.cc/public/guides/CodeSubmissionPolicy) for more details.`
341 - The authors should provide instructions on data access and preparation, including how
342 to access the raw data, preprocessed data, intermediate data, and generated data, etc.
343 - The authors should provide scripts to reproduce all experimental results for the new
344 proposed method and baselines. If only a subset of experiments are reproducible, they
345 should state which ones are omitted from the script and why.
346 - At submission time, to preserve anonymity, the authors should release anonymized
347 versions (if applicable).
348 - Providing as much information as possible in supplemental material (appended to the
349 paper) is recommended, but including URLs to data and code is permitted.
350 6. Experimental Setting/Details
351 Question: Does the paper specify all the training and test details (e.g., data splits, hyper
352 parameters, how they were chosen, type of optimizer, etc.) necessary to understand the
353 results?
354 Answer: [Yes]
355 Justification: In our dataset, we provide the precise translations, references to their corre
356 sponding IMO problems, and a link to the JGEX software that can parse these translations
357 and execute the relevant methods. We also provide the time limits and computational
358 constraints applied during testing (Section 2).
359 Guidelines:
360 - The answer NA means that the paper does not include experiments.
361 - The experimental setting should be presented in the core of the paper to a level of detail
362 that is necessary to appreciate the results and make sense of them.
363 - The full details can be provided either with the code, in appendix, or as supplemental
364 material.
365 7. Experiment Statistical Significance
366 Question: Does the paper report error bars suitably and correctly defined or other appropriate
367 information about the statistical significance of the experiments?
368 Answer: [Yes]
369 Justification: Since our evaluations are based on discrete problem-solving results on IMO
370 problems using fixed translations and deterministic software tools, we highlight that our
371 experiments are exactly reproducible. As a result, tests of statistical significance or error
372 bars are not applicable.
373 Guidelines:
374 - The answer NA means that the paper does not include experiments.
375 - The authors should answer "Yes" if the results are accompanied by error bars, confi
376 dence intervals, or statistical significance tests, at least for the experiments that support
377 the main claims of the paper.
378 - The factors of variability that the error bars are capturing should be clearly stated (for
379 example, train/test split, initialization, random drawing of some parameter, or overall
380 run with given experimental conditions).
381 - The method for calculating the error bars should be explained (closed form formula,
382 call to a library function, bootstrap, etc.)
383 - The assumptions made should be given (e.g., Normally distributed errors).
384 - It should be clear whether the error bar is the standard deviation or the standard error
385 of the mean.
386 - It is OK to report 1-sigma error bars, but one should state it. The authors should
387 preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis
388 of Normality of errors is not verified.
389 - For asymmetric distributions, the authors should be careful not to show in tables or
390 figures symmetric error bars that would yield results that are out of range (e.g. negative
391 error rates).
392 - If error bars are reported in tables or plots, The authors should explain in the text how
393 they were calculated and reference the corresponding figures or tables in the text.
394 8. Experiments Compute Resources
395 Question: For each experiment, does the paper provide sufficient information on the com
396 puter resources (type of compute workers, memory, time of execution) needed to reproduce
397 the experiments?
398 Answer: [Yes]
399 Justification: We describe the computing resources used, specifically a CPU-only laptop
400 with an AMD Ryzen 7 5800H processor and 16 GB RAM, as well as the time constraints
401 per problem (Section 2).
402 Guidelines:
403 - The answer NA means that the paper does not include experiments.
404 - The paper should indicate the type of compute workers CPU or GPU, internal cluster,
405 or cloud provider, including relevant memory and storage.
406 - The paper should provide the amount of compute required for each of the individual
407 experimental runs as well as estimate the total compute.
408 - The paper should disclose whether the full research project required more compute
409 than the experiments reported in the paper (e.g., preliminary or failed experiments that
410 didn’t make it into the paper).
411 9. Code Of Ethics
412 Question: Does the research conducted in the paper conform, in every respect, with the
413 [NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?](https://neurips.cc/public/EthicsGuidelines)
414 Answer: [Yes]
415 Justification: We have read the ethics guidelines and confirm that we do not use human
416 subjects, respect the licensed use of dataset (IMO-AG-30), and do not include any personally
417 identifiable information in the dataset of translated IMO problems introduced by us.
418 Guidelines:
419 - The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
420 - If the authors answer No, they should explain the special circumstances that require a
421 deviation from the Code of Ethics.
422 - The authors should make sure to preserve anonymity (e.g., if there is a special consid
423 eration due to laws or regulations in their jurisdiction).
424 10. Broader Impacts
425 Question: Does the paper discuss both potential positive societal impacts and negative
426 societal impacts of the work performed?
427 Answer: [Yes]
428 Justification: We discuss the importance of advancing automated reasoning systems, which
429 could also lead to improvements formal verification systems and educational tools. We do
430 not foresee any negative societal impacts from this research.
431 Guidelines:
432 - The answer NA means that there is no societal impact of the work performed.
433 - If the authors answer NA or No, they should explain why their work has no societal
434 impact or why the paper does not address societal impact.
435 - Examples of negative societal impacts include potential malicious or unintended uses
436 (e.g., disinformation, generating fake profiles, surveillance), fairness considerations
437 (e.g., deployment of technologies that could make decisions that unfairly impact specific
438 groups), privacy considerations, and security considerations.
439 - The conference expects that many papers will be foundational research and not tied
440 to particular applications, let alone deployments. However, if there is a direct path to
441 any negative applications, the authors should point it out. For example, it is legitimate
442 to point out that an improvement in the quality of generative models could be used to
443 generate deepfakes for disinformation. On the other hand, it is not needed to point out
444 that a generic algorithm for optimizing neural networks could enable people to train
445 models that generate Deepfakes faster.
446 - The authors should consider possible harms that could arise when the technology is
447 being used as intended and functioning correctly, harms that could arise when the
448 technology is being used as intended but gives incorrect results, and harms following
449 from (intentional or unintentional) misuse of the technology.
450 - If there are negative societal impacts, the authors could also discuss possible mitigation
451 strategies (e.g., gated release of models, providing defenses in addition to attacks,
452 mechanisms for monitoring misuse, mechanisms to monitor how a system learns from
453 feedback over time, improving the efficiency and accessibility of ML).
454 11. Safeguards
455 Question: Does the paper describe safeguards that have been put in place for responsible
456 release of data or models that have a high risk for misuse (e.g., pretrained language models,
457 image generators, or scraped datasets)?
458 Answer: [NA]
459 Justification: The dataset of translated IMO problems released here does not pose high risks
460 for misuse.
461 Guidelines:
462 - The answer NA means that the paper poses no such risks.
463 - Released models that have a high risk for misuse or dual-use should be released with
464 necessary safeguards to allow for controlled use of the model, for example by requiring
465 that users adhere to usage guidelines or restrictions to access the model or implementing
466 safety filters.
467 - Datasets that have been scraped from the Internet could pose safety risks. The authors
468 should describe how they avoided releasing unsafe images.
469 - We recognize that providing effective safeguards is challenging, and many papers do
470 not require this, but we encourage authors to take this into account and make a best
471 faith effort.
472 12. Licenses for existing assets
473 Question: Are the creators or original owners of assets (e.g., code, data, models), used in
474 the paper, properly credited and are the license and terms of use explicitly mentioned and
475 properly respected?
476 Answer: [Yes]
477 Justification: We credit the creators of the JGEX software used to implement Wu’s method,
478 the IMO-AG-30 dataset, as well as other cited methods and datasets, as referenced throughout
479 the paper.
480 Guidelines:
481 - The answer NA means that the paper does not use existing assets.
482 - The authors should cite the original paper that produced the code package or dataset.
483 - The authors should state which version of the asset is used and, if possible, include a
484 URL.
485 - The name of the license (e.g., CC-BY 4.0) should be included for each asset.
486 - For scraped data from a particular source (e.g., website), the copyright and terms of
487 service of that source should be provided.
488 - If assets are released, the license, copyright information, and terms of use in the
489 package should be provided. For popular datasets, paperswithcode.com/datasets
490 has curated licenses for some datasets. Their licensing guide can help determine the
491 license of a dataset.
492 - For existing datasets that are re-packaged, both the original license and the license of
493 the derived asset (if it has changed) should be provided.
494 - If this information is not available online, the authors are encouraged to reach out to
495 the asset’s creators.
496 13. New Assets
497 Question: Are new assets introduced in the paper well documented and is the documentation
498 provided alongside the assets?
499 Answer: [Yes]
500 Justification: The dataset of translated IMO problems introduced in the paper has extensive
501 documentation, including details on reproducing our results using relevant software.
502 Guidelines:
503 - The answer NA means that the paper does not release new assets.
504 - Researchers should communicate the details of the dataset/code/model as part of their
505 submissions via structured templates. This includes details about training, license,
506 limitations, etc.
507 - The paper should discuss whether and how consent was obtained from people whose
508 asset is used.
509 - At submission time, remember to anonymize your assets (if applicable). You can either
510 create an anonymized URL or include an anonymized zip file.
511 14. Crowdsourcing and Research with Human Subjects
512 Question: For crowdsourcing experiments and research with human subjects, does the paper
513 include the full text of instructions given to participants and screenshots, if applicable, as
514 well as details about compensation (if any)?
515 Answer: [NA]
516 Justification: The paper does not involve crowdsourcing or research with human subjects.
517 Guidelines:
518 - The answer NA means that the paper does not involve crowdsourcing nor research with
519 human subjects.
520 - Including this information in the supplemental material is fine, but if the main contribu
521 tion of the paper involves human subjects, then as much detail as possible should be
522 included in the main paper.
523 - According to the NeurIPS Code of Ethics, workers involved in data collection, curation,
524 or other labor should be paid at least the minimum wage in the country of the data
525 collector.
526 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human
527 **Subjects**
528 Question: Does the paper describe potential risks incurred by study participants, whether
529 such risks were disclosed to the subjects, and whether Institutional Review Board (IRB)
530 approvals (or an equivalent approval/review based on the requirements of your country or
531 institution) were obtained?
532 Answer: [NA]
533 Justification: The paper does not involve research with human subjects, so IRB approval is
534 not applicable.
535 Guidelines:
536 - The answer NA means that the paper does not involve crowdsourcing nor research with
537 human subjects.
538 - Depending on the country in which research is conducted, IRB approval (or equivalent)
539 may be required for any human subjects research. If you obtained IRB approval, you
540 should clearly state this in the paper.
541 - We recognize that the procedures for this may vary significantly between institutions
542 and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the
543 guidelines for their institution.
544 - For initial submissions, do not include any information that would break anonymity (if
545 applicable), such as the institution conducting the review.
# ZIP-FIT: EMBEDDING-FREE DATA SELECTION VIA COMPRESSION-BASED ALIGNMENT
**Anonymous authors**
Paper under double-blind review
ABSTRACT
Selecting high-quality, aligned fine-tuning data is crucial for improving the downstream performance of language models (LMs). Automatic data selection in these
scenarios is challenging and often inefficient due to existing approaches relying on
neural embeddings or limited n-gram representations to identify aligned datasets.
In addition, traditional data selection methods often focus on increasing the size
of the training data, making them computationally expensive to use and data inefficient. In this work, we introduce ZIP-FIT, an embedding-free, data-efficient
selection framework that leverages gzip compression to measure the alignment
between training data and target domains. We show that ZIP-FIT significantly
outperforms two leading baselines, DSIR and D4, in selecting high-quality data
for ProofNet, a formal mathematical dataset, and HumanEval, a benchmark for
code generation tasks. Specifically, ZIP-FIT demonstrates a computational speed
advantage, performing data selection up to 65.8% faster than DSIR and achieving
its lowest cross-entropy loss up to 85.1% faster. Our findings suggest that ZIP-FIT
offers a scalable and adaptable approach for data selection, enabling more precise
fine-tuning for code generation domains. By demonstrating that embedding-free
data selection can outperform established methods like DSIR and D4, our research
opens new avenues for optimizing model training, thereby enhancing the effectiveness and efficiency of machine learning workflows.
1 INTRODUCTION
The choice of training data is crucial for the performance of language models (LMs) in both general-purpose and domain-specific applications (Brown et al., 2020; Gururangan et al., 2020; Hoffmann
et al., 2022). Much of the current research has focused on curating diverse pre-training datasets to
enhance model performance across a wide range of tasks (Xie et al., 2023b; Tirumala et al., 2023;
Abbas et al., 2023; Xie et al., 2023a). Although these methods have been demonstrated to work
well for general pre-training, they fall short in domain-specific fine-tuning, where data relevance is
crucial.
A key question emerges: How can we effectively select fine-tuning data for domain-specific tasks?
One approach is to train binary classifiers to identify relevant data. For example, DeepSeekMath
(Shao et al., 2024), a mathematical language model, utilized OpenWebMath (Paster et al., 2023),
a compilation of high-quality mathematical texts, to train a FastText classifier (Bojanowski et al.,
2017) to retrieve analogous texts from the Web. Although effective, this method relies on the availability of large and well-annotated data sets, something that is often missing in niche tasks where
relevant data is scarce.
Another common approach is to use neural embeddings to measure the cosine similarity between
data points and a reference corpus (Xie et al., 2023c). Although this improves relevance, embedding-based methods are computationally expensive and sensitive to the choice of embedding space
(Muennighoff, 2022). Alternatively, DSIR (Data Selection with Importance Resampling) (Xie et al.,
2023b) utilizes unigrams and bigrams to select data points without the need for pre-trained embeddings, aiming to match the n-gram distributions of the target data. While effective for capturing
immediate textual correlations, this approach may fall short in addressing the structural complexities required for tasks like code generation or Autoformalization, where translating natural language
into formal languages such as Lean4 (Moura et al., 2015) demands a deeper understanding of context.
Figure 1: ZIP-FIT selects aligned source data to a target domain for efficient fine-tuning of
**Large Language Models. It ranks all samples in the source dataset with a target/proxy task (e.g.,**
validation set) using gzip compression alignment (ZIP-FIT), selects the top k examples with highest
ZIP-FIT alignment scores, and fine-tunes the model for improved domain-specific performance.
This limitation suggests exploring alternative data selection strategies suitable for low-resource
scenarios.
To address these challenges, we propose ZIP-FIT, a novel data selection framework that leverages
gzip’s compression ability. Existing research suggests that language modeling and data compression
are fundamentally equivalent tasks (Delétang et al., 2024), and the intelligence of large language
models (LLMs) is closely related to their ability to compress external corpora (Huang et al., 2024).
This insight suggests that compression algorithms can encode information in ways similar to neural
networks. For example, Jiang et al. (2023) found that using normalized compression distance for text
classification outperformed traditional neural embeddings. Inspired by this, ZIP-FIT selects training
data aligned with a target dataset using a compression-based alignment score, providing a lightweight
and embedding-free method for selecting high-quality data.
We evaluated ZIP-FIT across two distinct domains: Autoformalization and Python code generation.
We show that ZIP-FIT outperforms existing data-selection frameworks and consistently improves
model performance, particularly when evaluating cross-entropy test loss for the target domains.
Our experiments reveal that smaller, well-aligned datasets, chosen through ZIP-FIT, lead to faster
convergence and superior performance compared to using larger, less-aligned datasets, underscoring
the importance of aligned data over quantity. Crucially, ZIP-FIT achieves these results with minimal
computational overhead, eliminating the need for costly neural embeddings or resource-intensive
hardware, such as GPUs.
Our contributions are as follows:
1. The introduction of ZIP-FIT, an embedding-free data selection method using gzip compression.
2. Empirical Performance: We demonstrate that ZIP-FIT consistently outperforms leading
baselines (DSIR, D4) in Autoformalization and Python code generation, achieving up to
85.1% faster convergence and lower cross-entropy loss.
3. Efficiency: ZIP-FIT is computationally efficient, running up to 65.8% faster than DSIR.
This makes it scalable for low-resource environments without compromising performance.
2 ZIP-FIT: AN EMBEDDING-FREE DATA SELECTION ALGORITHM VIA
COMPRESSION-BASED ALIGNMENT FOR LM FINE-TUNING
2.1 BACKGROUND: HOW GZIP COMPRESSION WORKS
gzip uses two main techniques for compression: LZ77 and Huffman coding. Together, these methods compress sequences by recognizing and exploiting repeated patterns within the text.
**LZ77 Compression:** LZ77 works by identifying repeated substrings and replacing them with references to their earlier occurrences.
**Huffman Coding:** After LZ77, Huffman coding further compresses the data by assigning shorter
binary codes to more frequent symbols, optimizing the overall length of the compressed text.
The detailed mathematical description of gzip’s compression process, including the formulation of
the Normalized Compression Distance (NCD), is provided in the Appendix for reference.
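To make this redundancy exploitation concrete, the small Python sketch below (ours, not part of the original paper) compares the gzip-compressed size of a highly repetitive string with that of roughly incompressible text of similar length; the helper `C` is the compressed-size function used throughout ZIP-FIT.

```python
import gzip
import os

def C(text: str) -> int:
    """gzip-compressed size (in bytes) of a UTF-8 encoded string."""
    return len(gzip.compress(text.encode("utf-8")))

# A repetitive sequence: LZ77 replaces later copies with back-references,
# and Huffman coding shortens the most frequent symbols.
repetitive = "theorem add_comm (a b : Nat) : a + b = b + a := by simp\n" * 50

# Roughly incompressible text of comparable length (hex of random bytes).
noise = os.urandom(len(repetitive) // 2).hex()

print(len(repetitive), C(repetitive))  # large raw size, small compressed size
print(len(noise), C(noise))            # similar raw size, much larger compressed size
```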
2.2 ZIP-FIT ALGORITHM
**Setup:** Given a set of examples $x'_1, x'_2, \ldots, x'_n$ from a target domain $p$ and a large source dataset $x_1, x_2, \ldots, x_N$ drawn from a distribution $q$, our goal is to select a subset of $k$ examples from the raw dataset ($k \ll N$) that align closely with the target domain. The challenge lies in identifying which samples from the large dataset $q$ best match the domain-specific characteristics of $p$, especially in low-resource environments (e.g., no GPUs). ZIP-FIT achieves this by using gzip compression as an alignment metric to capture patterns and redundancies in textual sequences.
To address the challenge of selecting high-quality data, we propose the ZIP-FIT algorithm.
**Algorithm 1 ZIP-FIT data selection algorithm**
1: **Input:** A large dataset $D = \{x_1, x_2, \ldots, x_N\}$ from distribution $q$, target examples $\{x'_1, x'_2, \ldots, x'_n\}$ from domain $p$.
2: **Output:** A subset of $k$ examples from $D$ that closely align with the target domain $p$.
3: **for** $i = 1$ **to** $N$ **do**
4: Compute alignment for each sample $x_i \in D$ with each target example $x'_j \in \{x'_1, x'_2, \ldots, x'_n\}$ using the Normalized Compression Distance:
5: $$\mathrm{NCD}(x_i, x'_j) = \frac{C(x_i \oplus x'_j) - \min(C(x_i), C(x'_j))}{\max(C(x_i), C(x'_j))}$$
6: where $C(x)$ represents the compressed size of sequence $x$ and $\oplus$ denotes concatenation.
7: Compute the average ZIP-FIT alignment for each $x_i$:
8: $$\text{ZIP-FIT-Alignment}(x_i) = 1 - \frac{1}{n} \sum_{j=1}^{n} \mathrm{NCD}(x_i, x'_j)$$
9: **end for**
10: Select the top $k$ examples from $D$ based on the highest alignment scores.
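The following is a minimal Python sketch of Algorithm 1 (our own illustration, not the authors' released implementation), assuming the source dataset and target examples are plain strings; `C` is the gzip-compressed-size helper from the previous snippet.

```python
import gzip

def C(x: str) -> int:
    """gzip-compressed size (in bytes) of a UTF-8 encoded string."""
    return len(gzip.compress(x.encode("utf-8")))

def ncd(x, y):
    """Normalized Compression Distance between two strings (steps 4-6)."""
    cx, cy = C(x), C(y)
    cxy = C(x + y)  # compressed size of the concatenation x ⊕ y
    return (cxy - min(cx, cy)) / max(cx, cy)

def zipfit_alignment(x, targets):
    """Average ZIP-FIT alignment of a candidate x against the target examples (steps 7-8)."""
    return 1.0 - sum(ncd(x, t) for t in targets) / len(targets)

def zipfit_select(source, targets, k):
    """Return the k source examples with the highest alignment scores (step 10)."""
    return sorted(source, key=lambda x: zipfit_alignment(x, targets), reverse=True)[:k]
```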
2.3 WHY USE COMPRESSION?
Compression algorithms, such as gzip, provide a computationally efficient way to detect patterns
and minimize redundancy in data.
**Limitations of n-grams:** Many traditional methods, including hashed n-grams, focus on capturing
immediate textual correlations by simplifying text into discrete, fixed-size buckets. Although these
techniques are computationally efficient, they may not adequately capture syntactic or structural
relationships within the data.
**Challenges with Neural Embeddings:** Neural embeddings offer a powerful tool for capturing
semantic relationships, but they come with significant computational costs. These embeddings are
typically pre-trained on large corpora and fine-tuned for specific tasks, which requires substantial resources. Given the scalability challenges of embedding-based methods, we conjecture that a simpler
method like compression can provide a more scalable and resource-efficient alternative.
We hypothesize that compression, specifically gzip, serves as a proxy for capturing syntactic and
structural relationships in textual sequences. Gzip’s ability to compress data based on redundancy
minimization can be leveraged as a metric to align text with a target domain.
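As a quick, toy-scale check of this hypothesis (our example, not an experiment from the paper; it reuses `ncd` from the Algorithm 1 sketch above), a Lean-style candidate should sit closer under NCD to a Lean-style target than unrelated English prose does, because gzip can reuse the shared formal syntax.

```python
target = "theorem mul_comm (a b : Nat) : a * b = b * a := by ring"
aligned = "theorem add_assoc (a b c : Nat) : a + b + c = a + (b + c) := by ring"
prose = "The weather in Paris was unusually warm for late October this year."

# Shared formal syntax ("theorem", ": Nat) :", ":= by") lets gzip compress the
# concatenation of the aligned pair better, so its NCD should come out lower.
print(ncd(aligned, target), ncd(prose, target))
```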
Figure 2: Higher gzip alignment correlates with lower cross-entropy loss. The relationship
between gzip alignment and cross-entropy (CE) loss for GPT-2 trained on (a) 4k tokens ($R^2 = 0.85$)
and (b) 22k tokens ($R^2 = 0.90$) and (c) Mistral7B trained on 22k tokens ($R^2 = 0.75$). Each
point represents a dataset, with its position reflecting the dataset’s gzip alignment score against the
ProofNet test set and the resulting CE loss. The dashed red line indicates the linear regression fit,
while the dashed black line shows the pretrained CE loss. Higher alignment scores correspond to
lower CE losses, underscoring the importance of data alignment.
3 HIGHER ALIGNMENT INTERVENTIONALLY LEADS TO BETTER MODEL
PERFORMANCE
**Setup:** We fine-tuned GPT-2 and Mistral7B on various datasets with different gzip alignment
scores relative to the target domain (ProofNet). We measured the cross-entropy loss (CE loss) on
the ProofNet test set for each dataset to evaluate the impact of gzip-based alignment on model
performance. The experiments were conducted using two different amounts of training tokens for
GPT2: 4k tokens (left) and 22k tokens (right).
**Results: The results, shown in figure 2, reveal a strong inverse correlation between gzip alignment**
scores and CE loss, with $R^2$ values of 0.85 (4k tokens) and 0.90 (22k tokens) for GPT-2 and 0.75
for Mistral7B (22k tokens). Higher gzip alignment scores consistently correspond to lower CE
loss, indicating that data alignment plays a crucial role in improving model performance. Notably,
datasets like LeanDojo and ProofNet, which exhibit high alignment scores, resulted in significantly
lower CE loss compared to less-aligned datasets such as C4 and WikiText.
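For completeness, the linear fits behind these $R^2$ values can be computed as sketched below; the numbers used here are hypothetical placeholders, not the actual per-dataset alignment scores and losses behind Figure 2.

```python
import numpy as np
from scipy import stats

# Hypothetical (alignment score, test CE loss) pairs, one per dataset.
alignment = np.array([0.08, 0.12, 0.19, 0.26, 0.32])
ce_loss = np.array([4.9, 4.3, 3.6, 3.1, 2.7])

fit = stats.linregress(alignment, ce_loss)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, R^2={fit.rvalue ** 2:.2f}")
```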
4 HIGHER ALIGNMENT, HIGHER PERFORMANCE FOR AUTOFORMALIZATION
**AutoFormalization: AutoFormalization refers to the task of translating natural language mathemat-**
ical statements into a formal mathematical language such as Lean4. This process requires precise
understanding and representation of mathematical logic, making the selection of well-aligned training data crucial for effective model training.
**Setup: We fine-tuned GPT-2 (100M) and Mistral7B for the AutoFormalization task using differ-**
ent datasets scored with ZIP-FIT alignment. The x-axis in Figure 3 shows the number of training
tokens, and the y-axis shows the cross-entropy (CE) test loss. We used ProofNet (Test) for the evaluation. The curves represent different datasets with varying alignment to the target domain (ProofNet
validation). Higher alignment values indicate more targeted data selection.
**Results: The most aligned data lowers the CE loss quickest, as shown by the steep decline for high-**
alignment datasets (e.g. ProofNet [a = 0.32]). Unaligned data requires significantly more tokens
to achieve similar performance, highlighting the efficiency of selecting aligned data with ZIP-FIT.
This demonstrates that targeted data selection accelerates fine-tuning and improves performance,
reducing computational costs.
Figure 3: Higher aligned data lowers cross-entropy loss more efficiently. The x-axis shows the
number of training tokens, and the y-axis represents the cross-entropy (CE) test loss. Different
curves correspond to datasets filtered by different alignment scores, indicating their relevance to the
target domain. The most aligned data (e.g. ProofNet with a = 0.32) reduce Test CE loss significantly
faster than less aligned data (e.g., C4 with a = 0.08, which worsens performance). This demonstrates
that using highly aligned data not only accelerates training but also achieves better model performance, validating the effectiveness of ZIP-FIT for data selection in fine-tuning.
5 DATA SELECTION FOR EFFICIENT FINE-TUNING USING ZIP-FIT
We evaluate ZIP-FIT on two domain-specific tasks: Autoformalization and Python Code Gener_ation. Our goal is to demonstrate ZIP-FIT’s ability to select data that leads to better fine-tuning_
performance compared to leading baselines.
5.1 AUTOFORMALIZATION
**Setup:** Our source dataset comprised approximately 185,000 sequences from sources including
UDACA/AF, LeanDojo, Algebraic Stack, C4, and WikiText. Using ZIP-FIT, DSIR and D4, sequences were evaluated for alignment with the ProofNet validation set, which served as our proxy
target domain. Different subsets of the data were selected based on the top k token rankings provided
by each method for several k values (353k, 695k tokens, etc.). These subsets were then used independently to fine-tune three different models—InterLM-Math-Plus-1.8B, Gemma2-2B, and Mistral7B—enabling us to directly compare the effectiveness of data selected by ZIP-FIT and DSIR.
The performance of each model was evaluated using the cross-entropy loss on the ProofNet test set.
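A sketch of how such token-budgeted subsets can be assembled (our reading of the setup, with `tokenize` as a placeholder for the model's tokenization function and `alignment_fn` as, e.g., the `zipfit_alignment` function sketched in Section 2): rank all source sequences by alignment to the ProofNet validation set and keep the best-ranked ones until the budget (e.g., 353k or 695k tokens) is filled.

```python
def select_token_budget(source, targets, budget_tokens, tokenize, alignment_fn):
    """Greedily keep the highest-alignment sequences until the token budget is filled."""
    ranked = sorted(source, key=lambda x: alignment_fn(x, targets), reverse=True)
    subset, used = [], 0
    for seq in ranked:
        n = len(tokenize(seq))  # tokens this sequence would contribute to fine-tuning
        if used + n > budget_tokens:
            break
        subset.append(seq)
        used += n
    return subset
```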
**Results: As illustrated in Figure 4, ZIP-FIT significantly outperforms DSIR and D4 in reducing**
cross-entropy (CE) loss across all token selection sizes (353k, 695k). The steep decline in the
blue curves (ZIP-FIT) highlights its ability to achieve faster convergence, resulting in up to 62.79%
improvements in convergence speeds compared to DSIR. Notably, ZIP-FIT demonstrates a 65.8%
faster data selection process than DSIR, underscoring its efficiency. This rapid reduction underlines
ZIP-FIT's efficiency in utilizing highly aligned data, especially notable in its superior performance
on InterLM-Math-Plus-1.8B, which is already optimized for mathematical data. This is surprising because one would expect little to no gains on models that already know how to do the target
task. This advantage emphasizes that ZIP-FIT not only accelerates learning, but also enhances the
effectiveness of fine-tuning, even on models predisposed to mathematics, reinforcing its utility for
AutoFormalization. Similar results were observed at other token counts, as detailed in appendix C.
5.2 CODE GENERATION
**Baseline Selection Rationale:** D4 was excluded from the code generation experiments as it was
not designed for targeted domain data selection and may deteriorate model performance. We decided to focus our efforts on comparing ZIP-FIT and DSIR, as these methods demonstrated stronger
capabilities in selecting aligned data for domain-specific fine-tuning.
Figure 4: ZIP-FIT consistently achieves lower test loss more quickly than D4 and DSIR,
**demonstrating its efficiency in data selection for AutoFormalization. The plots show cross-**
entropy test loss versus the number of training tokens for three models (InterLM-Math-Plus-1.8B,
Gemma2-2B, and Mistral7B) across different token selection sizes. ZIP-FIT (blue line) consistently
outperforms both DSIR (green line) and D4 (purple line) across all model and token size configurations, highlighting its ability to process data more efficiently. The percentage labels in each plot
indicate the relative speedup of ZIP-FIT over DSIR in reaching the lowest cross-entropy loss, reinforcing the method’s scalability and adaptability for domain-specific fine-tuning.
**Setup:** We conducted code generation experiments using ZIP-FIT and DSIR to select data from
a mix of sources: MBPP (Mostly Basic Python Problems), Python code instructions (18k Alpaca),
Python docstrings, Algebraic Stack, C4, WikiText. The latter two are included to study whether
the data selection methods considered are robust to misaligned data. The datasets were utilized to
fine-tune both CodeGemma-2B and Gemma2-2B models, with the focus on translating function signatures and docstrings into executable Python code. For the selection process, we used HumanEval
for validation and a separate hold-out portion for final testing. We varied the top k selections to
explore different dataset sizes.
**Results:** Across all tested k values, ZIP-FIT consistently outperformed DSIR in reducing cross-entropy loss, demonstrating faster and more effective fine-tuning. In particular, the CodeGemma-2B
model, already optimized for code-related tasks, showed substantial improvements with ZIP-FIT,
confirming its ability to select highly relevant and beneficial training data. Rapid loss reduction under ZIP-FIT emphasizes its efficiency, especially noted in its 25% faster data processing compared to
DSIR. Furthermore, the flattening of the DSIR curve indicates diminishing returns, suggesting that additional tokens would not achieve the performance of ZIP-FIT. In general, these findings emphasize
that ZIP-FIT accelerates model training and optimizes resource usage, making it a superior choice
for code generation tasks.
6 IMPACT OF DATA MISALIGNMENT ON MODEL PERFORMANCE
Existing research has shown that the alignment of data plays a critical role in improving model performance
and learning efficiency for downstream tasks. In this section, we explore how misalignment in data
can affect model performance and how ZIP-FIT addresses this issue through appropriate data selection.
Figure 5: ZIP-FIT accelerates cross-entropy loss reduction in code generation tasks, even in
**code-specialized models like CodeGemma-2B. The plots show cross-entropy test loss versus the**
number of training tokens for Gemma2-2B (top row) and CodeGemma-2B (bottom row) across
different token selection sizes. ZIP-FIT (blue) consistently reduces loss faster than DSIR (green),
achieving up to 85.11% speed improvement at lower token counts. These results demonstrate ZIP-FIT's efficiency in data selection for fine-tuning models on code-generation tasks.
**Setup: We fine-tuned the Mistral7B model on the ProofNet dataset, filtering data with ZIP-FIT**
at different alignment thresholds (>0.1, >0.2, >0.3). Figure 6 shows cross-entropy test loss (y-axis) versus the number of training tokens (x-axis). The dashed line marks the pretrained Mistral7B
baseline.
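The threshold-based filtering itself can be expressed in a few lines; this is a sketch under the assumption that `alignment_fn` provides the per-example ZIP-FIT alignment scores (e.g., the `zipfit_alignment` function from Section 2), with higher thresholds yielding smaller, better-aligned training subsets.

```python
def filter_by_alignment(source, targets, threshold, alignment_fn):
    """Keep only examples whose ZIP-FIT alignment to the target exceeds the threshold."""
    return [x for x in source if alignment_fn(x, targets) > threshold]

# e.g., the three nested subsets behind the curves in Figure 6 (names hypothetical):
# subsets = {t: filter_by_alignment(train_data, proofnet_val, t, zipfit_alignment)
#            for t in (0.1, 0.2, 0.3)}
```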
**Results: ZIP-FIT selected data achieves lower cross-entropy loss faster than training on all data,**
showing improved performance and efficiency. Higher alignment thresholds result in a steeper loss
reduction, confirming that filtering out misaligned data enhances fine-tuning. Misalignment in training data can introduce noise and irrelevant patterns, which typically require more training data and
computational resources to overcome. By applying higher alignment thresholds, ZIP-FIT ensures
that only the most relevant and helpful examples are used for training. This targeted selection leads
to a more efficient learning process as evidenced by the sharper decline in cross-entropy loss for
higher alignment thresholds. Such efficiency is crucial in scenarios where computational resources
are limited or costly.
**Theoretical Implications: The observed trends underscore the theoretical implications of informa-**
tion theory in machine learning, where reducing the entropy or randomness in the input data directly
contributes to better model performance. This aligns with the concept that a cleaner, more relevant
dataset effectively reduces the hypothesis space that the model needs to explore during training.
**Practical Considerations: For practitioners, these results suggest that investing in better data cu-**
ration and alignment tools can significantly cut down the cost and time of model training without
compromising performance. It also highlights the potential pitfalls of using large, uncurated datasets
that might slow down the learning process or lead to poorer generalization on specific tasks.
**Future Directions: Further research could explore adaptive alignment thresholds based on real-**
time validation performance, potentially automating the selection process to optimize both speed
and accuracy during training.
Figure 6: Selective data filtering with ZIP-FIT allows us to achieve better cross-entropy test loss
**faster than training on all the data, resulting in improved performance and efficiency. The x-**
axis represents the number of training tokens, while the y-axis shows the cross-entropy test loss. The
curves represent models fine-tuned (SFT) on datasets filtered by varying alignment thresholds (>
0.1, > 0.2, > 0.3). The dashed line indicates the baseline performance of the pretrained Mistral7B
model. Training on data filtered with higher alignment thresholds leads to superior performance,
demonstrating the effectiveness of removing misaligned data in fine-tuning.
These results further validate the empirical performance gains and computational efficiency achieved
by ZIP-FIT, as outlined in our contributions. By filtering out misaligned data, ZIP-FIT accelerates
fine-tuning and reduces the computational overhead, confirming its utility in low-resource settings.
7 RELATED WORKS
**Curating pre-training data for Language Models often involves using classifiers to filter high-**
quality data from large corpora like Common Crawl, as done for models like GPT-3 and PaLM2
(Brown et al., 2020; Google, 2023; Shao et al., 2024). While effective, this process requires significant computational resources and large volumes of curated data. In contrast, ZIP-FIT efficiently
selects relevant data without relying on external models, making it especially useful in data-scarce
environments.
**Deduplication techniques, such as SemDeDup (Abbas et al., 2023) and D4 (Tirumala et al., 2023)**
improve data efficiency by removing duplicate or semantically similar examples using embedding-based clustering. However, these methods are computationally expensive and not tuned to the target
task. ZIP-FIT is embedding-free and task-aware, making it both scalable and more effective at
selecting relevant data.
**Mixture weights are essential when drawing from multiple domains, as they significantly influence**
the performance of language models (Du et al., 2022; Xie et al., 2023b). DoReMi (Domain Reweighting with Minimax Optimization) (Xie et al., 2023a) proposes a robust domain-level
reweighting strategy suitable for diverse target distributions. However, DoReMi is not designed for example-level data selection, as it primarily focuses on domain-level reweighting. Adapting it to select individual data points for specific target distributions would require substantial modifications to its foundational algorithm. One possibility would be to effectively transform each data point into a 'mini-domain,'
a process that would stray significantly from DoReMi’s original purpose and scope. Therefore,
we did not use DoReMi in our comparisons because it does not directly address the fine-grained
selection needs that ZIP-FIT fulfills.
**Autoformalization refers to the process of translating natural language mathematics into formal**
language (Wang et al., 2020; Wu et al., 2022), which is advantageous because formal proofs can be
verified for correctness. However, the ability of current models to autoformalize text is limited by
the scarcity of human-curated formal data. ZIP-FIT provides a framework for selecting the most
relevant data, ensuring that models are trained on aligned datasets that enhance their performance.
-----
**Key Takeaways:**
- Efficiency in Data Selection: ZIP-FIT utilizes gzip compression for alignment,
demonstrating significant efficiency in selecting domain-specific data, enhancing model fine-tuning.
- Resource Optimization: It outperforms traditional methods like DSIR and
D4 by speeding up training and reducing computational demands, beneficial
in resource-limited settings.
- Domain-Specific Improvements: Exhibits superior performance in tasks like
AutoFormalization and code generation, where precise data alignment is crucial.
- Practical Application: Effective in identifying and using the most relevant data
from mixed datasets, proving critical for achieving better domain-specific results.
8 LIMITATIONS
While ZIP-FIT provides a computationally efficient method for data selection, it has several limitations. First, the gzip compression-based alignment may not fully capture nuanced semantic relationships that dense representations can, potentially affecting its effectiveness for complex domains like
natural language understanding, where paraphrasing is important. Second, ZIP-FIT’s reliance on
gzip means that its performance could vary depending on the nature of the textual data, particularly
in highly diverse datasets where compression gains are less apparent.
9 DISCUSSION AND FUTURE WORK
ZIP-FIT introduces an efficient, embedding-free approach for data selection in language model
fine-tuning. By leveraging gzip compression to capture redundancies in data, ZIP-FIT enables the
alignment of large-scale datasets with a target domain without the computational burden of neural
embeddings. Our results show that using compression-based alignment leads to faster convergence
and lower cross-entropy loss compared to existing methods like DSIR and D4 (Tirumala et al., 2023;
Xie et al., 2023b).
However, this approach highlights the trade-off between simplicity and the ability to capture complex semantic relationships. While compression-based methods offer a lightweight alternative, they
might not fully replace embedding-based techniques for highly intricate domains, such as natural
language understanding or paraphrases. Nonetheless, ZIP-FIT’s promising results suggest that leveraging compression as a data selection tool can be highly effective, especially in resource-constrained
scenarios and economically crucial tasks like code generation, where gzip can leverage the syntactic
structure of the data.
Future work could explore hybrid models that combine the strengths of compression-based techniques with neural embeddings to further enhance data selection. Additionally, extending ZIP-FIT
to support more diverse data modalities and investigating its robustness across various domains
would provide a more comprehensive understanding of its capabilities and limitations. We plan for
future work to study its application to complex natural language-only tasks and mathematics, where
paraphrasing and semantics are important.
We also plan to explore the use of ZIP-FIT for synthetic data generation. While generating synthetic
data is straightforward, selecting high-value samples for training presents challenges, especially
when managing limited token budgets (Villalobos et al., 2024). Autoformalization is a well-suited task
for this exploration, as it inherently involves a limited number of tokens, thus simulating the critical
challenge of token scarcity. Additionally, studying synthetic data selection is crucial for developing
self-improving agents that can avoid model collapse (Gerstgrasser et al., 2024) by ensuring high-quality data accumulation.
Furthermore, diversity was identified as an important meta-data property that can influence model
performance (Miranda et al., 2024). Therefore, we aim to address this in future work by either: (1)
-----
developing an algorithm that balances diversity with alignment in data selection, or (2) creating a
metric that incorporates diversity as part of its evaluation process.
10 CONCLUSION
In this work, we introduced ZIP-FIT, an efficient and scalable data selection method that leverages
gzip-based compression to enhance the downstream performance of language models for domain-specific tasks. Our experiments demonstrate that ZIP-FIT not only accelerates the fine-tuning process but also significantly improves downstream performance by aligning training data more closely
with target tasks. By comparing against established methods like DSIR and D4, ZIP-FIT proved
superior in selecting highly-aligned data, especially in complex tasks such as Autoformalization
and code generation. This methodology sets a new standard for resource-efficient and effective
data selection for model training, providing a step in understanding the choice of training data for
downstream transfer in LMs.
-----
REFERENCES
Amro Abbas, Kushal Tirumala, Dániel Simig, Surya Ganguli, and Ari S. Morcos. SemDeDup: Data-efficient learning at web-scale through semantic deduplication, 2023. URL [https://arxiv.org/abs/2303.09540](https://arxiv.org/abs/2303.09540).
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors
[with subword information, 2017. URL https://arxiv.org/abs/1607.04606.](https://arxiv.org/abs/1607.04606)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal,
Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz
Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. URL
[https://arxiv.org/abs/2005.14165.](https://arxiv.org/abs/2005.14165)
Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, and Joel Veness. Language modeling is compression, 2024. URL [https://arxiv.org/abs/2309.10668](https://arxiv.org/abs/2309.10668).
Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim
Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma,
Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng
Chen, and Claire Cui. Glam: Efficient scaling of language models with mixture-of-experts, 2022.
[URL https://arxiv.org/abs/2112.06905.](https://arxiv.org/abs/2112.06905)
Matthias Gerstgrasser, Rylan Schaeffer, Apratim Dey, Rafael Rafailov, Henry Sleight, John Hughes,
Tomasz Korbak, Rajashree Agrawal, Dhruv Pai, Andrey Gromov, Daniel A. Roberts, Diyi Yang,
David L. Donoho, and Sanmi Koyejo. Is model collapse inevitable? breaking the curse of re[cursion by accumulating real and synthetic data, 2024. URL https://arxiv.org/abs/](https://arxiv.org/abs/2404.01413)
[2404.01413.](https://arxiv.org/abs/2404.01413)
[Google. Palm 2 technical report, 2023. URL https://arxiv.org/abs/2305.10403.](https://arxiv.org/abs/2305.10403)
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey,
and Noah A. Smith. Don’t stop pretraining: Adapt language models to domains and tasks, 2020.
[URL https://arxiv.org/abs/2004.10964.](https://arxiv.org/abs/2004.10964)
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza
Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy,
Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre.
[Training compute-optimal large language models, 2022. URL https://arxiv.org/abs/](https://arxiv.org/abs/2203.15556)
[2203.15556.](https://arxiv.org/abs/2203.15556)
Yuzhen Huang, Jinghan Zhang, Zifei Shan, and Junxian He. Compression represents intelligence
[linearly, 2024. URL https://arxiv.org/abs/2404.09937.](https://arxiv.org/abs/2404.09937)
Zhiying Jiang, Matthew Yang, Mikhail Tsirlin, Raphael Tang, Yiqin Dai, and Jimmy Lin. “Low-resource” text classification: A parameter-free classification method with compressors. In
Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for
_Computational Linguistics: ACL 2023, pp. 6810–6828, Toronto, Canada, July 2023. Associ-_
[ation for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.426. URL https:](https://aclanthology.org/2023.findings-acl.426)
[//aclanthology.org/2023.findings-acl.426.](https://aclanthology.org/2023.findings-acl.426)
Brando Miranda, Alycia Lee, Sudharsan Sundar, Allison Casasola, and Sanmi Koyejo. Beyond
scale: The diversity coefficient as a data quality metric for variability in natural language data,
[2024. URL https://arxiv.org/abs/2306.13840.](https://arxiv.org/abs/2306.13840)
-----
Leonardo De Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The
lean theorem prover (system description). In Automated Deduction - CADE-25: 25th Interna_tional Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings,_
pp. 378–388. Springer International Publishing, 2015. doi: 10.1007/978-3-319-21401-6_26.
[Niklas Muennighoff. Sgpt: Gpt sentence embeddings for semantic search, 2022. URL https:](https://arxiv.org/abs/2202.08904)
[//arxiv.org/abs/2202.08904.](https://arxiv.org/abs/2202.08904)
Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An open
dataset of high-quality mathematical web text, 2023. [URL https://arxiv.org/abs/](https://arxiv.org/abs/2310.06786)
[2310.06786.](https://arxiv.org/abs/2310.06786)
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang,
Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathe[matical reasoning in open language models, 2024. URL https://arxiv.org/abs/2402.](https://arxiv.org/abs/2402.03300)
[03300.](https://arxiv.org/abs/2402.03300)
Kushal Tirumala, Daniel Simig, Armen Aghajanyan, and Ari S. Morcos. D4: Improving llm pre[training via document de-duplication and diversification, 2023. URL https://arxiv.org/](https://arxiv.org/abs/2308.12284)
[abs/2308.12284.](https://arxiv.org/abs/2308.12284)
Pablo Villalobos, Anson Ho, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, and Marius Hobbhahn.
Will we run out of data? limits of llm scaling based on human-generated data, 2024. URL
[https://arxiv.org/abs/2211.04325.](https://arxiv.org/abs/2211.04325)
Qingxiang Wang, Chad Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine
translation in autoformalization of mathematics in mizar. In Proceedings of the 9th ACM SIG_PLAN International Conference on Certified Programs and Proofs, volume 5 of POPL ’20, pp._
[85–98. ACM, January 2020. doi: 10.1145/3372885.3373827. URL http://dx.doi.org/](http://dx.doi.org/10.1145/3372885.3373827)
[10.1145/3372885.3373827.](http://dx.doi.org/10.1145/3372885.3373827)
Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, and
Christian Szegedy. Autoformalization with large language models, 2022. [URL https:](https://arxiv.org/abs/2205.12615)
[//arxiv.org/abs/2205.12615.](https://arxiv.org/abs/2205.12615)
Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang,
Quoc V. Le, Tengyu Ma, and Adams Wei Yu. Doremi: Optimizing data mixtures speeds up
[language model pretraining, 2023a. URL https://arxiv.org/abs/2305.10429.](https://arxiv.org/abs/2305.10429)
Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy Liang. Data selection for language
[models via importance resampling, 2023b. URL https://arxiv.org/abs/2302.03169.](https://arxiv.org/abs/2302.03169)
Yong Xie, Karan Aggarwal, and Aitzaz Ahmad. Efficient continual pre-training for building domain
[specific large language models, 2023c. URL https://arxiv.org/abs/2311.08545.](https://arxiv.org/abs/2311.08545)
A APPENDIX
A.1 GZIP COMPRESSION DETAILS
GZIP is a lossless data compression algorithm that combines two primary techniques: LZ77 compression and Huffman coding. Here, we provide additional technical details on how GZIP works.
**LZ77 Compression:** LZ77 works by identifying repeated substrings in the input text and replacing
them with backward references. Mathematically, LZ77 can be described as follows:
Given an input sequence $S = s_1, s_2, \ldots, s_n$, the algorithm searches for the longest prefix of the
remaining sequence $S' = s_i, s_{i+1}, \ldots, s_n$ that matches a substring within a predefined window of
previous characters. If a match is found, it is replaced by a tuple $(d, l, c)$, where:
- d is the distance from the current position to the start of the matching substring,
- l is the length of the matching substring, and
-----
- c is the character following the match (if any).
For example, the substring $s_i, s_{i+1}, \ldots, s_{i+l-1}$ can be replaced by the tuple $(d, l, c)$, thereby reducing redundancy in the data.
**Huffman Coding:** After applying LZ77, GZIP employs Huffman coding to further reduce the size
of the compressed data. Huffman coding assigns variable-length codes to symbols based on their
frequency of occurrence, with shorter codes assigned to more frequent symbols.
The expected length $L(X)$ of the Huffman code for a sequence of symbols $X = x_1, x_2, \ldots, x_n$ is
calculated as:

$$L(X) = \sum_{i=1}^{n} p(x_i) \cdot \mathrm{len}(C(x_i)),$$
where:
- p(xi) is the probability of symbol xi,
- len(C(xi)) is the length of the Huffman code for xi.
This further minimizes the size of the compressed data by leveraging the statistical properties of the
input.
**Combined GZIP Compression:** The total compressed size C(S) after applying both LZ77 and
Huffman coding can be approximated as the sum of the lengths of the backward references and the
Huffman-coded symbols:
$$C(S) = \sum_{(d,l,c)} \mathrm{len}(d, l, c) \; + \; \sum_{i=1}^{n} \mathrm{len}(C(x_i)).$$
**Normalized Compression Distance (NCD):** GZIP’s effectiveness in data selection stems from
its ability to measure the alignment between two sequences A and B based on how efficiently they
compress together. The Normalized Compression Distance (NCD) is given by:
$$\mathrm{NCD}(A, B) = \frac{C(A + B) - \min(C(A), C(B))}{\max(C(A), C(B))},$$
where C(A) and C(B) are the compressed lengths of sequences A and B, and C(A + B) is the
length of the compressed concatenation of both sequences. A lower NCD indicates greater alignment
between the sequences.
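A direct Python rendering of this quantity is shown below; it is a minimal sketch based on the standard `gzip` module rather than the exact ZIP-FIT implementation, and using one minus the NCD as an alignment score is an assumption made for illustration.

```python
import gzip

def c(text: str) -> int:
    # C(.): length in bytes of the gzip-compressed text.
    return len(gzip.compress(text.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    # Normalized Compression Distance between sequences a and b.
    return (c(a + b) - min(c(a), c(b))) / max(c(a), c(b))

def alignment(a: str, b: str) -> float:
    # Lower NCD means greater alignment, so 1 - NCD can serve as an alignment score.
    return 1.0 - ncd(a, b)

# Toy check: code-like text typically compresses better with other code than with prose.
code_a = "def add(a, b):\n    return a + b\n"
code_b = "def mul(a, b):\n    return a * b\n"
prose = "The weather was pleasant and the garden was in full bloom."
print(ncd(code_a, code_b), ncd(code_a, prose))  # the first value is typically smaller
```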
A.2 COMPOSITION OF THE SOURCE DATASET FOR AUTOFORMALIZATION
The source dataset for the AutoFormalization task was compiled from a variety of datasets to ensure
a diverse mix of mathematical, general textual, and code-related content. Below are the details of
the datasets included:
- UDACA/AF: 4,300 samples from informal formalization statements.
- C4: 10,000 samples from the clean crawl of the internet, ensuring a broad linguistic variety.
- LeanDojo: 10,000 samples from task-oriented proofs and tactics.
- LeanDojo Informalized: 10,000 samples combining traced tactics with informal descriptions, aiming to bridge formal reasoning and natural language.
- UDACA/AF-split: 10,000 samples, a variant of the UDACA/AF dataset with split annotations.
- WikiText: 10,000 samples from a collection of professionally curated articles, providing a
rich linguistic framework.
- Algebraic Stack: Samples from various subsets of mathematical and programming languages, capped at 10,000 samples per subset or fewer if the total subset size was under this
threshold.
-----
Each dataset was selected to complement the others by covering different aspects of language use,
from technical to informal, ensuring the model’s exposure to a wide range of linguistic structures
and contents. The total dataset size aggregated to approximately 185,000 sequences, which were
then subjected to alignment scoring and further processing for model training.
A.3 COMPOSITION OF THE SOURCE DATASET FOR CODE GENERATION
The source dataset for the Code Generation task was assembled from various data sources to provide a diverse range of coding and natural language contexts. Below are the details of the datasets
included:
- MBPP (Google Research): A total of 964 samples focusing on Python coding challenges.
- Python Code Instructions (18k Alpaca): 5,000 sequences providing natural language
prompts for Python code, fostering a practical approach to code generation.
- Python Docstrings (Calum/The Stack): 5,000 sequences each of Python function docstrings integrating detailed natural language documentation of python functions.
- Python Docstrings (Calum/The Stack): 5,000 sequences each of Python function code
bodies, integrating raw python code without documentation.
- C4 (AllenAI): 10,000 samples from a clean web crawl.
- WikiText: 10,000 samples from a collection of curated articles, providing rich natural
language training material.
- Algebraic Stack: A selection of sequences from various programming language subsets,
each capped at 10,000 samples or the total subset size if less than this threshold.
This combination of datasets was specifically chosen to challenge our method’s ability to choose
syntactically correct and functionally accurate Python code, while also responding appropriately to
natural language prompts.
B APPENDIX
B.1 HYPERPARAMETERS FOR MODEL FINE-TUNING
All models in our experiments were fine-tuned with the following unified setup, aimed at ensuring a
consistent evaluation across different models and data selection strategies.
**Models and Tokenizer:** The fine-tuning was performed using the following models:
- InterLM-Math-Plus-1.8B
- Gemma2-2B
- Mistral7B
**Training Settings:** The key hyperparameters used across all models are as follows:
- Block Size: 1024 tokens
- Learning Rate: 7.5 × 10⁻⁷
- Batch Size: 4 (per device)
- Number of Epochs: 1
- Weight Decay: 0.01
- Maximum Gradient Norm: 1.0
Training was facilitated using the Trainer class from Hugging Face’s Transformers library, with
the Accelerate library handling model parallelism to efficiently utilize available computational resources.
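For concreteness, the sketch below shows how this setup maps onto the `Trainer` API with the hyperparameters listed above; the model identifier, the toy one-example dataset, and the output directory are illustrative assumptions rather than the exact training script used in the experiments.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"  # assumed Hugging Face id for Mistral7B
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy stand-in for the ZIP-FIT-selected training sequences.
texts = ["theorem add_zero (n : Nat) : n + 0 = n := by simp"]
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)  # block size 1024

train_dataset = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="zipfit-sft",           # assumed
    learning_rate=7.5e-7,
    per_device_train_batch_size=4,
    num_train_epochs=1,
    weight_decay=0.01,
    max_grad_norm=1.0,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```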
-----
**Evaluation Metrics:** For model evaluation, we employed:
- Cross-Entropy Loss at the end of training to measure the effectiveness of the fine-tuning.
The fine-tuning was conducted under controlled conditions to ensure fair comparison between data
selected by ZIP-FIT, DSIR, and manual curation methods. The effectiveness of each method was
assessed based on how the models performed on the ProofNet and HumanEval.
**Data Handling and Logging:** All logs, model checkpoints, and tokenizer settings were systematically saved in designated directories for thorough post-experiment analysis.
This comprehensive and standardized approach to fine-tuning ensures that our experimental results
are robust, reproducible, and transparent, providing clear insights into the effectiveness of the data
selection methodologies employed in our study.
C MORE EXPERIMENTS: DATA SELECTION FOR EFFICIENT FINE-TUNING
USING ZIP-FIT
Figure 7: ZIP-FIT consistently achieves a lower test loss at a faster rate compared to D4 and
**DSIR for Autoformalization. The plots show the cross-entropy test loss against the number of**
training tokens for three models (InterLM-Math-Plus-1.8B, Gemma2-2B, and Mistral7B) across
various token selection sizes. ZIP-FIT (blue line) consistently surpasses both DSIR (green line)
and D4 (purple line) across all model and token size configurations, emphasizing its superior data
processing efficiency. The percentage labels in each plot denote the relative speedup of ZIP-FIT
over DSIR in attaining the lowest cross-entropy loss, further underscoring the method’s scalability
and adaptability for domain-specific fine-tuning.
-----
| [] | 2024-10-04T00:00:00 | ICLR 2025 Submission | false | 0 | 0 | [
"Lean"
] | https://openreview.net/forum?id=4JBEpP6eRS | https://arxiv.org/abs/2410.18194 | null |
\textit{SKIntern}: Internalizing Symbolic Knowledge for Distilling Better CoT Capabilities into Small Language Models | Small Language Models (SLMs) are attracting attention due to the high computational demands and privacy concerns of Large Language Models (LLMs). Some studies fine-tune SLMs using Chains of Thought (CoT) data distilled from LLMs, aiming to enhance their reasoning ability. Furthermore, Some CoT distillation methods introduce external symbolic knowledge into the generation process to improve the limited knowledge memory, reasoning ability and out-of-domain (OOD) generalization of SLMs. However, the introduction of symbolic knowledge increases computational overhead and introduces potential noise. In this paper, we introduce $\textit{SKIntern}$, an innovative approach that empowers SLMs to internalize symbolic knowledge and few-shot examples gradually through a progressive fine-tuning process, guided by a predefined linear decay schedule under curriculum learning. By efficiently internalizing knowledge, $\textit{SKIntern}$ reduces computational overhead and speeds up the reasoning process by focusing solely on the question during inference. It outperforms state-of-the-art baselines by over 5\%, while reducing inference costs (measured in FLOPs) by up to $4\times$ across a wide range of SLMs in both in-domain (ID) and out-of-domain (OOD) tasks. Our code will be available at \url{https://github.com/Xnhyacinth/SKIntern}. | null | ## SKIntern: Internalizing Symbolic Knowledge for Distilling Better CoT Capabilities into Small Language Models
**Huanxuan Liao[1][,][2], Shizhu He[1][,][2], Yupu Hao[1][,][2], Xiang Li[1][,][2], Yuanzhe Zhang[1][,][2],**
**Kang Liu[1][,][2], Jun Zhao[1][,][2]**
1
The Laboratory of Cognition and Decision Intelligence for Complex Systems,
Institute of Automation, Chinese Academy of Sciences, Beijing, China
2
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
{liaohuanxuan2023, haoyupu2023, lixiang2022}@ia.ac.cn {shizhu.he, kliu, jzhao}@nlpr.ia.ac.cn
**Abstract**
Small Language Models (SLMs) are attracting attention due to the high computational
demands and privacy concerns of Large Language Models (LLMs). Some studies fine-tune
SLMs using Chains of Thought (CoT) data distilled from LLMs, aiming to enhance their reasoning ability. Furthermore, some CoT distillation methods introduce external symbolic
knowledge into the generation process to improve the limited knowledge memory, reasoning ability and out-of-domain (OOD) generalization of SLMs. However, the introduction of
symbolic knowledge increases computational
overhead and introduces potential noise. In
this paper, we introduce SKIntern, an innovative approach that empowers SLMs to internalize symbolic knowledge and few-shot examples gradually through a progressive finetuning process, guided by a predefined linear
decay schedule under curriculum learning. By
efficiently internalizing knowledge, SKIntern
reduces computational overhead and speeds up
the reasoning process by focusing solely on
the question during inference. It outperforms
state-of-the-art baselines by over 5%, while reducing inference costs (measured in FLOPs)
by up to 4× across a wide range of SLMs in
both in-domain (ID) and out-of-domain (OOD)
[tasks. Our code will be available at https:](https://github.com/Xnhyacinth/SKIntern)
[//github.com/Xnhyacinth/SKIntern.](https://github.com/Xnhyacinth/SKIntern)
Figure 1: Knowledge utilization comparisons of SKIn_tern and other typical CoT distillation methods. (i)_
Std-CoT: SLM is fine-tuned to generate the rationale
and answer for the question (Q -> R + A). (ii) KARD:
Fine-tune the SLM to generate the rationale and answer
based on the question and the retrieved symbolic knowledge (Q + K -> R + A). (iii): CasCoD: Decompose the
single CoT learning step into two comprehensive learning steps of rationale generation (Q -> R) and rationale
utilization (Q + R -> A). (iv): SKIntern: Like human interns, SLMs gradually absorb and internalize symbolic
knowledge provided by LLMs during the progressive
fine-tuning, thereby achieving efficient (Q -> R + A)
and effective reasoning (modeling K in parameters).
need for Small Language Models (SLMs) (Xu et al.,
2024). However, these advanced reasoning and
knowledge capabilities are typically modeled in
larger models (≥13B), making it challenging to
replicate in SLMs (≤7B) (Kaplan et al., 2020).
To improve the reasoning ability of SLMs, existing works (Fu et al., 2023; Li et al., 2024b) aim
to distill the reasoning ability of LLMs into SLMs
by fine-tuning SLMs with high-quality rationales
obtained from LLMs, known as standard CoTs distillation (Std-CoT) (Magister et al., 2023). However, due to the limited parameter size of SLMs,
they cannot effectively memorize all knowledge
and model reasoning ability, making it difficult to
generalize to out-of-domain (OOD) tasks.
Recently, several methods have been proposed to
further improve the knowledge memory and reason
**1** **Introduction**
Large Language Models (LLMs) (Touvron et al.,
2023; Yang et al., 2024) have greatly excelled at
various complex reasoning tasks such as mathematical (Li et al., 2024a), symbolic (Suzgun et al.,
2022) and logical (Dave et al., 2024) reasoning,
by applying Chains of Thought (CoT) prompting
(Wei et al., 2022) and In-Context Learning (ICL)
(Ye et al., 2023; Shum et al., 2023). Nonetheless,
the high computational expenses and data privacy
issues associated with LLMs have highlighted the
-----
ing ability of SLMs. For example, as illustrated in
Figure 1, KARD (Kang et al., 2023) uses external
knowledge bases to enhance the memory capacity
of SLMs, while CasCoD (Dai et al., 2024) employs cascading decomposition to support gradual
learning. However, those methods lead to two challenges: 1) **Redundant and noisy symbolic knowledge degrades the effect of CoT distillation.** Document retrieval based on similarity frequently results in repetitive and trivial content, complicating
the model’s ability to extract key information (Liu
et al., 2023). Additionally, retrieved documents
often contain irrelevant or misleading information,
introducing noise that diminishes the model’s performance. 2) **Long input and multi-stage generation reduce the inference efficiency of CoT distillation.** Processing additional documents and
rationales imposes significant memory and computational burdens, and the complex inference process complicates deployment and implementation,
reducing overall efficiency. Therefore, a key challenge of CoT distillation is: *Can we effectively and efficiently transfer the rich knowledge and reasoning ability of LLMs through CoT distillation while minimizing computational overhead?*
To resolve the above challenge, we examine
the human learning process and draw analogies to
model fine-tuning. For instance, at first, an intern
typically needs detailed explanations, examples,
and documentation to learn new skills (Zou et al.,
2024). However, once they have internalized this
knowledge and mastered the required skills, such
extensive information is no longer needed. Therefore, we believe that if SLMs are provided with
detailed guidance and symbolic knowledge while
learning rationales from LLMs, their learning outcomes can be greatly enhanced. By gradually internalizing this knowledge into their parameters,
SLMs can independently develop efficient reasoning abilities, eliminating the need for additional
document retrieval or multi-stage generation.
To perform an efficient and effective CoT distillation, we introduce a novel approach SKIntern that
internalizes the symbolic knowledge during model
fine-tuning and enables efficient inference without
additional context. Specifically, our method comprises two key steps. Initially, for each training
instance, LLMs generate rationales and symbolic
knowledge (such as the learning summaries and
supplementary materials) and we select the most
relevant ones using cosine similarity. Secondly, we
gradually perform token-level symbolic knowledge
compression and instance-level example pruning
based on a predefined linear decay schedule. This
refined information is then used to fine-tune the
SLM to generate the rationale from the LLMs and
the answer. As the schedule progresses, both symbolic knowledge and examples are internalized into
the model’s parameters, enabling effective reasoning based solely on the questions during inference.
We evaluate SKIntern on open-source models like TinyLLaMA (Zhang et al., 2024) and
LLaMA2-7B (Touvron et al., 2023) across factual, mathematical, and general reasoning benchmarks. By internalizing symbolic knowledge into
parameters and addressing questions exclusively
during inference, SKIntern surpasses strong baselines in both ID and OOD tasks while significantly
reducing computational requirements (measured in
FLOPs). This supports our hypothesis that internalizing symbolic knowledge can significantly reduce
inference costs, thereby avoiding explicit processing during inference. Additionally, we find that the
performance of SKIntern can be further enhanced
by incorporating few-shot examples into parameters with minimal additional computation. These
improvements suggest that our method balances
efficiency and effectiveness, making it highly suitable for optimizing SLM inference performance in
cost-sensitive scenarios. In conclusion, the contributions of this paper are summarized as follows:
- We propose a novel CoT distillation method
_SKIntern designed to emulate the incremental_
learning process of interns, gradually learning
and mastering knowledge and skills.
- We progressively internalize the symbolic
knowledge generated by the LLM and the
selected examples into parameters, thereby
achieving effective and efficient inference
without the need for additional information.
- We conducted extensive experiments on 7 reasoning benchmarks. SKIntern outperforms
robust baselines by 5% in both ID and OOD
tasks, while reducing inference costs by up to
4× across a broad spectrum of SLMs.
**2** **Related Work**
**CoT Distillation transfers the reasoning ability of**
LLMs to SLMs, where reasoning ability is an emergent property that enables LLMs to excel in reasoning tasks through Chains of Thought (CoT) prompting (e.g., Let’s think step-by-step) (Wei et al., 2022;
-----
Ho et al., 2022). Recent works (Magister et al.,
2023; Fu et al., 2023) show that this CoT inference
mechanism can be used for distillation: fine-tuning
a smaller student model using CoT sequences extracted from a larger teacher model significantly
boosts performance. Further studies (Hsieh et al.,
2023; Li et al., 2024b) have proposed treating the
learning of rationales and answers as distinct optimization objectives. However, these approaches
often overlook the limited memory and reasoning
ability of SLMs, making it difficult to generalize
to OOD tasks. KARD (Kang et al., 2023) boosts
SLMs’ memory by retrieving external knowledge,
while CasCoD (Dai et al., 2024) refines rationale
perception through cascading decomposition learning. However, both methods require processing
more tokens (document retrieval and multi-stage
generation), which introduces additional complexity and uncontrollability in reasoning tasks. Our
proposed method mirrors how interns learn a new
task by first providing full symbolic knowledge and
examples and gradually internalizing them into the
parameters, achieving effective inference without
additional information.
**Prompt Compression condenses lengthy prompts,**
retaining only essential information while reducing length. This process can be divided into
three main methods: Information entropy-based
techniques (Li et al., 2023; Jiang et al., 2023)
use a small language model to calculate the selfinformation or error-proneness of tokens, removing those with lower error-proneness; Soft prompts
methods (Chevalier et al., 2023; Mu et al., 2023)
require fine-tuning LLM parameters to use learnable tokens for condensing prompts; Interpretable
summaries methods (Xu et al., 2023; Pan et al.,
2024) extract data from the LLM to train models
for generating more interpretable text summaries.
A method analogous to ours is PromptIntern (Zou
et al., 2024), which achieves prompt compression
through progressive fine-tuning. We internalize
knowledge and examples into the parameters by
gradually pruning the prompt during training, allowing the prompt to be discarded during inference.
**3** **Methodology**
In this section, we introduce the detailed procedures of SKIntern. As illustrated in Figure 2, SKIn_tern starts with the full knowledge and examples,_
and progressively prunes tokens to gradually internalize them into the model’s parameters, reducing
the prompt length and the number of computations
towards the model. Below, we first describe how
to extract CoT and symbolic knowledge from the
teacher LLM in § 3.1. Then we introduce techniques for symbolic knowledge compression and
examples pruning to convert them into parameters
in § 3.2. Finally, we present a customized progressive fine-tuning pipeline for SKIntern in § 3.3.
Note that, compared with Std-CoT, SKIntern achieves strong results without additional knowledge or examples in the input during inference, relying solely on the knowledge stored in its parameters.
**3.1** **Rationale and Knowledge Generation**
**Rationale Generation.** In our problem setup,
we assume a given training dataset $D_{\text{train}} = \{(x_i, y_i)\}_{i=1}^{n}$ for the target task, where $x_i$ is the
input sequence (question in QA) and $y_i$ is the label
(answer in QA). LLMs can generate high-quality
rationales, which is known as an emergent ability (Ho et al., 2022). Our objective is to transfer this capability to SLMs through CoT distillation. First, we leverage CoT prompting
(Wei et al., 2022) to guide the teacher LLM in generating $l$ rationales for each training data
point: $r_{ij} = \mathrm{LLM}(p_c, x_i, y_i)$, where $r_{ij}$ are the generated rationales, $j \in \{1, \ldots, l\}$, and $p_c$ is the prompt
shown in Appendix D.1. To maintain high-quality CoT data, we filter out reasoning processes
that do not yield correct results, retaining only the
distilled CoT sequences that lead to accurate outcomes as the training data (Hsieh et al., 2023).
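A minimal sketch of this collection-and-filtering step is given below; the prompt template, the answer-extraction heuristic, and the sampling temperature are illustrative assumptions (the actual prompt $p_c$ is given in Appendix D.1), and the sketch queries the GPT-3.5-turbo teacher through the OpenAI chat-completions client.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_answer(rationale: str) -> str:
    # Hypothetical heuristic: take the text after the final "The answer is".
    return rationale.rsplit("The answer is", 1)[-1].strip(" .\n")

def distill_rationales(question: str, gold_answer: str, p_c: str, l: int = 4):
    """Sample l CoT rationales from the teacher and keep only those whose
    extracted final answer matches the gold label."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": p_c.format(question=question)}],
        n=l,
        temperature=0.7,
    )
    rationales = [choice.message.content for choice in response.choices]
    return [r for r in rationales if extract_answer(r) == gold_answer]
```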
**Symbolic Knowledge Generation. Rationales of-**
fer insights into the logic behind answers, which is
crucial for SLMs to respond more precisely. However, SLMs with limited parameters may struggle
to retain all training data and complex reasoning
capabilities, which can affect the quality of rationale generation (Kang et al., 2023). Furthermore,
this single learning might lead SLMs to focus on
directly answering questions after reading, potentially impairing their ability to generalize in reasoning (Dai et al., 2024). Hence, it is imperative
to present the SLM with knowledge in the initial
stages of learning to facilitate its understanding.
We use the prompt $p_k$ (Appendix D.2) to enable the teacher LLM to generate learning
summaries $k^m$ that incorporate thinking processes,
and supplemental knowledge $k^p$, collectively referred to as symbolic knowledge $k$. Formally, the
teacher LLM generates $m$ pieces of knowledge using the
question $x_i$, the rationale $r_i$ and the answer $y_i$:
-----
**(a) SKIntern framework** **(b) Schedule-wise Fine-tune**
Figure 2: Overview of the SKIntern framework. SKIntern starts with full symbolic knowledge and examples,
and progressively prunes them to gradually internalize knowledge, reducing the prompt length and the number of
computations towards the SLM. Based on schedule S, we perform effective knowledge compression and example
pruning before fine-tuning the SLM to generate rationales and answers. Gradual fine-tuning makes SLMs internalize
knowledge and examples into parameters, thereby enhancing performance without increasing computational cost.
$$k_{ij} = \mathrm{LLM}(p_k, x_i, y_i, r_i), \quad j \in \{1, \ldots, m\}.$$
A rationale typically addresses a specific question,
whereas knowledge generally offers broader explanations, methods and outlines.
**3.2** **Progressive Internalization**
Before this work, knowledge augmentation has
been successfully applied to optimize SLM inference (Kang et al., 2023). However, these methods
necessitate processing full knowledge during both
training and inference phases, significantly increasing computation overhead. Consequently, they are
unsuitable for scenarios with limited computational
resources. In contrast, by pruning the number of
tokens gradually during the training phase, SKIntern processes only the question during inference
without requiring additional symbolic knowledge.
We implement a predefined schedule S to regulate the pruning rate of knowledge and examples.
At each step, the pruned symbolic knowledge and
few-shot examples are appended to the question,
fine-tuning the SLM over E/T epochs, where the
total training spans E epochs. As shown in Figure
2 (a), with T total schedule steps, the value of S
progressively decreases from 1 to 0. As the compression rate increases and fine-tuning progresses,
the knowledge in the input gradually reduces to 0,
leading to the internalization of knowledge into the
model’s parameters.
**Symbolic Knowledge Compression. Inspired by**
prompt compression works (Pan et al., 2024), we
aim to gradually increase the compression rate to
reduce the symbolic knowledge at the token level,
determined by $S_t$ at the $t$-th step, and internalize it into
the parameters, which can be expressed as:

$$k_i^t = \mathrm{LLMLingua2}(k_i, S_t) \quad (1)$$

where LLMLingua2 (Pan et al., 2024) is a task-agnostic prompt compression method that distills
knowledge from the LLM and fine-tunes an encoder to compress prompts without losing key information, and $k_i^t$ is the compressed symbolic knowledge at the $t$-th step, which varies with the schedule $S_t$.
**Example Pruning. During inference, incorporat-**
ing few-shot examples can significantly enhance
model performance, and incorporating these exam
-----
**4** **Experiment**
In this section, we conduct extensive experiments
and comprehensive analysis to evaluate the effectiveness of SKIntern on both in-domain (ID) and
out-of-domain (OOD) datasets.
**4.1** **Datasets**
Following Ying et al. (2024), we focus on three
practical abilities: factual, mathematical, and general reasoning. For each ability, we select a relevant public dataset as the ID dataset, integrate
its training data into the target dataset Dtrain for
mixed training, and combine its test data into the
evaluation dataset Deval. Additionally, each ability
includes OOD datasets in Deval, allowing us to evaluate the model’s ability to generalize and enhance
performance beyond the ID training environment.
**Factual Reasoning: We select the Multitask Lan-**
guage Understanding (MMLU) (Hendrycks et al.,
2021a) as the ID dataset, which includes multiplechoice questions across 57 subjects. For OOD evaluation, we use the ARC (Clark et al., 2018), comprising both Easy and Challenge segments.
**Mathematical Reasoning:** We select MetaMathQA (Yu et al., 2023) as the ID dataset, which
only has a training set that includes a high-quality
collection of mathematical reasoning questionanswer pairs, derived from GSM8K (Cobbe et al.,
2021) and MATH (Hendrycks et al., 2021b). We
use GSM8K as the ID evaluation and GSM8K+ (Li
et al., 2024a) for OOD evaluation.
**General Complex Reasoning: We chose BIG-**
Bench Hard (BBH) (Suzgun et al., 2022) as the ID
dataset, which includes 27 challenging tasks spanning arithmetic, symbolic reasoning, and more, derived from BIG-Bench (BB) (bench authors, 2023).
Most of the data consists of multiple-choice questions. For OOD evaluation, we use BB-Sub filtered
by CasCoD, and AGIEval (Zhong et al., 2023) subtasks about English multiple-choice questions.
**4.2** **Baselines**
We compare our method with the following baselines: 1) Teacher & Vanilla Student in Zero-shot
(Radford et al., 2019) and Zero-shot-CoT (Kojima
et al., 2022). 2) Fine-tuning involves fine-tuning
a model to generate answers given only questions.
The performance of the baselines above illustrates
the capability of SLMs to solve tasks using only
training data, without external guidance or additional knowledge. 3) CoT distillation includes
ples into the fine-tuning stage can further improve
the comprehension of various task inputs and outputs (Zou et al., 2024). However, directly adding
verbose minority examples to the input would increase the load on the context window and elevate
inference computation and latency. So we propose
a similarity-based instance-level pruning method
to internalize the examples into parameters. For
each training instance $(x_i, y_i)$, we begin by employing a relevance scoring function $\mathrm{sim}(\cdot, \cdot)$ to
assess the similarity between it and the other instances in the training set, and select the $K$ most
relevant examples $D_i^e$:

$$D_i^e = \{(x_j, y_j) \mid x_j \in \text{top-}K(\mathrm{sim}(x_i, x_j))\} \quad (2)$$

Inspired by compression techniques, we propose
instance-level example pruning to leverage the
performance gains while avoiding substantial additional overhead. We gradually
reduce the number of examples from $K$ to 0 over
a total of $T$ schedule steps to achieve complete
example internalization. The number of examples
$K^t$ at the $t$-th step can be expressed as:

$$K^t = \lfloor K \times S_t \rfloor \quad (3)$$

Finally, we randomly select $K^t$ examples from the
set $D_i^e$ as the examples $e_i^t$ for the $t$-th step of fine-tuning.
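The per-step pruning defined by Equations (1)–(3) can be sketched as follows; the 0-indexed linear form of $S_t$ and the generic `compress_fn` standing in for LLMLingua-2 are assumptions made for illustration.

```python
import random

def linear_schedule(t: int, T: int) -> float:
    # S_t decays linearly from 1 at the first step to 0 at the last step.
    return 1.0 - t / (T - 1)

def prune_inputs(knowledge: str, examples: list, t: int, T: int, compress_fn):
    """Compress symbolic knowledge and drop few-shot examples according to S_t.

    `compress_fn(text, keep_ratio)` is a stand-in for LLMLingua-2 (Eq. 1).
    """
    s_t = linear_schedule(t, T)
    k_t = compress_fn(knowledge, keep_ratio=s_t)            # Eq. (1): token-level compression
    K_t = int(len(examples) * s_t)                          # Eq. (3): floor(K * S_t)
    e_t = random.sample(examples, K_t) if K_t > 0 else []   # random subset of D_i^e
    return k_t, e_t
```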
**3.3** **SKIntern Pipeline**
**Fine-tuning SLMs with Rationales.** For each
schedule step $S_t$, we utilize the compressed
symbolic knowledge $k_i^t$ and pruned examples $e_i^t$ to
fine-tune the SLM $p_\theta$ with trainable parameters $\theta$
to generate the rationale $r_{ij}$ and answer $y_i$ for the
question $x_i$ as follows:

$$\mathcal{L}_t(\theta) = -\frac{1}{n \cdot l} \sum_{i=1}^{n} \sum_{j=1}^{l} \log p_\theta(r_{ij}, y_i \mid k_i^t, e_i^t, x_i) \quad (4)$$

We aim to minimize the negative log-likelihood
of the sequence comprising the rationale $r_{ij}$ and the
answer $y_i$, ensuring the rationale precedes the answer.
**Progressive Fine-tuning.** For a total of $T$ schedule
steps, we fine-tune the SLM parameters with
learning rate $\eta$ for internalization as follows:

$$\theta_{t+1} = \theta_t - \eta \nabla_\theta \mathcal{L}_t(\theta) \quad (5)$$
**Inference.** After progressive fine-tuning, we use
the updated model parameters, denoted $\theta_T$, to
conduct inference without the need for additional
knowledge or examples. Consequently, the model
processes only the question and performs efficient
and effective inference.
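Putting the schedule, pruning, and fine-tuning together, the whole pipeline reduces to a loop over schedule steps; in the sketch below, `fit_fn` is a placeholder for one LoRA fine-tuning run that minimizes Equation (4), the prompt layout is an assumption, and `prune_inputs` is the helper from the previous sketch.

```python
def skintern_train(model, train_set, T: int, total_epochs: int, compress_fn, fit_fn):
    """Progressive fine-tuning: T schedule steps, each trained for E/T epochs."""
    epochs_per_step = total_epochs // T  # e.g. 12 epochs with T = 4 -> 3 epochs per step
    for t in range(T):
        step_data = []
        for ex in train_set:
            k_t, e_t = prune_inputs(ex["knowledge"], ex["examples"], t, T, compress_fn)
            prompt = "\n\n".join(e_t + [k_t, ex["question"]])
            step_data.append({"input": prompt,
                              "target": ex["rationale"] + "\n" + ex["answer"]})
        model = fit_fn(model, step_data, epochs=epochs_per_step)  # updates theta via Eq. (4)-(5)
    return model  # at inference time, only the question is needed
```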
-----
| Methods | BBH-test (ID) | GSM8K (ID) | BB-sub (OOD) | AGIEval (OOD) | GSM8K-PLUS (OOD) | ARC-E (OOD) | ARC-C (OOD) | Avg | Rel. FLOPs |
|---|---|---|---|---|---|---|---|---|---|
| *# Closed-source model and Open-source models (Zero-shot-CoT)* | | | | | | | | | |
| GPT-3.5-turbo (Teacher) | 43.2 | 72.6 | 44.0 | 50.5 | 55.9 | 91.8 | 84.1 | 63.2 | - |
| LLaMA-3-70B-Instruct | 62.6 | 89.2 | 51.0 | 66.3 | 72.9 | 97.6 | 93.2 | 76.1 | - |
| *# TinyLLaMA-1.1B based* | | | | | | | | | |
| Zero-shot (Radford et al., 2019) | 14.0 | 2.0 | 17.7 | 17.8 | 1.5 | 19.4 | 15.0 | 12.5 | ×1.0 |
| Zero-shot-CoT (Kojima et al., 2022) | 13.5 | 1.4 | 17.7 | 10.4 | 1.3 | 16.0 | 13.4 | 10.5 | ×1.0 |
| Fine-tuning | 48.8 | 3.5 | 26.0 | 21.2 | 3.7 | 28.0 | 24.6 | 22.3 | ×0.9 |
| Knowledge-Augmented Fine-tuning | 49.3 | 3.7 | 27.4 | 21.9 | 3.3 | 29.4 | 25.3 | 22.9 | ×3.7 |
| Std-CoT (Magister et al., 2023) | 47.8±.43 | 7.9±.27 | 27.6±.31 | 21.5±.56 | 4.3±.62 | 28.2±.69 | 25.0±.48 | 23.2 | ×1.0 |
| MT-CoT (Li et al., 2024b) | 44.1±.78 | 4.1±.35 | 25.0±.45 | 21.4±.64 | 2.8±.83 | 33.5±.52 | 25.1±.59 | 22.3 | **×0.9** |
| Step-by-step (Hsieh et al., 2023) | 42.4±.56 | 4.3±.47 | 26.2±.38 | 21.1±.72 | 3.1±.54 | 29.6±.61 | 25.9±.66 | 21.8 | **×0.9** |
| KARD (BM25) (Kang et al., 2023) | 49.5±.61 | 7.6±.40 | 26.9±.43 | 20.2±.48 | 4.0±.77 | 28.2±.85 | 26.5±.91 | 23.3 | ×3.9 |
| CasCoD (Dai et al., 2024) | 48.1±.49 | 6.8±.39 | 23.1±.64 | 19.4±.73 | 4.8±.48 | 29.0±.63 | 27.1±.42 | 22.6 | ×3.0 |
| **SKIntern (ours)** | **55.5**±.71 | **8.1**±.65 | **31.4**±.44 | **24.4**±.90 | **5.3**±.68 | **36.8**±.89 | **31.2**±.32 | **27.5** | ×1.0 |
| *# LLaMA2-7B based* | | | | | | | | | |
| Zero-shot (Radford et al., 2019) | 17.3 | 2.7 | 18.6 | 19.2 | 2.4 | 25.2 | 20.6 | 17.0 | ×6.4 |
| Zero-shot-CoT (Kojima et al., 2022) | 13.5 | 3.1 | 12.2 | 10.3 | 2.1 | 29.1 | 20.2 | 12.9 | ×6.4 |
| Fine-tuning | 57.8 | 5.8 | 33.3 | 31.0 | 5.8 | 73.3 | 56.3 | 37.6 | ×5.6 |
| Knowledge-Augmented Fine-tuning | 58.7 | 6.3 | 34.2 | 31.8 | 6.1 | 75.1 | 57.0 | 38.5 | ×23.7 |
| Std-CoT (Magister et al., 2023) | 58.1±.74 | 20.5±.71 | 30.7±.48 | 23.6±.65 | 12.0±.26 | 73.4±.81 | 55.9±.78 | 39.2 | ×6.4 |
| MT-CoT (Li et al., 2024b) | 45.6±.43 | 6.8±.59 | 27.8±.75 | 31.7±.89 | 6.0±.72 | 74.2±.46 | 57.6±.38 | 35.7 | ×5.7 |
| Step-by-step (Hsieh et al., 2023) | 54.3±.37 | 8.4±.93 | 32.9±.55 | 32.4±.64 | 5.9±.57 | 77.7±.35 | 61.8±.87 | 39.1 | **×5.6** |
| KARD (BM25) (Kang et al., 2023) | 58.9±.53 | 27.5±.71 | 30.3±.45 | 18.9±.38 | 19.1±.73 | 73.7±.41 | 57.0±.82 | 40.8 | ×24.5 |
| CasCoD (Dai et al., 2024) | 58.9±.59 | 29.2±.75 | 32.2±.36 | 28.8±.29 | **21.4**±.79 | 74.7±.91 | 57.3±.62 | 43.2 | ×19.0 |
| **SKIntern (ours)** | **69.3**±.58 | **33.9**±.71 | **37.2**±.51 | **31.3**±.49 | 21.2±.83 | **78.1**±.24 | **62.1**±.67 | **47.6** | ×6.4 |
Table 1: Performance (%) of LLaMA2-7B (Touvron et al., 2023) and TinyLLaMA-1.1B (Zhang et al., 2024) with
different methods across seven selected datasets. Bold indicates the best in each setting. We report the mean and
standard deviation of accuracy with 3 different runs for CoT distillation methods. Relative FLOPs cost is calculated
relative to the TinyLLaMA with Zero-shot. We calculate the FLOPs required on BBH-test for each method.
**Std-CoT** (Magister et al., 2023), the standard CoT distillation method, which directly fine-tunes the student model on CoT data; **Step-by-step** (Hsieh et al., 2023), a multi-task method
that extracts rationales and answers separately; **MT-CoT** (Li et al., 2024b), another multi-task method
that optimizes both answer prediction and CoT
generation simultaneously; and **CasCoD** (Dai et al.,
2024), which decomposes the traditional single-step learning process into two cascaded learning steps. 4)
**Knowledge-Augmentation** involves attaching retrieved passages to the question during both training and inference. This includes **Knowledge-Augmented Fine-tuning**, which focuses on generating
answers only, and **KARD** (Kang et al., 2023), which emphasizes learning to generate rationales.
**4.3** **Implementations**
For all experiments, we use the LLaMA3-8B,
LLaMA2-7B (Touvron et al., 2023), Qwen2 (0.5B,
1.5B, 7B) (Yang et al., 2024) and TinyLLaMA-1.1B (Zhang et al., 2024) as the student SLMs. We
query the teacher model GPT-3.5-turbo to annotate
the CoTs data with the manual prompt (Suzgun
et al., 2022). Unless otherwise specified, T is set
Figure 3: Accuracy (%) against FLOPs for varying
model sizes. FLOPs calculations are based on processing all examples from the same task during inference.
to 4 (§4.6), and total epochs E is set to 12.
We employ LoRA (Hu et al., 2022) for
parameter-efficient fine-tuning of the student SLMs.
All experiments are conducted on 2 A100 GPUs
with 80GB. During the inference stage, we utilize
vLLM (Kwon et al., 2023) to accelerate inference.
Detailed information about training, inference and
-----
Figure 4: Efficiency on training data and model size. The backbone model for the data size variation is Qwen2-7B.
hyperparameters is provided in Appendix A.
**4.4** **Main Results**
We report the performance and inference costs of
_SKIntern and baselines in Table 1 and Figure 3_
(More results are shown in Appendix B) and find:
***SKIntern* outperforms baselines with fewer FLOPs.** As shown in Figure 3, when FLOPs-matched (in a vertical comparison), SKIntern outperforms KARD, which retrieves documents to augment reasoning, and CasCoD, which enhances reasoning via cascaded decomposition. Specifically,
from Table 1, it is evident that SKIntern shows
an average improvement of 8.4% with LLaMA27B and 5.9% with TinyLLaMA-1.1B, respectively.
This highlights the utility of dynamic pruning and
gradual internalization of symbolic knowledge.
***SKIntern* is up to 4× more efficient than baselines.** Table 1 demonstrates that SKIntern uses
2-4× fewer FLOPs than state-of-the-art KARD and
CasCoD. Although other CoT distillation methods can achieve similar computational savings,
their performance is significantly worse than SKIn_tern (≥_ 8%). Specifically, their performance is
10% lower on the mathematical reasoning dataset
GSM8K and 15% lower on the complex reasoning
dataset BBH. Furthermore, SKIntern achieves comparable performance with fewer FLOPs, as shown
in Figure 3 (in a horizontal comparison).
**4.5** **Efficiency on Dataset and Model Sizes**
To evaluate the efficiency of SKIntern in terms of
training data and model size, we measured test accuracy using Qwen2 (Yang et al., 2024) models
across various methods while varying the amount
of training data and model size. As shown at the
bottom of Figure 4, SKIntern successfully transfers the reasoning ability of the teacher LLM into
the parameters, even with minimal training data.
As the amount of training data increases, SKIntern
consistently outperforms other baselines, with the
improvement magnitude growing as well. This suggests that SKIntern performs optimally across
**different data volumes and achieves superior**
**reasoning ability distillation. Even with a lim-**
ited dataset, SKIntern outperforms other methods,
demonstrating robustness and sample efficiency.
Regarding model size efficiency, as shown at
the top of Figure 4, SKIntern outperforms other
baselines across various model scales. Notably,
_SKIntern enables Qwen2-7B to surpass the teacher_
model, GPT-3.5 Turbo, in both ID and OOD tasks,
despite having fewer parameters. SKIntern offers
substantial advantages for models of varying sizes,
consistently outperforming other methods. These
results underscore the practical benefits of SKIn_tern in resource-limited environments, as it reduces_
the computational demands for SLMs while delivering performance on par with or surpassing larger
models. This further demonstrates that SLMs
**(0.5B) struggle to fully leverage CoT reasoning**
**generated by LLMs, highlighting the need for**
**our SKIntern approach.**
**4.6** **Analysis on Schedule**
**Schedule Pattern. We examine the effectiveness**
of different schedule patterns during the progressive fine-tuning process, focusing on their impact
on reasoning performance. The patterns tested include exponential, inverse exponential, and linear
decay. As shown in Table 2, the linear decay con
-----
| SKIntern | BBH | BB | AGIEval | GSM8K+ | ARC-E |
|---|---|---|---|---|---|
| *Pattern of Schedule S* | | | | | |
| exp | 64.8 | 36.2 | 30.0 | 16.3 | 76.0 |
| exp⁻¹ | 59.5 | 31.2 | 28.8 | 15.4 | 73.9 |
| linear | **69.3** | **37.2** | **31.3** | **21.2** | **78.1** |
| *Step of Schedule T* | | | | | |
| T = 3 | 60.2 | 33.4 | 29.1 | 15.5 | 74.8 |
| T = 4 | **69.3** | **37.2** | **31.3** | **21.2** | **78.1** |
| T = 7 | 65.7 | 35.0 | 30.0 | 20.9 | 76.6 |
Table 2: Comparison of schedule patterns and steps of
_SKIntern. The backbone model is LLaMA2-7B._
sistently delivers the highest performance, showcasing superior parsing efficiency and language
understanding. In contrast, the inverse exponential
schedule exhibits the lowest effectiveness, while
the exponential decay offers moderate performance
but remains inferior to the linear schedule. These
findings indicate that a gradual, steady reduction
**is more advantageous than a more aggressive**
**approach. Progressive fine-tuning with a linear de-**
cay schedule appears to yield optimal performance
compared to other patterns.
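For reference, the three patterns can be parameterized as below; the exact exponential forms are not specified in the text, so these expressions are illustrative assumptions that only capture the qualitative shapes compared in Table 2.

```python
import math

def schedule(t: int, T: int, pattern: str = "linear") -> float:
    """Illustrative decay shapes for S_t over steps t = 0, ..., T-1 (forms assumed)."""
    frac = t / (T - 1)
    if pattern == "linear":
        return 1.0 - frac                        # steady decay (best in Table 2)
    if pattern == "exp":
        return math.exp(-3.0 * frac)             # aggressive early decay
    if pattern == "exp_inv":
        return 1.0 - math.exp(3.0 * (frac - 1))  # slow early, sharp late decay
    raise ValueError(f"unknown pattern: {pattern}")
```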
**Schedule Setup. We explore the optimal schedule**
step T for linear decay during progressive finetuning. With the total number of epochs set to
12, we chose the common divisors of 12 for linear decay, where T corresponds to the decay step
plus 1. As seen in Table 2, T = 4 offers the
best performance, while T = 7 shows slightly
lower results, and T = 3 yields the poorest performance. This suggests that overly frequent schedule changes hinder sufficient learning in the initial
stages, whereas sparse schedules cause large, disruptive jumps, complicating smooth progression
and increasing learning difficulty. Therefore, **selecting an appropriate schedule step is crucial for effectively internalizing knowledge and enhancing reasoning abilities in SLMs.**
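To make the three tested patterns concrete, the sketch below shows one plausible way the fraction of retained symbolic knowledge and examples could decay across the schedule steps; the functional forms and the decay constant are illustrative assumptions of ours, not the paper's definitions.

```python
import math

def retention_ratio(step: int, total_steps: int, pattern: str = "linear") -> float:
    """Illustrative fraction of symbolic knowledge/examples kept at a schedule step.

    Decays from 1.0 (all knowledge kept) at step 0 to ~0.0 (fully internalized)
    at the final step, following one of the tested patterns. The factor 5.0 is
    an arbitrary choice for the exponential shapes.
    """
    progress = step / max(total_steps - 1, 1)        # 0.0 -> 1.0 across the schedule
    if pattern == "linear":                          # gradual, steady reduction
        return 1.0 - progress
    if pattern == "exp":                             # aggressive early reduction
        return math.exp(-5.0 * progress)
    if pattern == "exp_inv":                         # slow start, sharp drop at the end
        return 1.0 - math.exp(5.0 * (progress - 1.0))
    raise ValueError(f"unknown pattern: {pattern}")

# Example: T = 4 schedule steps spread over 12 training epochs (3 epochs per step).
for s in range(4):
    print(s, round(retention_ratio(s, 4, "linear"), 2))
```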
**4.7** **Ablation Studies**
To demonstrate the effectiveness of SKIntern, we
conducted ablation studies using LLaMA2-7B by
creating three variants: (1) w/o k[m], which removes
the learning summary during fine-tuning; (2) w/o
**_k[p], where supplemental knowledge is excluded;_**
and (3) w/o e, where example pruning is omitted.
As shown in Table 3, the removal of any of these
components results in reduced performance, highlighting the critical role of internalizing both knowledge and examples in enhancing SLMs’ complex
reasoning abilities during progressive fine-tuning.
| Methods | BBH | BB | AGIEval | GSM8K+ | ARC-E |
|---|---|---|---|---|---|
| **SKIntern** | **69.3** | **37.2** | **31.3** | **21.2** | **78.1** |
| w/o k[m] | 59.8 | 30.8 | 28.7 | 15.3 | 74.1 |
| w/o k[p] | 62.3 | 32.1 | 29.5 | 16.2 | 75.7 |
| w/o e | 61.9 | 34.1 | 29.4 | 18.1 | 74.6 |

Table 3: Ablation studies on different components.
Figure 5: Ablation studies of k on vanilla methods.
Additionally, we investigate the effectiveness of
the generated symbolic knowledge (see Figure 5).
Incorporating learning summaries k[m] and supplementary knowledge k[p] into the original zero-shot,
zero-shot-cot, and few-shot-cot significantly enhances performance. Remarkably, this improvement occurs without fine-tuning, demonstrating the
utility and generalization of symbolic knowledge
in augmenting the model’s inference capabilities.
**5** **Conclusion**
In this paper, we introduce SKIntern, a novel CoT
distillation method designed to internalize symbolic knowledge and rich examples into model parameters, thereby enhancing the ability of SLMs
to tackle complex reasoning tasks. Through a systematic schedule, the symbolic knowledge generated by the LLM, including learning summaries and supplementary knowledge, is compressed, and the selected examples are refined. These elements are then used
to fine-tune the SLM, enabling it to produce coherent rationales and accurate answers. We implement
a customized progressive fine-tuning pipeline to
accommodate various schedule steps and training
epochs. Extensive experiments demonstrate that
our method not only improves reasoning performance on both in-domain (ID) and out-of-domain
(OOD) tasks but also significantly accelerates inference and reduces computational resource usage.
**Limitations**
**Method We have demonstrated through SKIntern**
that the performance of SLM on complex inference
tasks can be significantly improved while greatly
reducing computational overhead. However, it is
important to acknowledge the limitations of our
research. The effectiveness of our knowledge enhancement largely depends on the incremental finetuning required to internalize the original symbolic
knowledge and examples, which increases the complexity and cost of training. Additionally, using an LLM to generate supplementary symbolic knowledge necessitates further monetary expenditure due to API calls.
**Task While our current tests encompass factual**
knowledge, mathematics, and complex reasoning,
the method’s efficacy for different tasks, such as
various coding exercises and extended text tasks,
requires further analysis and experimentation. Additionally, further investigation is needed to determine which types of symbolic knowledge and task
examples are more easily learned and internalized.
**Large Language Models Regarding the experi-**
ments, given our limited computing and financial
budgets, we chose GPT-3.5-Turbo as the teacher.
Using GPT-4 would likely better verify the effectiveness of our method, SKIntern. Additionally,
our aim to enhance the complex reasoning ability
of SLMs restricted our choice to mainstream models, such as Llama2, Llama3, and Qwen2, thereby
excluding other excellent models like Phi3 and
DeepSeek. However, exploring larger LMs such
as 13B and 72B with SKIntern could be of great
interest, presenting a promising direction for future research. Experimental results indicate that
enhancing powerful models like Llama3-8B and Qwen2-7B with SKIntern allows them to surpass GPT-3.5-Turbo and match Llama3-70B.
**Ethical Considerations**
In this paper, we proposed a novel knowledge enhancement method aimed at leveraging the knowledge of LLMs. However, LLMs may generate
inappropriate or discriminatory knowledge. Beyond this risk, our approach does not introduce additional ethical concerns. The datasets we used are public, and there are no privacy issues.
**References**
[BIG bench authors. 2023. Beyond the imitation game:](https://openreview.net/forum?id=uyTL5Bvosj)
[Quantifying and extrapolating the capabilities of lan-](https://openreview.net/forum?id=uyTL5Bvosj)
[guage models. Transactions on Machine Learning](https://openreview.net/forum?id=uyTL5Bvosj)
_Research._
Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and
[Danqi Chen. 2023. Adapting language models to](https://doi.org/10.18653/v1/2023.emnlp-main.232)
[compress contexts. In Proceedings of the 2023 Con-](https://doi.org/10.18653/v1/2023.emnlp-main.232)
_ference on Empirical Methods in Natural Language_
_Processing, pages 3829–3846, Singapore. Associa-_
tion for Computational Linguistics.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,
Ashish Sabharwal, Carissa Schoenick, and Oyvind
[Tafjord. 2018. Think you have solved question an-](https://api.semanticscholar.org/CorpusID:3922816)
[swering? try arc, the ai2 reasoning challenge. ArXiv,](https://api.semanticscholar.org/CorpusID:3922816)
abs/1803.05457.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Chengwei Dai, Kun Li, Wei Zhou, and Songlin Hu.
2024. Improve student’s reasoning generalizability through cascading decomposed cots distillation.
_arXiv preprint arXiv:2405.19842._
Neisarg Dave, Daniel Kifer, C. Lee Giles, and Ankur Ar[jun Mali. 2024. Investigating symbolic capabilities](https://api.semanticscholar.org/CorpusID:269983499)
[of large language models. ArXiv, abs/2405.13209.](https://api.semanticscholar.org/CorpusID:269983499)
Yao Fu, Hao-Chun Peng, Litu Ou, Ashish Sabharwal,
[and Tushar Khot. 2023. Specializing smaller lan-](https://api.semanticscholar.org/CorpusID:256390607)
[guage models towards multi-step reasoning. ArXiv,](https://api.semanticscholar.org/CorpusID:256390607)
abs/2301.12726.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
[2021a. Measuring massive multitask language under-](https://openreview.net/forum?id=d7KBjmI3GmQ)
[standing. In International Conference on Learning](https://openreview.net/forum?id=d7KBjmI3GmQ)
_Representations._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021b. Measuring mathematical
problem solving with the math dataset. NeurIPS.
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2022.
[Large language models are reasoning teachers. In](https://api.semanticscholar.org/CorpusID:254877399)
_Annual Meeting of the Association for Computational_
_Linguistics._
Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh,
Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner,
Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister.
2023. Distilling step-by-step! outperforming larger
language models with less training data and smaller
model sizes. arXiv preprint arXiv:2305.02301.
Edward J Hu, yelong shen, Phillip Wallis, Zeyuan AllenZhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu
[Chen. 2022. LoRA: Low-rank adaptation of large](https://openreview.net/forum?id=nZeVKeeFYf9)
[language models. In International Conference on](https://openreview.net/forum?id=nZeVKeeFYf9)
_Learning Representations._
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin,
[and Edouard Grave. 2021. Unsupervised dense infor-](https://doi.org/10.48550/ARXIV.2112.09118)
[mation retrieval with contrastive learning.](https://doi.org/10.48550/ARXIV.2112.09118)
Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng
Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023.
[Longllmlingua: Accelerating and enhancing llms](https://arxiv.org/abs/2310.06839)
[in long context scenarios via prompt compression.](https://arxiv.org/abs/2310.06839)
_Preprint, arXiv:2310.06839._
Minki Kang, Seanie Lee, Jinheon Baek, Kenji
Kawaguchi, and Sung Ju Hwang. 2023. Knowledgeaugmented reasoning distillation for small language
models in knowledge-intensive tasks. In Advances in
_Neural Information Processing Systems 37: Annual_
_Conference on Neural Information Processing Sys-_
_tems 2023, NeurIPS 2023, December 10-16, 2023,_
_New Orleans._
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B
Brown, Benjamin Chess, Rewon Child, Scott Gray,
Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling laws for neural language models. _arXiv_
_preprint arXiv:2001.08361._
Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick
Lewis, Ledell Yu Wu, Sergey Edunov, Danqi
[Chen, and Wen tau Yih. 2020. Dense passage re-](https://api.semanticscholar.org/CorpusID:215737187)
[trieval for open-domain question answering. ArXiv,](https://api.semanticscholar.org/CorpusID:215737187)
abs/2004.04906.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
_neural information processing systems, 35:22199–_
22213.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model
serving with pagedattention. In Proceedings of the
_ACM SIGOPS 29th Symposium on Operating Systems_
_Principles._
Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng
[Kong, and Wei Bi. 2024a. Gsm-plus: A compre-](https://api.semanticscholar.org/CorpusID:268063753)
[hensive benchmark for evaluating the robustness](https://api.semanticscholar.org/CorpusID:268063753)
[of llms as mathematical problem solvers.](https://api.semanticscholar.org/CorpusID:268063753) _ArXiv,_
abs/2402.19255.
Shiyang Li, Jianshu Chen, yelong shen, Zhiyu Chen,
Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian,
Baolin Peng, Yi Mao, Wenhu Chen, and Xifeng Yan.
[2024b. Explanations from large language models](https://openreview.net/forum?id=rH8ZUcfL9r)
[make small reasoners better. In 2nd Workshop on](https://openreview.net/forum?id=rH8ZUcfL9r)
_Sustainable AI._
Yucheng Li, Bo Dong, Chenghua Lin, and Frank
[Guerin. 2023. Compressing context to enhance infer-](https://arxiv.org/abs/2310.06201)
[ence efficiency of large language models. Preprint,](https://arxiv.org/abs/2310.06201)
arXiv:2310.06201.
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy
[Liang. 2023. Lost in the middle: How language mod-](https://api.semanticscholar.org/CorpusID:259360665)
[els use long contexts. Transactions of the Association](https://api.semanticscholar.org/CorpusID:259360665)
_for Computational Linguistics, 12:157–173._
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason. In Pro-](https://doi.org/10.18653/v1/2023.acl-short.151)
_ceedings of the 61st Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 2: Short_
_Papers), pages 1773–1781, Toronto, Canada. Associ-_
ation for Computational Linguistics.
Jesse Mu, Xiang Lisa Li, and Noah D. Goodman.
[2023. Learning to compress prompts with gist to-](https://api.semanticscholar.org/CorpusID:258179012)
[kens. ArXiv, abs/2304.08467.](https://api.semanticscholar.org/CorpusID:258179012)
Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Rühle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, and Dongmei Zhang. 2024. [Llmlingua-2: Data distillation for efficient and faithful task-agnostic prompt compression.](https://api.semanticscholar.org/CorpusID:268531237) In _Annual Meeting of the Association for Computational Linguistics_.
Adam Paszke, Sam Gross, Francisco Massa, Adam
Lerer, James Bradbury, Gregory Chanan, Trevor
Killeen, Zeming Lin, Natalia Gimelshein, Luca
Antiga, Alban Desmaison, Andreas Köpf, Edward
Yang, Zach DeVito, Martin Raison, Alykhan Tejani,
Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: an
_imperative style, high-performance deep learning li-_
_brary. Curran Associates Inc., Red Hook, NY, USA._
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2019. Language
models are unsupervised multitask learners. OpenAI
_blog, 1(8):9._
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and
[Percy Liang. 2016. SQuAD: 100,000+ questions for](https://doi.org/10.18653/v1/D16-1264)
[machine comprehension of text. In Proceedings of](https://doi.org/10.18653/v1/D16-1264)
_the 2016 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 2383–2392, Austin,_
Texas. Association for Computational Linguistics.
Stephen Robertson and Hugo Zaragoza. 2009. [The](https://doi.org/10.1561/1500000019)
[probabilistic relevance framework: Bm25 and be-](https://doi.org/10.1561/1500000019)
[yond. Found. Trends Inf. Retr., 3(4):333–389.](https://doi.org/10.1561/1500000019)
KaShun Shum, Shizhe Diao, and Tong Zhang. 2023.
Automatic prompt augmentation and selection with
chain-of-thought from labeled data. arXiv preprint
_arXiv:2302.12822._
Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc V. Le, Ed Huai hsin
[Chi, Denny Zhou, and Jason Wei. 2022. Challenging](https://api.semanticscholar.org/CorpusID:252917648)
[big-bench tasks and whether chain-of-thought can](https://api.semanticscholar.org/CorpusID:252917648)
[solve them. In Annual Meeting of the Association for](https://api.semanticscholar.org/CorpusID:252917648)
_Computational Linguistics._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural
_information processing systems, 35:24824–24837._
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen,
Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu,
Teven Le Scao, Sylvain Gugger, Mariama Drame,
[Quentin Lhoest, and Alexander Rush. 2020. Trans-](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
[formers: State-of-the-art natural language processing.](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing: System_
_Demonstrations, pages 38–45, Online. Association_
for Computational Linguistics.
Fangyuan Xu, Weijia Shi, and Eunsol Choi. 2023.
[Recomp: Improving retrieval-augmented lms with](https://api.semanticscholar.org/CorpusID:263830734)
[compression and selective augmentation.](https://api.semanticscholar.org/CorpusID:263830734) _ArXiv,_
abs/2310.04408.
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen,
Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao,
and Tianyi Zhou. 2024. A survey on knowledge distillation of large language models. arXiv preprint
_arXiv:2402.13116._
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng,
Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan
Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian
Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin
Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang
Lin, Kai Dang, Keming Lu, Ke-Yang Chen, Kexin
Yang, Mei Li, Min Xue, Na Ni, Pei Zhang, Peng
Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin,
Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu,
Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng,
Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin
Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang
Zhang, Yunyang Wan, Yunfei Chu, Zeyu Cui, Zhenru
[Zhang, and Zhi-Wei Fan. 2024. Qwen2 technical](https://api.semanticscholar.org/CorpusID:271212307)
[report. ArXiv.](https://api.semanticscholar.org/CorpusID:271212307)
Jiacheng Ye, Zhiyong Wu, Jiangtao Feng, Tao Yu, and
Lingpeng Kong. 2023. Compositional exemplars for
in-context learning. In International Conference on
_Machine Learning, pages 39818–39833. PMLR._
Jiahao Ying, Mingbao Lin, Yixin Cao, Wei Tang,
Bo Wang, Qianru Sun, Xuanjing Huang, and
Shuicheng Yan. 2024. Llms-as-instructors: Learning
from errors toward automating model improvement.
_arXiv preprint arXiv:2407.00497._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023.
Metamath: Bootstrap your own mathematical questions for large language models. _arXiv preprint_
_arXiv:2309.12284._
Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and
[Wei Lu. 2024. Tinyllama: An open-source small](https://api.semanticscholar.org/CorpusID:266755802)
[language model. ArXiv, abs/2401.02385.](https://api.semanticscholar.org/CorpusID:266755802)
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan
Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma.
[2024. Llamafactory: Unified efficient fine-tuning](http://arxiv.org/abs/2403.13372)
[of 100+ language models. In Proceedings of the](http://arxiv.org/abs/2403.13372)
_62nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 3: System Demonstra-_
_tions), Bangkok, Thailand. Association for Computa-_
tional Linguistics.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang,
Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen,
and Nan Duan. 2023. Agieval: A human-centric
benchmark for evaluating foundation models. arXiv
_preprint arXiv:2304.06364._
Jiaru Zou, Meng Zhou, Tao Li, Shi Han, and Dong[mei Zhang. 2024. Promptintern: Saving inference](https://api.semanticscholar.org/CorpusID:270878548)
[costs by internalizing recurrent prompt during large](https://api.semanticscholar.org/CorpusID:270878548)
[language model fine-tuning. ArXiv, abs/2407.02211.](https://api.semanticscholar.org/CorpusID:270878548)
**A** **Experimental Settings**
**A.1** **Datasets**
For each ability, we select a relevant public dataset,
integrate its training data into the target dataset
_Dtrain for mixed training, and combine its test data_
into the evaluation dataset Deval. Additionally, each
ability includes an OOD dataset in Deval. This
setup allows us to evaluate the model’s ability to
generalize and enhance performance beyond the
ID training environment.
Table 4 shows the statistical details of the selected datasets.
For MMLU (Hendrycks et al., 2021a), we adhere
to previous prompt styles (Suzgun et al., 2022),
instructing the teacher model (e.g., GPT-3.5-Turbo)
to generate answers and Chains of Thought (CoT).
By excluding samples with incorrect answers, we
ultimately obtained a total of 1,556 samples. For
MetaMathQA (Yu et al., 2023), we acquired 3,500
samples through random sampling. For BB (bench
authors, 2023), we followed the CasCoD (Dai et al.,
2024) methodology by filtering the original dataset
for tasks containing the keyword "multiple choice"
and randomly extracting up to 100 examples for
each task. Note that tasks in BBH do not involve
BB-sub.
| Abilities | Task | # Train | # Train (Filtered) | # Test |
|---|---|---|---|---|
| Factuality | ID: MMLU | Dev + Val: 1,815 | 1,555 | - |
| Factuality | OOD: ARC-C | - | - | 1,172 |
| Factuality | OOD: ARC-E | - | - | 2,376 |
| Math | ID: MetaMathQA | 395,000 | 3,500 | - |
| Math | OOD: GSM8K | - | - | 1,319 |
| Math | OOD: GSM8K-PLUS | - | - | 1,400 |
| Reasoning | ID: BBH | 6,511 | 3,805 | 1,304 |
| Reasoning | OOD: BB-sub | - | - | 5,384 |
| Reasoning | OOD: AGIEval | - | - | 2,546 |
| **All** | **Sum** | - | 8,860 | 15,501 |

Table 4: Statistical details of the selected datasets. Since MMLU lacks official training data, we combined the development and validation datasets to form a training set. To maintain sample balance, we matched the size of MetaMathQA to that of BBH. We obtained balanced samples from two dataset augmentation modes, MATH_Aug and GSM_Aug, resulting in a total of 3,500 samples.
**A.3** **Implementations**
Our implementations are based on huggingface
transformers v4.42.1 (Wolf et al., 2020) using PyTorch v2.3.1 (Paszke et al., 2019) and LlamaFactory (Zheng et al., 2024).
For CasCoD (Dai et al., 2024), we adhere to
the optimal settings recommended by the authors,
specifically setting α to 0.3. For KARD (Kang
et al., 2023), we employ the BM25 configuration
(Robertson and Zaragoza, 2009), a sparse retrieval
method based on word frequency, and retrieve three
documents per question. Wikipedia serves as the
external knowledge base for all datasets. For all
retrievers used in SKIntern, including BM25, Contriever (Izacard et al., 2021), and DPR (Karpukhin
et al., 2020), we utilize the Pyserini[1] library, which
offers a reproducible information retrieval framework.
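As a rough illustration of the KARD retrieval setup described above, the sketch below retrieves three passages per question with Pyserini's BM25 searcher; the prebuilt index name is a placeholder assumption, since the paper only states that Wikipedia serves as the knowledge base.

```python
# Minimal sketch of BM25 retrieval with Pyserini for the KARD baseline setup
# (three Wikipedia passages per question). The prebuilt index name below is an
# assumed placeholder, not a setting reported in the paper.
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher.from_prebuilt_index("wikipedia-dpr")  # assumed index name

def retrieve_passages(question: str, k: int = 3) -> list[str]:
    hits = searcher.search(question, k=k)
    # Each hit exposes a document id and a BM25 score; the raw passage text is
    # looked up from the index by id.
    return [searcher.doc(h.docid).raw() for h in hits]

passages = retrieve_passages("How many miles is Bill from his starting point?")
```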
**A.4** **Symbolic Knowledge Collection**
For specialized knowledge collection, using 2-shot
hand-written examples, the teacher model is configured with a temperature of 0.8 and a maximum
length of 1024 tokens. It generates specialized
knowledge corresponding to each incorrect example produced by the student SLMs. The prompt
can be found in the Appendix D.2.
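A minimal sketch of how this collection step might be scripted with the OpenAI client is shown below; `PROMPT_D2` stands in for the full Appendix D.2 prompt, and the function name is our own.

```python
# Sketch of specialized-knowledge collection: for each question the student SLM
# got wrong, query the teacher (GPT-3.5-Turbo) with the Appendix D.2 prompt at
# temperature 0.8 and a 1024-token limit. PROMPT_D2 is a placeholder for the
# full prompt text.
from openai import OpenAI

client = OpenAI()
PROMPT_D2 = "As an excellent educational teacher, ...\nQuestion: {question}\nAnswer: {answer}\n..."

def collect_specialized_knowledge(question: str, gold_answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": PROMPT_D2.format(question=question, answer=gold_answer)}],
        temperature=0.8,
        max_tokens=1024,
    )
    return response.choices[0].message.content
```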
**B** **Extended Results**
In Table 7, we present the results of various models discussed in this paper, including LLaMA3-8B, Qwen2-0.5B, 1.5B, and 7B, utilizing different baseline methods along with the outcomes of SKIntern.

[1] https://github.com/castorini/pyserini
During the evaluation stage, we use Exact Match
(Rajpurkar et al., 2016) as the evaluation metric.
The answer generation between the involved
models is conducted in a zero-shot setting, with
all models set to a temperature of 0.8 and a maximum token length of 1024. The prompt can be
found in the Appendix D.1.
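For concreteness, a minimal sketch of SQuAD-style Exact Match scoring is given below; the answer-extraction pattern keyed on "the answer is" is an assumption based on the prompt format, not the authors' exact parsing code.

```python
import re
import string

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation/articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def extract_answer(generation: str) -> str:
    # Assumed convention: the prompt asks models to end with "Therefore, the answer is ...".
    match = re.search(r"the answer is\s*(.*)", generation, flags=re.IGNORECASE)
    return match.group(1) if match else generation

def exact_match(generation: str, gold: str) -> bool:
    return normalize(extract_answer(generation)) == normalize(gold)

print(exact_match("Let's think step by step. ... Therefore, the answer is 64.", "64"))  # True
```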
**A.2** **Hyperparameter**
The complete set of stable hyperparameters used
for both baseline models and the proposed SKIntern
training and inference runs can be found in Table 5
and Table 6, respectively.
In our research, we ensured consistent hyperparameter settings across all baselines, including the
proposed SKIntern method, to maintain the fairness
of our comparative analysis. Detailed hyperparameters and their explanations are presented below.
For SKIntern, particularly in the fourth step, we
reduced the enhanced distillation parameters to 3
epochs and fixed the batch size at 8, as the concatenation of specialized knowledge results in longer inputs. We maintained a consistent batch size across
all baselines to eliminate any performance differences attributable to varying batch sizes, which depend on model size, with larger models using smaller
batch sizes. The learning rate, a key parameter affecting model performance, was set to 5e-5, 1e-4,
2e-4, and 3e-4 in a series of experiments, revealing
that larger models require smaller learning rates.
Consequently, we adjusted the learning rate according to model size.
| Hyperparameter | TinyLLaMA-1.1B | LLaMA2-7B | LLaMA3-8B | Qwen2-0.5B | Qwen2-1.5B | Qwen2-7B |
|---|---|---|---|---|---|---|
| Max Input Len | 2048 | 4096 | 4096 | 4096 | 4096 | 4096 |
| Max Output Len | 128 | 128 | 128 | 128 | 128 | 128 |
| Optimizer | AdamW | AdamW | AdamW | AdamW | AdamW | AdamW |
| Learning Rate | 2e-4 | 1e-4 | 5e-5 | 2e-4 | 1e-4 | 1e-4 |
| Precision | fp16 | fp16 | fp16 | fp16 | fp16 | fp16 |
| # Training epochs | 12 | 12 | 12 | 12 | 12 | 12 |
| # Warmup Steps | 10% of total training steps | | | | | |
| Batch Size | 32 | 16 | 8 | 32 | 16 | 8 |
| Gradient Accumulation | 1 | 2 | 4 | 1 | 2 | 4 |
| Rank of LoRA | 32 | 32 | 32 | 32 | 32 | 32 |

Table 5: Training hyperparameters.
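The sketch below shows one way the LLaMA2-7B column of Table 5 could be expressed as a LoRA fine-tuning configuration with `peft` and `transformers`; `lora_alpha`, `lora_dropout`, and `target_modules` are not reported in the paper and are assumptions here.

```python
# Sketch mapping the Table 5 settings (LLaMA2-7B column) to a LoRA config.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora_config = LoraConfig(
    r=32,                                   # rank of LoRA (Table 5)
    lora_alpha=64,                          # assumed
    lora_dropout=0.05,                      # assumed
    target_modules=["q_proj", "v_proj"],    # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="skintern-llama2-7b",
    learning_rate=1e-4,                     # Table 5, LLaMA2-7B
    num_train_epochs=12,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    warmup_ratio=0.1,                       # 10% of total training steps
    fp16=True,
)
```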
**D** **Instruction Details**
**D.1** **Prompt for Generating CoTs**
We use the prompt template shown below to call the
teacher model to generate the CoTs for the training
datasets.
| Hyperparameter | Student | Teacher (Rationale) | Teacher (Reasoning) |
|---|---|---|---|
| do_sample | False | True | False |
| temperature | 0.6 | 0.8 | 0.6 |
| top-p | 0.95 | 1.0 | 0.95 |
| top-k | 50 | 50 | 50 |
| max_new_tokens | 1024 | 2048 | 1024 |
| # return sequences | 1 | 2 | 1 |

Table 6: Generation configs of students and teachers.
**C** **Case Study**
We present two cases from Tables 8 and 9 to compare the Chains of Thought (CoTs) generated by
_SKIntern, the teacher large language model (LLM),_
and the standard CoTs distillation method (StdCoT). We use ✓ and ✗ to indicate the correctness
of the CoT.
Table 8 shows that the Std-CoT’s response is
confused and fails to comprehend the question accurately. Although it has a rough idea, its rationale
is entirely incorrect as it struggles to emulate the
rationale of the teacher LLM.
Table 9 presents the symbolic knowledge generated by the LLM for a training example in BBH,
encompassing learning summaries and supplementary information. This symbolic knowledge offers
detailed logical reasoning and positional insights,
which assist the LLM in understanding and solving
these problems.
| Methods | BBH-test (ID) | GSM8K (ID) | BB-sub (OOD) | AGIEval (OOD) | GSM8K-PLUS (OOD) | ARC-E (OOD) | ARC-C (OOD) | Avg | Rel. FLOPs |
|---|---|---|---|---|---|---|---|---|---|
| _Closed-source model and open-source models (Zero-shot-CoT)_ | | | | | | | | | |
| GPT-3.5-turbo (Teacher) | 43.2 | 72.6 | 44.0 | 50.5 | 55.9 | 91.8 | 84.1 | 63.2 | - |
| LLaMA-3-70B-Instruct | 62.6 | 89.2 | 51.0 | 66.3 | 72.9 | 97.6 | 93.2 | 76.1 | - |
| _LLaMA-3-8B based_ | | | | | | | | | |
| Zero-shot (Radford et al., 2019) | 18.2 | 2.8 | 27.4 | 29.7 | 2.2 | 50.8 | 50.0 | 25.9 | ×6.2 |
| Zero-shot-CoT (Kojima et al., 2022) | 26.5 | 6.6 | 23.5 | 32.2 | 3.7 | 68.1 | 55.5 | 30.9 | ×6.2 |
| Fine-tuning | 43.7 | 11.7 | 29.1 | 35.3 | 9.4 | 75.2 | 65.2 | 38.5 | ×5.4 |
| Knowledge-Augmented Fine-tuning | 30.4 | 9.9 | 14.4 | 13.0 | 8.5 | 40.8 | 33.9 | 21.6 | ×23.3 |
| Std-CoT (Magister et al., 2023) | 79.4 | 61.6 | 40.5 | 41.3 | 45.6 | 83.2 | 71.9 | 60.5 | ×6.2 |
| MT-CoT (Li et al., 2024b) | 62.8 | 13.1 | 36.3 | **43.9** | 11.4 | 83.6 | 72.3 | 46.3 | ×5.5 |
| Step-by-step (Hsieh et al., 2023) | 64.0 | 11.5 | 38.8 | 43.7 | 9.0 | 84.3 | 74.6 | 46.6 | ×5.4 |
| KARD (BM25) (Kang et al., 2023) | **81.4** | **64.3** | 43.1 | 43.4 | **48.6** | 85.6 | **76.1** | 63.2 | ×24.2 |
| CasCoD (Dai et al., 2024) | 32.1 | 59.1 | 18.1 | 23.6 | 46.1 | 34.6 | 27.7 | 34.5 | ×17.7 |
| **SKIntern (ours)** | 80.8 | 62.5 | 42.8 | 43.6 | 48.1 | **89.9** | 75.9 | **63.4** | ×6.2 |
| _Qwen2-0.5B based_ | | | | | | | | | |
| Std-CoT (Magister et al., 2023) | 65.8 | 26.7 | 29.6 | 25.6 | 17.1 | 43.6 | 32.0 | 34.3 | ×0.4 |
| MT-CoT (Li et al., 2024b) | 47.2 | 5.3 | 30.5 | **27.7** | 4.4 | 46.0 | 35.1 | 28.0 | ×0.4 |
| Step-by-step (Hsieh et al., 2023) | 44.2 | 5.2 | 28.9 | 26.2 | 3.1 | 41.8 | 36.2 | 26.5 | ×0.4 |
| KARD (BM25) (Kang et al., 2023) | **66.3** | **30.9** | 31.7 | 23.9 | 18.2 | **48.9** | **37.2** | **36.7** | ×1.7 |
| CasCoD (Dai et al., 2024) | 37.6 | 27.7 | 20.0 | 15.6 | 17.6 | 21.5 | 14.8 | 22.1 | ×1.2 |
| **SKIntern (ours)** | 65.9 | **30.9** | **30.8** | 27.0 | **18.5** | 48.5 | 35.6 | **36.7** | ×0.4 |
| _Qwen2-1.5B based_ | | | | | | | | | |
| Std-CoT (Magister et al., 2023) | 68.2 | 52.7 | 35.7 | 34.0 | 37.3 | 69.3 | 56.4 | 50.5 | ×1.3 |
| MT-CoT (Li et al., 2024b) | 58.0 | 6.7 | 36.4 | 34.2 | 6.1 | 72.7 | 57.5 | 38.8 | ×1.1 |
| Step-by-step (Hsieh et al., 2023) | 48.4 | 5.8 | 32.8 | 34.4 | 6.1 | 72.1 | 57.6 | 36.7 | ×1.1 |
| KARD (BM25) (Kang et al., 2023) | **72.2** | **55.4** | **37.4** | 31.2 | 39.4 | 74.0 | 62.2 | 53.1 | ×5.2 |
| CasCoD (Dai et al., 2024) | 31.7 | 53.4 | 25.4 | 24.7 | 38.8 | 57.1 | 47.8 | 39.8 | ×3.8 |
| **SKIntern (ours)** | 70.1 | 54.8 | 36.5 | **36.3** | **41.8** | **76.5** | **62.7** | **54.1** | ×1.3 |
| _Qwen2-7B based_ | | | | | | | | | |
| Std-CoT (Magister et al., 2023) | **80.7** | 71.5 | 43.4 | **49.9** | 60.0 | 90.5 | 80.3 | 68.0 | ×6.0 |
| MT-CoT (Li et al., 2024b) | 70.0 | 15.2 | 42.6 | 49.4 | 12.1 | 90.9 | 80.2 | 51.5 | ×5.3 |
| Step-by-step (Hsieh et al., 2023) | 68.8 | 15.2 | 41.2 | 49.1 | 10.9 | 72.1 | 71.8 | 47.0 | ×5.2 |
| KARD (BM25) (Kang et al., 2023) | 80.2 | 75.3 | 43.2 | 49.6 | 60.6 | 92.1 | **83.5** | 69.2 | ×23.6 |
| CasCoD (Dai et al., 2024) | 35.7 | 72.3 | 23.8 | 37.4 | 60.6 | 70.1 | 63.1 | 51.9 | ×17.4 |
| **SKIntern (ours)** | 78.8 | **76.1** | **45.2** | 49.4 | **61.5** | **93.0** | **83.5** | **69.6** | ×6.0 |

Table 7: Performance (%) of LLaMA3-8B (Touvron et al., 2023) and Qwen2-0.5B, 1.5B and 7B (Yang et al., 2024) with different methods across seven selected datasets. Bold indicates the best in each setting. Relative FLOPs cost is calculated relative to TinyLLaMA with Zero-shot. We calculate the FLOPs required on BBH-test for each method.
Generate CoTs
You are an expert assistant teacher. The following are tasks about {Task_Name}. {Task Description}. Explain your reasoning first and your response should conclude with the format “Therefore,
the answer is".
Question: {QUESTION}
Answer: Let’s think step by step.
**D.2** **Prompt for Specialized Knowledge**
**Collection**
**Generate Learning Summary** only prompts LLMs to analyze the SLM’s errors and generate the specialized knowledge of learning summary.
Question: Kylar went to the store to buy glasses for his new apartment. One glass costs $5, but every second glass costs
only 60% of the price. Kylar wants to buy 16 glasses. How much does he need to pay for them?
**Symbolic Knowledge** **Output**
**Teacher** - The discount price of one glass is 60/100 * 5 =
$«60/100*5=3»3. If every second glass is cheaper,
that means Kylar is going to buy 16 / 2 = «16/2=8»8
cheaper glasses. So for the cheaper glasses, Kylar
is going to pay 8 * 3 = $«8*3=24»24. And for
the regular-priced glasses, Kylar will pay 8 * 5 =
$«8*5=40»40. So in total Kylar needs to pay 24 +
40 = $«24+40=64»64 for the glasses he wants to buy.
Therefore, the answer is 64. ✓
**Std-CoT** - 3.0 times a year means Pauline goes to the museum
3.0 * 5.0 = 15.0 times in 5.0 years. The cost of one
visit is $2.00, but in the last 5.0 years, it has increased
by 1.5 times, so the new cost is $2.00 * 1.5 = $3.00.
So, Pauline spent 15.0 * 3.0 = $45.00 on all visits to
the museum in dollars. Therefore, the answer is 45.
**KARD** 1. Conversely, deadweight loss can come from consumers if they buy a product even if it costs more
than it benefits them. To describe this, if the same
nail market had the government giving a $0.03 subsidy to every nail produced, the subsidy would push
the market price of each nail down to $0.07(...) 2.
Several glasses are utilized in transparent armor, such
as normal plate glass (soda-lime-silica), borosilicate
glasses, and fused silica. Plate glass has been the
most common glass used due to its low cost(...) 3.
To be eligible for the program, applicants must be
in financial need, have had a recent eye exam, and
have no other resources available to them to pay for
glasses, including federal or state programs or assistance from local charitable organizations(...)
The cost of one glass is $5. Kylar wants to buy 16
glasses, so he needs to pay for 16 glasses + 1 glass
= 17 glasses. Since every second glass costs 60% of
the price, the cost of every second glass is 60% of $5
= $3. So, the total cost of the 17 glasses is $5 x 17 +
$3 x (17/2) = $85 + $10. Therefore, Kylar needs to
pay $85 + $10 = $95 for the glasses. Therefore, the
answer is 95. ✗
**_SKIntern_** - 16 glasses is an even number, so every second glass
will cost 60% of the price. Kylar wants to buy 16
glasses, and every second glass costs 60% of the
price, so he needs to buy 16/2 = 8 glasses at the full
price of $5/glass. He also needs to buy 8 glasses at
the reduced price of 60% of $5/glass, which is 60/100
- $5 = $3/glass. So in total, Kylar needs to pay 8 * $5
+ 8 * $3 = $40 + $24 = $64 for the glasses. Therefore,
the answer is 64. ✓
Table 8: An inference case from GSM8K.
Question: If you follow these instructions, do you return to the starting point?
Take 7 steps. Take 9 steps. Turn around. Turn right.
Options:
- Yes
- No
**Symbolic Knowledge**
**Learning Summary: When solving logical reasoning problems involving directions and movements,**
it’s essential to follow the steps methodically and visualize the movements if necessary. 1. Question
Comprehension: Begin by understanding the sequence of movements and turns. Identify the starting point
and note the instructions given, such as steps taken and direction changes. 2. Thought Steps: - Start by
taking note of the initial position and direction. - Follow the movement instructions step by step: - Take 7
steps in the initial direction. - Take 9 steps in the same direction. - Turn around (180-degree turn) which
reverses your direction. - Turn right, which changes your direction perpendicularly. - After executing
these steps, assess whether you return to the original position or direction. 3. Visualization: Drawing a
simple diagram or using a grid can help track the positions and directions. This visualization helps verify
whether the initial and final positions match. 4. Summative Experience: For similar questions, always
track each movement and turn carefully. Be aware of the effects of each instruction, particularly turns,
which change direction.
**Supplementary Knowledge: 1. Understanding Directions: - Familiarize yourself with basic directions**
(e.g., north, south, east, west) and understand relative turns (left, right, and turn around). - A 180-degree turn changes direction to the opposite, while a 90-degree right or left turn changes the direction
perpendicularly. 2. Visualization Techniques: - Use diagrams, sketches, or grids to map directions and
movements to see the path clearly. - Visual aids can help prevent confusion, especially when multiple
turns are involved. 3. Logical Sequencing: - Carefully follow each step in the sequence as instructed.
Misinterpreting a step or turn can lead to incorrect conclusions. - Practice breaking down instructions
into smaller parts to manage them more effectively. 4. Definitions: - Turn Around: A 180-degree turn
where you face the opposite direction from where you started. - Right Turn: A 90-degree turn to the right,
changing the direction perpendicular to the current path. By practicing these steps and understanding the
underlying concepts, students can improve their ability to solve similar direction-based logical reasoning
problems.
Table 9: A symbolic knowledge generation case from BBH-test.
Generate Learning Summary
As an excellent educational teacher, your goal is to help students enhance their question-solving
abilities.
Based on an understanding and explanation of the question, along with relevant background
knowledge, fundamental concepts, and empirical conclusions, please generate a learning summary
in a numbered list format that will help students complete the same task in the future.
### Requirements:
1. Learning summary should outline the thought processes and precautions for addressing student
mistakes, including, but not limited to, question comprehension, thought steps and mathematical
calculations. It should also provide a summative experience to help students solve similar questions
in the future.
2. Ensure that the content is understandable and usable by students, while also being concise and
effective.
3. The obtained learning summary should be general and generalized, not aimed at specific
questions.
4. Keep these requirements in mind while generating the learning summary and supplementary
knowledge.
### Return Format:
Return in the following format:
Learning Summary: [Learning Summary]
Question: {QUESTION}
Answer: {ANSWER}
Please follow the requirements and provide the learning summary.
**Generate Learning Summary and Supplementary Knowledge** prompts LLMs to analyze the SLM’s errors and generate the specialized knowledge of learning summary and Supplementary Knowledge, providing additional relevant background knowledge to further assist SLMs in solving similar complex reasoning tasks in the future.
Generate Learning Summary and Supplementary Knowledge
As an excellent educational teacher, your goal is to help students enhance their question-solving
abilities and to aid students in completing the same task in the future.
You should generate targeted, detailed thought processes and relevant background knowledge for
solving similar questions in the future.
Your role involves creating learning summaries and supplementary knowledge, specifically identifying the steps needed to solve the question and providing additional general knowledge in the
supplementary knowledge.
### Requirements:
1. Learning summary should outline the thought processes including, but is not limited to, question
comprehension, thought steps and mathematical calculations. It should also provide a summative
experience to help students solve similar questions in the future.
2. Supplementary knowledge should include a list of essential background information that
students need to solve the question. This should encompass, but is not limited to, mathematical
formulas, definitions, relevant world knowledge, and specific techniques.
3. Ensure that the content is understandable and usable by students, while also being concise and
effective.
4. The obtained learning summary should be general and generalized, not aimed at specific
problems, and the supplementary knowledge should also be general knowledge of the problem
without involving specific analysis.
5. Keep these requirements in mind while generating the learning summary and supplementary
knowledge.
### Return Format:
Return in the following format:
Learning Summary: [Learning Summary]
Supplementary Knowledge: [Supplementary Knowledge]
Question: {QUESTION}
Answer: {ANSWER}
Please follow the requirements and provide the learning summary and supplementary knowledge.
| [
"Jun, Zhao",
"Huanxuan, Liao",
"Xiang, Li",
"Yupu, Hao",
"Yuanzhe, Zhang",
"Shizhu, He",
"Kang, Liu"
] | 2024-09-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.13183 | null | null |
rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking | We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStar-Math achieves this by exercising "deep thinking" through Monte Carlo Tree Search (MCTS), where a math policy SLM performs test-time search guided by an SLM-based process reward model. rStar-Math introduces three innovations to tackle the challenges in training the two SLMs: (1) a novel code-augmented CoT data sythesis method, which performs extensive MCTS rollouts to generate step-by-step verified reasoning trajectories used to train the policy SLM; (2) a novel process reward model training method that avoids na\"ive step-level score annotation, yielding a more effective process preference model (PPM); (3) a self-evolution recipe in which the policy SLM and PPM are built from scratch and iteratively evolved to improve reasoning capabilities. Through 4 rounds of self-evolution with millions of synthesized solutions for 747k math problems, rStar-Math boosts SLMs' math reasoning to state-of-the-art levels. On the MATH benchmark, it improves Qwen2.5-Math-7B from 58.8% to 90.0% and Phi3-mini-3.8B from 41.4% to 86.4%, surpassing o1-preview by +4.5% and +0.9%. On the USA Math Olympiad (AIME), rStar-Math solves an average of 53.3% (8/15) of problems, ranking among the top 20% the brightest high school math students. Code and data will be available at https://github.com/microsoft/rStar. | null | ## rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
**Xinyu Guan[∗]** **Li Lyna Zhang[∗⋄]** **Yifei Liu**
**Ning Shang** **Youran Sun** **Yi Zhu** **Fan Yang** **Mao Yang**
Microsoft Research Asia
### Abstract
We present rStar-Math to demonstrate that small language models (SLMs) can rival
or even surpass the math reasoning capability of OpenAI o1, without distillation
from superior models. rStar-Math achieves this by exercising “deep thinking”
through Monte Carlo Tree Search (MCTS), where a math policy SLM performs
test-time search guided by an SLM-based process reward model. rStar-Math
introduces three innovations to tackle the challenges in training the two SLMs:
**(1) a novel code-augmented CoT data sythesis method, which performs extensive**
MCTS rollouts to generate step-by-step verified reasoning trajectories used to train
the policy SLM; (2) a novel process reward model training method that avoids naïve
step-level score annotation, yielding a more effective process preference model
_(PPM); (3) a self-evolution recipe in which the policy SLM and PPM are built_
from scratch and iteratively evolved to improve reasoning capabilities. Through
4 rounds of self-evolution with millions of synthesized solutions for 747k math
problems, rStar-Math boosts SLMs’ math reasoning to state-of-the-art levels. On
the MATH benchmark, it improves Qwen2.5-Math-7B from 58.8% to 90.0% and
Phi3-mini-3.8B from 41.4% to 86.4%, surpassing o1-preview by +4.5% and +0.9%.
On the USA Math Olympiad (AIME), rStar-Math solves an average of 53.3% (8/15)
of problems, ranking among the top 20% the brightest high school math students.
[Code and data will be available at https://github.com/microsoft/rStar.](https://github.com/microsoft/rStar)
| Task (pass@1 Acc) | rStar-Math (Qwen-7B) | rStar-Math (Qwen-1.5B) | rStar-Math (Phi3-mini) | OpenAI o1-preview | OpenAI o1-mini | QWQ 32B-preview | GPT-4o | DeepSeek-V3 |
|---|---|---|---|---|---|---|---|---|
| MATH | 90.0 | 88.6 | 86.4 | 85.5 | 90.0 | 90.6 | 76.6 | 90.2 |
| AIME 2024 | 53.3 | 46.7 | 43.3 | 44.6 | 56.7 | 50.0 | 9.3 | 39.2 |
| Olympiad Bench | 65.6 | 64.6 | 60.3 | - | 65.3 | 61.2 | 43.3 | 55.4 |
| College Math | 60.5 | 59.3 | 59.1 | - | 57.8 | 55.8 | 48.5 | 58.9 |
| Omni-Math | 50.5 | 48.5 | 46.0 | 52.5 | 60.5 | 49.6 | 30.5 | 35.9 |

Table 1: rStar-Math enables frontier math reasoning in SLMs via deep thinking over 64 trajectories.
### 1 Introduction
Recent studies have demonstrated that large language models (LLMs) are capable of tackling
mathematical problems [Team, 2024a, Yang et al., 2024, OpenAI, 2024, Liu et al., 2024]. However,
the conventional approach of having LLMs generate complete solutions in a single inference – akin to
System 1 thinking [Daniel, 2011] – often yields fast but error-prone results [Valmeekam et al., 2023,
OpenAI, 2023]. In response, test-time compute scaling [Snell et al., 2024, Qi et al., 2024] suggests a
paradigm shift toward a System 2-style thinking, which emulates human reasoning through a slower
and deeper thought process. In this paradigm, an LLM serves as a policy model to generate multiple
math reasoning steps, which are then evaluated by another LLM acting as a reward model [OpenAI,
2024]. The steps and solutions deemed more likely to be correct are selected. The process repeats
iteratively and ultimately derives the final answer.
_∗Equal contribution._
_⋄Project leader; correspondence to [email protected]_
_§Xinyu Guan and Youran Sun did this work during the internship at MSRA. Xinyu Guan_
([email protected]) is with Peking University, Youran Sun is with Tsinghua University.
Figure 1: The overview of rStar-Math: (a) a step-by-step verified reasoning trajectory generated by MCTS-driven deep thinking; (b) construction of per-step preference pairs based on Q-values; (c) four rounds of self-evolution.
In the test-time compute paradigm, the key is to train a powerful policy model that generates promising
solution steps and a reliable reward model that accurately evaluates them, both of which depend
on high-quality training data. Unfortunately, it is well-known that off-the-shelf high-quality math
reasoning data is scarce, and synthesizing high-quality math data faces fundamental challenges. For
the policy model, it is challenging to distinguish erroneous reasoning steps from the correct ones,
complicating the elimination of low-quality data. It is worth noting that in math reasoning, a correct
final answer does not ensure the correctness of the entire reasoning trace [Lanham et al., 2023].
Incorrect intermediate steps significantly decrease data quality. As for the reward model, process
reward modeling (PRM) shows a great potential by providing fine-grained feedback on intermediate
steps [Lightman et al., 2023]. However, the training data is even scarcer in this regard: accurate
step-by-step feedback requires intense human labeling efforts and is impractical to scale, while those
automatic annotation attempts show limited gains due to noisy reward scores [Luo et al., 2024, Wang
et al., 2024c, Chen et al., 2024]. Due to the above challenges, existing distill-based data synthesis
approaches to training policy models, e.g., scaling up GPT4-distilled CoT data [Tang et al., 2024,
Huang et al., 2024], have shown diminishing returns and cannot exceed the capability of their teacher
model; meanwhile, as of today, training reliable PRMs for math reasoning remains an open question.
In this work, we introduce rStar-Math, a self-evolvable System 2-style reasoning approach that
achieves the state-of-the-art math reasoning, rivaling and sometimes even surpassing OpenAI o1 on
challenging math competition benchmarks with a model size as small as 7 billion. Unlike solutions
relying on superior LLMs for data synthesis, rStar-Math leverages smaller language models (SLMs)
with Monte Carlo Tree Search (MCTS) to establish a self-evolutionary process, iteratively generating
higher-quality training data. To achieve self-evolution, rStar-Math introduces three key innovations.
First, a novel code-augmented CoT data synthesis method, which performs extensive MCTS rollouts to
generate step-by-step verified reasoning trajectories with self-annotated MCTS Q-values. Specifically,
math problem-solving is decomposed into multi-step generation within MCTS. At each step, the
SLM serving as the policy model samples candidate nodes, each generating a one-step CoT and the
corresponding Python code. To verify the generation quality, only nodes with successful Python
code execution are retained, thus mitigating errors in intermediate steps. Moreover, extensive MCTS
rollouts automatically assign a Q-value to each intermediate step based on its contribution: steps
contributing to more trajectories that lead to the correct answer are given higher Q-values and
considered higher quality. This ensures that the reasoning trajectories generated by SLMs consist of
correct, high-quality intermediate steps.
Second, a novel method that trains an SLM acting as a process preference model, i.e., a PPM to
implement the desired PRM, that reliably predicts a reward label for each math reasoning step. The
PPM leverages the fact that, although Q-values are still not precise enough to score each reasoning
step despite using extensive MCTS rollouts, the Q-values can reliably distinguish positive (correct)
steps from negative (irrelevant/incorrect) ones. Thus the training method constructs preference pairs
for each step based on Q-values and uses a pairwise ranking loss [Ouyang et al., 2022] to optimize
PPM’s score prediction for each reasoning step, achieving reliable labeling. This approach avoids
conventional methods that directly use Q-values as reward labels [Luo et al., 2024, Chen et al., 2024],
which are inherently noisy and imprecise in stepwise reward assignment.
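A minimal sketch of this pairwise objective is shown below: the PPM scores the positive (higher Q-value) and negative (lower Q-value) step of each preference pair, and a Bradley-Terry-style ranking loss [Ouyang et al., 2022] pushes the positive score above the negative one. How pairs are sampled and batched is simplified here and is not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def ppm_pairwise_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss over PPM rewards for paired positive/negative steps, shape (batch,)."""
    return -F.logsigmoid(pos_scores - neg_scores).mean()

# Example: rewards predicted by the PPM for 3 preference pairs.
pos = torch.tensor([0.8, 0.3, 1.2])
neg = torch.tensor([-0.5, 0.1, 0.4])
loss = ppm_pairwise_loss(pos, neg)  # smaller when positives outscore negatives
```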
Finally, a four-round self-evolution recipe that progressively builds both a frontier policy model
and PPM from scratch. We begin by curating a dataset of 747k math word problems from publicly
available sources. In each round, we use the latest policy model and PPM to perform MCTS,
generating increasingly high-quality training data using the above two methods to train a stronger
policy model and PPM for next round. Each round achieves progressive refinement: (1) a stronger
policy SLM, (2) a more reliable PPM, (3) generating better reasoning trajectories via PPM-augmented
MCTS, and (4) improving training data coverage to tackle more challenging and even competition-level math problems.
Extensive experiments across four SLMs (1.5B-7B) and seven math reasoning tasks demonstrate
the effectiveness of rStar-Math. Remarkably, rStar-Math improves all four SLMs, matching or even
surpassing OpenAI o1 on challenging math benchmarks. On MATH benchmark, with 8 search
trajectories, rStar-Math boosts Qwen2.5-Math-7B from 58.8% to 89.4% and Qwen2.5-Math-1.5B
from 51.2% to 87.8%. With 64 trajectories, the scores rise to 90% and 88.4%, outperforming
o1-preview by 4.5% and 2.6% and matching o1-mini’s 90%. On the Olympiad-level AIME 2024,
rStar-Math solves on average 53.3% (8/15) of the problems, exceeding o1-preview by 8.7% and all
other open-sourced LLMs. We further conduct comprehensive experiments to verify the superiority
of step-by-step verified reasoning trajectories over state-of-the-art data synthesis baselines, as well as
the PPM’s effectiveness compared to outcome reward models and Q value-based PRMs. Finally, we
present key findings from rStar-Math deep thinking, including the intrinsic self-reflection capability
and PPM’s preference for theorem-applications intermediate steps.
### 2 Related Works
**Math Data Synthesis. Advancements in LLM math reasoning have largely relied on curating**
high-quality CoT data, with most leading approaches being GPT-distilled, using frontier models like
GPT-4 for synthesis [Wang et al., 2024b, Gou et al., 2023, Luo et al., 2023]. Notable works include
NuminaMath [Jia LI and Polu, 2024a] and MetaMath [Yu et al., 2023b]. While effective, this limits
reasoning to the capabilities of the teacher LLM. Hard problems that the teacher LLM cannot solve
are excluded in the training set. Even solvable problems may contain error-prone intermediate steps,
which are hard to detect. Although rejection sampling methods [Yuan et al., 2023, Brown et al., 2024]
can improve data quality, they do not guarantee correct intermediate steps. As a result, scaling up CoT
data has diminishing returns, with gains nearing saturation—e.g., OpenMathInstruct-2 [Toshniwal
et al., 2024] only sees a 3.9% boost on MATH despite an 8× increase in dataset size.
**Scaling Test-time Compute has introduced new scaling laws, allowing LLMs to improve perfor-**
mance by generating multiple samples and using reward models for best-solution selection
[Snell et al., 2024, Wu et al., 2024, Brown et al., 2024]. Various test-time search methods have been
proposed [Kang et al., 2024, Wang et al., 2024a], including random sampling [Wang et al., 2023]
and tree-search methods [Yao et al., 2024, Hao et al., 2023, Zhang et al., 2024b, Qi et al., 2024] like
MCTS. However, open-source methods for scaling test-time computation have shown limited gains in
math reasoning, often due to policy LLM or reward model limitations. rStar-Math addresses this by
iteratively evolving the policy LLM and reward model, achieving System 2 mathematical reasoning
performance comparable to OpenAI o1 [OpenAI, 2024].
**Reward Models are crucial for effective System 2 reasoning but are challenging to obtain. Recent**
works include LLM-as-a-Judge for verification [Zheng et al., 2023, Qi et al., 2024] and specialized
reward models like Outcome Reward Model [Yang et al., 2024, Yu et al., 2023a] and Process Reward
Model (PRM) [Lightman et al., 2024]. While PRMs offer promising dense, step-level reward signals
for complex reasoning [Luo et al., 2024, Wang et al., 2024c], collecting step-level annotations
remains an obstacle. While Kang et al. [2024], Wang et al. [2024a] rely on costly human-annotated
datasets like PRM800k [Lightman et al., 2024], recent approaches [Wang et al., 2024c, Luo et al.,
2024] explore automated annotation via Monte Carlo Sampling or MCTS. However, they struggle to
generate precise reward scores, which limits performance gains. rStar-Math introduces a novel process
preference reward (PPM) that eliminates the need for accurate step-level reward score annotation.
### 3 Methodology
**3.1** **Design Choices**
**MCTS for Effective System 2 Reasoning. We aim to train a math policy SLM and a process reward**
model (PRM), and integrating both within Monte Carlo Tree Search (MCTS) for System 2 deep
thinking. MCTS is chosen for two key reasons. First, it breaks down complex math problems into
simpler single-step generation tasks, reducing the difficulty for the policy SLM compared to other
System 2 methods like Best-of-N [Brown et al., 2024] or self-consistency [Wang et al., 2023], which
require generating full solutions in one inference. Second, the step-by-step generation in MCTS
naturally yields step-level training data for both models. Standard MCTS rollouts automatically assign a Q-value to each step based on its contribution to the final correct answer, obviating the need for
human-generated step-level annotations for process reward model training.
Ideally, advanced LLMs such as GPT-4 could be integrated within MCTS to generate training
data. However, this approach faces two key challenges. First, even these powerful models struggle
to consistently solve difficult problems, such as Olympiad-level mathematics. Consequently, the
resulting training data would primarily consist of simpler solvable problems, limiting its diversity and
quality. Second, annotating per-step Q-values demands extensive MCTS rollouts; insufficient tree
exploration can lead to spurious Q-value assignments, such as overestimating suboptimal steps. Given
that each rollout involves multiple single-step generations and these models are computationally
expensive, increasing rollouts significantly raises inference costs.
**Overview. To this end, we explore using two 7B SLMs (a policy SLM and a PRM) to generate higher-**
quality training data, with their smaller size allowing for extensive MCTS rollouts on accessible
hardware (e.g., 4 × 40GB A100 GPUs). However, self-generating data presents greater challenges for SLMs, due to their weaker capabilities. SLMs frequently fail to generate correct solutions, and even
when the final answer is correct, the intermediate steps are often flawed or of poor quality. Moreover,
SLMs solve fewer challenging problems compared to advanced models like GPT-4.
This section introduces our methodology, as illustrated in Fig. 1. To mitigate errors and low-quality
intermediate steps, we introduce a code-augmented CoT synthetic method, which performs extensive
MCTS rollouts to generate step-by-step verified reasoning trajectories, annotated with Q-values. To
further improve SLM performance on challenging problems, we introduce a four-round self-evolution
recipe. In each round, both the policy SLM and the reward model are updated to stronger versions,
progressively tackling more difficult problems and generating higher-quality training data. Finally, we
present a novel process reward model training approach that eliminates the need for precise per-step
reward annotations, yielding the more effective process preference model (PPM).
**3.2** **Step-by-Step Verified Reasoning Trajectory**
We start by introducing our method for generating step-by-step verified reasoning trajectories with
per-step Q-value annotations. Given a problem x and a policy model M, we run the standard MCTS
to incrementally construct a search tree for step-by-step solution exploration. As shown in Fig. 1(a),
the root node represents question x, while child nodes correspond to intermediate steps s generated
by M . A root-to-leaf path ending at terminal node sd forms a trajectory t = x ⊕ _s1 ⊕_ _s2 ⊕_ _... ⊕_ _sd,_
with each step si assigned a Q-value Q(si). From the search tree T, we extract solution trajectories
T = {t[1], t[2], ..., t[n]}(n ≥ 1). Our goal is to select high-quality trajectories from T to construct the
training set. For this purpose, we introduce code-augmented CoT synthesis method to filter out
low-quality generations and perform extensive rollouts to improve the reliability of Q-value accuracy.
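One way to picture the extracted training data is sketched below as simple Python dataclasses holding the per-step CoT, code, and Q-value; the field names are illustrative assumptions rather than the authors' schema.

```python
# Illustrative data layout for the extracted trajectories: each trajectory is the
# question followed by its steps, and every step carries the Q-value annotated by MCTS.
from dataclasses import dataclass, field

@dataclass
class Step:
    nl_cot: str          # one-step natural-language CoT
    python_code: str     # the corresponding Python code for this step
    q_value: float       # Q(s_i) from MCTS back-propagation

@dataclass
class Trajectory:
    question: str
    steps: list[Step] = field(default_factory=list)
    final_answer_correct: bool = False

# Training-set construction then selects high-quality trajectories (e.g., correct
# final answers with high per-step Q-values) from the search tree.
```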
**Code-augmented CoT Generation. Prior MCTS approaches primarily generate natural language**
(NL) CoTs [Qi et al., 2024, Zhang et al., 2024a]. However, LLMs often suffer from hallucination,
producing incorrect or irrelevant steps yet still arrive at the correct answer by chance [Lanham et al.,
2023]. These flawed steps are challenging to detect and eliminate. To address this, we propose a
novel code execution augmented CoT. As shown in Fig. 2, the policy model generates a one-step NL
CoT alongside its corresponding Python code, where the NL CoT is embedded as a Python comment.
Only generations with successfully executed Python code are retained as valid candidates.
Figure 2: An example of code-augmented CoT for the question "Bill walks $\frac{1}{2}$ mile south, then $\frac{3}{4}$ mile east, and finally $\frac{1}{2}$ mile south. How many miles is he, in a direct line, from his starting point? Express your answer as a decimal to the nearest hundredth." Each step's NL CoT is embedded as a Python comment above its corresponding code (e.g., computing `total_south = 1/2 + 1/2`, `total_east = 3/4`, and the direct distance via the Pythagorean theorem), and the accumulated code is executed after every step, yielding the final answer $\boxed{1.25}$.
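To make the execution-based filtering concrete, below is a minimal Python sketch of how candidates could be validated by running their accumulated code. The helper names and the bare use of `exec` are illustrative assumptions, not the authors' implementation; a real system would sandbox execution and enforce a timeout.

```python
def executes_successfully(code: str) -> bool:
    """Return True iff the accumulated Python code runs without raising an exception.
    In practice this should be executed in a sandboxed subprocess with a timeout."""
    try:
        exec(code, {})  # fresh, isolated namespace per candidate
        return True
    except Exception:
        return False


def filter_valid_candidates(previous_code: str, candidate_steps: list[str]) -> list[str]:
    """Keep only step candidates whose code, concatenated with all previous steps, executes."""
    valid = []
    for step_code in candidate_steps:
        if executes_successfully(previous_code + "\n" + step_code):
            valid.append(step_code)
    return valid
```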
-----
Specifically, starting from the initial root node x, we perform multiple MCTS iterations through selection, expansion, rollout, and back-propagation. At step i, we collect the latest reasoning trajectory x ⊕ s_1 ⊕ s_2 ⊕ ... ⊕ s_{i−1} as the current state. Based on this state, we prompt (see Appendix A.3) the policy model to generate n candidates s_{i,0}, ..., s_{i,n−1} for step i. Python code execution is then employed to filter valid nodes. As shown in Fig. 2, each generation s_{i,j} is concatenated with the code from all previous steps, forming s_1 ⊕ s_2 ⊕ ... ⊕ s_{i−1} ⊕ s_{i,j}. Candidates that execute successfully are retained as valid nodes and scored by the PPM, which assigns a Q-value q(s_i). Then, we use the
well-known Upper Confidence bounds for Trees (UCT) [Kocsis and Szepesvári, 2006] to select the
best node among the n candidates. This selection process is mathematically represented as:
$$\mathrm{UCT}(s) = Q(s) + c\sqrt{\frac{\ln N_{\mathrm{parent}}(s)}{N(s)}}, \quad \text{where } Q(s) = \frac{q(s)}{N(s)} \quad (1)$$

where N(s) denotes the number of visits to node s, and N_parent(s) is the visit count of s's parent node. The predicted reward q(s) is provided by the PPM and will be updated through back-propagation. c is a constant that balances exploitation and exploration.
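For illustration, a minimal sketch of the UCT selection in Eq. 1 follows. The node representation (a dict holding the accumulated reward `q_sum` and visit count `visits`) and the treatment of unvisited nodes are assumptions, not the paper's implementation.

```python
import math

def uct_score(q_sum: float, visits: int, parent_visits: int, c: float = 2.0) -> float:
    """Eq. 1: Q(s) = q(s) / N(s) plus an exploration bonus c * sqrt(ln N_parent(s) / N(s))."""
    if visits == 0:
        return float("inf")  # explore unvisited candidates first (a common convention)
    return q_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children: list[dict], parent_visits: int, c: float = 2.0) -> dict:
    """Pick the candidate step with the highest UCT score."""
    return max(children, key=lambda ch: uct_score(ch["q_sum"], ch["visits"], parent_visits, c))
```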
**Extensive Rollouts for Q-value Annotation. Accurate Q-value Q(s) annotation in Eq. 1 is crucial**
for guiding MCTS node selection towards correct problem-solving paths and identifying high-quality
steps within trajectories. To improve Q-value reliability, we draw inspiration from Go players, who
retrospectively evaluate the reward of each move based on game outcomes. Although initial estimates
may be imprecise, repeated gameplay refines these evaluations over time. Similarly, in each rollout,
we update the Q-value of each step based on its contribution to achieving the correct final answer.
After extensive MCTS rollouts, steps consistently leading to correct answers achieve higher Q-values,
occasional successes yield moderate Q-values, and consistently incorrect steps receive low Q-values.
Specifically, we introduce two self-annotation methods to obtain these step-level Q-values. Fig. 1(c)
shows the detailed setting in the four rounds of self-evolution.
_Terminal-guided annotation._ During the first two rounds, when the PPM is unavailable or insufficiently accurate, we use terminal-guided annotation. Formally, let q(s_i)^k denote the q value of step s_i after back-propagation in the k-th rollout. Following AlphaGo [Silver et al., 2017] and rStar [Qi et al., 2024], we score each intermediate node based on its contribution to the final correct answer:

$$q(s_i)^k = q(s_i)^{k-1} + q(s_d)^k \quad (2)$$

where the initial q value q(s_i)^0 = 0 in the first rollout. If a step frequently leads to a correct answer, its q value increases; otherwise, it decreases. Terminal nodes are scored as q(s_d) = 1 for correct answers and q(s_d) = −1 otherwise, as shown in Fig. 1.
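A minimal sketch of the terminal-guided update in Eq. 2 is given below; the node structure (dicts accumulating `q_sum` and `visits`, matching the UCT sketch above) is an assumption for illustration.

```python
def back_propagate(path: list[dict], is_correct: bool) -> None:
    """Eq. 2: after a rollout, add the terminal score q(s_d) (+1 if the final answer is
    correct, -1 otherwise) to the running q value of every step on the root-to-leaf path."""
    terminal_q = 1.0 if is_correct else -1.0
    for node in path:
        node["q_sum"] = node.get("q_sum", 0.0) + terminal_q
        node["visits"] = node.get("visits", 0) + 1
```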
_PRM-augmented annotation._ Starting from the third round, we use the PPM to score each step for more effective generation. Compared to terminal-guided annotation, which requires multiple rollouts for
a meaningful q value, PPM directly predicts a non-zero initial q value. PPM-augmented MCTS
also helps the policy model to generate higher-quality steps, guiding solutions towards correct paths.
Formally, for step s_i, the PPM predicts an initial value q(s_i)^0 based on the partial trajectory:

$$q(s_i)^0 = \mathrm{PPM}(x \oplus s_1 \oplus s_2 \oplus \dots \oplus s_{i-1} \oplus s_i) \quad (3)$$

This q value is then updated based on the terminal node's q(s_d) value through MCTS back-propagation in Eq. 2. For the terminal node s_d, we do not use the PRM for scoring during training data generation. Instead, we assign a more accurate score based on ground-truth labels, as in terminal-guided annotation.
**3.3** **Process Preference Model**
Process reward models, which provide granular step-level reward signals, are highly desirable for solving challenging math problems. However, obtaining high-quality step-level training data remains an open challenge. Existing methods rely on human annotations [Lightman et al., 2023] or MCTS-generated scores [Zhang et al., 2024a, Chen et al., 2024] to assign a score to each step. These
scores then serve as training targets, with methods such as MSE loss [Chen et al., 2024] or pointwise
loss [Wang et al., 2024c, Luo et al., 2024, Zhang et al., 2024a] used to minimize the difference
between predicted and labeled scores. As a result, the precision of these annotated step-level reward
scores directly determines the effectiveness of the resulting process reward model.
Unfortunately, precise per-step scoring remains an unsolved challenge. Although our extensive MCTS rollouts improve the reliability of Q-values, precisely evaluating fine-grained step quality remains a major obstacle. For instance, among a set of correct steps, it is difficult to rank them as best, second-best, or average and then assign precise scores. Similarly, among incorrect steps, differentiating
the worst from moderately poor steps poses analogous challenges. Even expert human annotation
struggles with consistency, particularly at scale, leading to inherent noise in training labels.
We introduce a novel training method that trains a process preference model (PPM) by constructing
step-level positive-negative preference pairs. As shown in Fig. 1(b), instead of using Q-values as direct reward labels, we use them to select steps from the MCTS tree for preference pair construction.
For each step, we select two candidates with the highest Q-values as positive steps and two with the
lowest as negative steps. Critically, the selected positive steps must lead to a correct final answer,
while negative steps must lead to incorrect answers. For intermediate steps (except the final answer
step), the positive and negative pairs share the same preceding steps. For the final answer step, where
identical reasoning trajectories rarely yield different final answers, we relax this restriction. We select
two correct trajectories with the highest average Q-values as positive examples and two incorrect
trajectories with the lowest average Q-values as negative examples. Following [Ouyang et al., 2022],
we define our loss function using the standard Bradley-Terry model with a pairwise ranking loss:
$$\mathcal{L}_{\mathrm{ppm}}(\theta) = -\frac{1}{2 \times 2}\, \mathbb{E}_{(x,\, y_i^{\mathrm{pos}},\, y_i^{\mathrm{neg}}) \in D}\Big[\log\big(\sigma\big(r_\theta(x, y_i^{\mathrm{pos}}) - r_\theta(x, y_i^{\mathrm{neg}})\big)\big)\Big] \quad (4)$$

$$\text{when } i \text{ is not the final answer step:}\quad y_i^{\mathrm{pos}} = s_1 \oplus \dots \oplus s_{i-1} \oplus s_i^{\mathrm{pos}};\quad y_i^{\mathrm{neg}} = s_1 \oplus \dots \oplus s_{i-1} \oplus s_i^{\mathrm{neg}} \quad (5)$$

Here, r_θ(x, y_i) denotes the output of the PPM, where x is the problem and y_i is the trajectory from the first step to the i-th step.
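As a concrete illustration, here is a hedged PyTorch-style sketch of the pairwise ranking loss in Eq. 4; the `reward_model` interface (returning one scalar reward per tokenized prefix) is an assumption, not the released training code.

```python
import torch
import torch.nn.functional as F

def ppm_pairwise_loss(reward_model, pos_batch: dict, neg_batch: dict) -> torch.Tensor:
    """Bradley-Terry pairwise ranking loss over step-level preference pairs (Eq. 4).
    pos_batch / neg_batch hold tokenized prefixes x + s_1 + ... + s_i for the positive
    and negative candidate at the same step (sharing the same preceding steps)."""
    r_pos = reward_model(**pos_batch)  # scalar reward per sequence, in [-1, 1]
    r_neg = reward_model(**neg_batch)
    return -F.logsigmoid(r_pos - r_neg).mean()
```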
**3.4** **Self-Evolved Deep Thinking**
**3.4.1** **Training with Step-by-Step Verified Reasoning Trajectory**
**Math Problems Collection. We collect a large dataset of 747k math word problems with final**
answer ground-truth labels, primarily from NuminaMath [Jia LI and Polu, 2024a] and MetaMath [Yu
et al., 2023b]. Notably, only competition-level problems (e.g., Olympiads and AIME/AMC) from
NuminaMath are included, as we observe that grade-school-level problems do not significantly
improve LLM complex math reasoning. To augment the limited competition-level problems, we
follow [Li et al., 2024] and use GPT-4 to synthesize new problems based on the seed problems in the 7.5k MATH train set and the 3.6k AMC-AIME training split. However, GPT-4 often generated unsolvable
problems or incorrect solutions for challenging seed problems. To filter these, we prompt GPT-4 to
generate 10 solutions per problem, retaining only those with at least 3 consistent solutions.
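The consistency filter can be sketched as follows; `extract_answer` is a hypothetical helper that parses a final answer from a solution string, and the majority-vote criterion is our reading of the "at least 3 consistent solutions" rule.

```python
from collections import Counter

def keep_synthetic_problem(solutions: list[str], extract_answer, min_consistent: int = 3) -> bool:
    """Retain a GPT-4-synthesized problem only if at least `min_consistent` of its
    (10) sampled solutions agree on the same final answer."""
    answers = [extract_answer(s) for s in solutions]
    answers = [a for a in answers if a is not None]
    if not answers:
        return False
    _, most_common_count = Counter(answers).most_common(1)[0]
    return most_common_count >= min_consistent
```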
**Reasoning Trajectories Collection. Instead of using the original solutions in the 747k math dataset,**
we conduct extensive MCTS rollouts (Sec. 3.2) to generate higher-quality step-by-step verified
reasoning trajectories. In each self-evolution round, we perform 16 rollouts per math problem, which
leads to 16 reasoning trajectories. Problems are then categorized by difficulty based on the correctness ratio of the generated trajectories: easy (all solutions are correct), medium (a mix of correct and incorrect solutions), and hard (all solutions are incorrect). For hard problems with no correct trajectories, an
additional MCTS with 16 rollouts is performed. After that, all step-by-step trajectories and their
annotated Q-values are collected and filtered to train the policy SLM and process preference model.
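A minimal sketch of the difficulty categorization described above, assuming each trajectory carries an `is_correct` flag (the field name is illustrative):

```python
def categorize_problem(trajectories: list[dict]) -> str:
    """Label a problem by the correctness ratio of its MCTS-generated trajectories."""
    num_correct = sum(t["is_correct"] for t in trajectories)
    if num_correct == len(trajectories):
        return "easy"    # all solutions correct
    if num_correct == 0:
        return "hard"    # no correct solution; triggers an additional round of rollouts
    return "medium"      # a mix of correct and incorrect solutions
```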
**Supervised Fine-tuning the Policy SLM. Through extensive experiments, we find that selecting**
high-quality reasoning trajectories is the key for fine-tuning a frontier math LLM. While methods
such as GPT-distillation and Best-of-N can include low-quality or erroneous intermediate steps, a
more effective approach ensures that every step in the trajectory is of high quality. To achieve this,
we use per-step Q-values to select optimal trajectories from MCTS rollouts. Specifically, for each
math problem, we select the top-2 trajectories with the highest average Q-values among those leading
to correct answers as SFT training data.
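The trajectory selection can be sketched as below, assuming each trajectory records its per-step Q-values and correctness (field names are illustrative):

```python
def select_sft_trajectories(trajectories: list[dict], top_k: int = 2) -> list[dict]:
    """Keep the top-k correct trajectories ranked by their average per-step Q-value."""
    correct = [t for t in trajectories if t["is_correct"] and t["step_q_values"]]
    avg_q = lambda t: sum(t["step_q_values"]) / len(t["step_q_values"])
    return sorted(correct, key=avg_q, reverse=True)[:top_k]
```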
**Training PPM. The PPM is initialized from the fine-tuned policy model, with its next-token prediction**
head replaced by a scalar-value head consisting of a linear layer and a tanh function to constrain
outputs to the range [-1, 1]. We filter out math problems where all solution trajectories are fully
correct or incorrect. For problems with mixed outcomes, we select two positive and two negative
examples for each step based on Q-values, which are used as preference pairs for training data.
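For illustration, a minimal PyTorch sketch of such a scalar-value head is given below; scoring the sequence from the final token's hidden state is an assumption, not a detail stated in the paper.

```python
import torch
import torch.nn as nn

class ScalarValueHead(nn.Module):
    """Replaces the next-token prediction head: a linear layer followed by tanh,
    constraining the predicted reward to [-1, 1]."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, last_hidden_state: torch.Tensor) -> torch.Tensor:
        # last_hidden_state: (batch, seq_len, hidden_size); score from the final token.
        return torch.tanh(self.linear(last_hidden_state[:, -1, :])).squeeze(-1)
```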
**3.4.2** **Recipe for Self-Evolution**
Due to the weaker capabilities of SLMs, we perform four rounds of MCTS deep thinking to progressively generate higher-quality data and expand the training set with more challenging math problems.
-----
Table 2: Percentage of the 747k math problems correctly solved in each round. Only problems with correct solutions are included in the training set. The first round uses DeepSeek-Coder-V2-Instruct as the policy LLM, while later rounds use our fine-tuned 7B policy SLM.
| Round # | Models in MCTS | GSM-level | MATH-level | Olympiad-level | All |
|---|---|---|---|---|---|
| Round 1 | DeepSeek-Coder-V2-Instruct | 96.61% | 67.36% | 20.99% | 60.17% |
| Round 2 | policy SLM-r1 | 97.88% | 67.40% | 56.04% | 66.60% |
| Round 3 | policy SLM-r2, PPM-r2 | 98.15% | 88.69% | 62.16% | 77.86% |
| Round 4 | policy SLM-r3, PPM-r3 | 98.15% | 94.53% | 80.58% | 90.25% |
Table 3: Pass@1 accuracy of the resulting policy SLM in each round, showing continuous improvement until surpassing the bootstrap model.
| Round # | MATH | AIME 2024 | AMC 2023 | Olympiad Bench | College Math | GSM8K | GaokaoEn 2023 |
|---|---|---|---|---|---|---|---|
| DeepSeek-Coder-V2-Instruct (bootstrap model) | 75.3 | 13.3 | 57.5 | 37.6 | 46.2 | 94.9 | 64.7 |
| Base (Qwen2.5-Math-7B) | 58.8 | 0.0 | 22.5 | 21.8 | 41.6 | 91.6 | 51.7 |
| policy SLM-r1 | 69.6 | 3.3 | 30.0 | 34.7 | 44.5 | 88.4 | 57.4 |
| policy SLM-r2 | 73.6 | 10.0 | 35.0 | 39.0 | 45.7 | 89.1 | 59.7 |
| policy SLM-r3 | 75.8 | 16.7 | 45.0 | 44.1 | 49.6 | 89.3 | 62.8 |
| policy SLM-r4 | 78.4 | 26.7 | 47.5 | 47.1 | 52.5 | 89.7 | 65.7 |
Each round uses MCTS to generate step-by-step verified reasoning trajectories, which are then used
to train the new policy SLM and PPM. The new models are then applied in the next round to generate
higher-quality training data. Fig. 1(c) and Table 2 detail the models used for data generation in each
round, along with the identifiers of the trained policy model and PPM. Next, we outline the details
and specific improvements targeted in each round.
**Round 1: Bootstrapping an initial strong policy SLM-r1. To enable SLMs to self-generate**
reasonably good training data, we perform a bootstrap round to fine-tune an initial strong policy
model, denoted as SLM-r1. As shown in Table 2, we run MCTS with DeepSeek-Coder-V2-Instruct
(236B) to collect the SFT data. With no available reward model in this round, we use terminal-guided
annotation for Q-values and limit MCTS to 8 rollouts for efficiency. For correct solutions, the top-2
trajectories with the highest average Q-values are selected as SFT data. We also train PPM-r1, but the limited rollouts yield unreliable Q-values, affecting the effectiveness of PPM-r1 (Table 4).
**Round 2: Training a reliable PPM-r2. In this round, with the policy model updated to the 7B**
SLM-r1, we conduct extensive MCTS rollouts for more reliable Q-value annotation and train the
first reliable reward model, PPM-r2. Specifically, we perform 16 MCTS rollouts per problem. The
resulting step-by-step verified reasoning trajectories show significant improvements in both quality
and Q-value precision. As shown in Table 4, PPM-r2 is notably more effective than in the bootstrap
round. Moreover, the policy SLM-r2 also continues to improve as expected (Table 3).
**Round 3: PPM-augmented MCTS to significantly improve data quality. With the reliable**
PPM-r2, we perform PPM-augmented MCTS in this round to generate data, leading to significantly
higher-quality trajectories that cover more math and Olympiad-level problems in the training set
(Table 2). The generated reasoning trajectories and self-annotated Q-values are then used to train the
new policy SLM-r3 and PPM-r3, both of which show significant improvements.
**Round 4: Solving challenging math problems. After the third round, while grade school and**
MATH problems achieve high success rates, only 62.16% of Olympiad-level problems are included in
the training set. This is NOT solely due to weak reasoning abilities in our SLMs, as many Olympiad
problems remain unsolved by GPT-4 or o1. To improve coverage, we adopt a straightforward strategy.
For unsolved problems after 16 MCTS rollouts, we perform an additional 64 rollouts, and if needed,
increase to 128. We also conduct multiple MCTS tree expansions with different random seeds. This
boosts the success rate of Olympiad-level problems to 80.58%.
After four rounds of self-evolution, 90.25% of the 747k math problems are successfully incorporated into
the training set, as shown in Table 2. Among the remaining unsolved problems, a significant portion
consists of synthetic questions. We manually review a random sample of 20 problems and find that 19
are incorrectly labeled with wrong answers. Based on this, we conclude that the remaining unsolved
problems are of low quality and thus terminate the self-evolution at round 4.
-----
Table 4: The quality of the PPM consistently improves across rounds. The policy model is fixed to policy SLM-r1 for a fair comparison.
| Round # | MATH | AIME 2024 | AMC 2023 | Olympiad Bench | College Math | GSM8K | GaokaoEn 2023 |
|---|---|---|---|---|---|---|---|
| PPM-r1 | 75.2 | 10.0 | 57.5 | 35.7 | 45.4 | 90.9 | 60.3 |
| PPM-r2 | 84.1 | 26.7 | 75.0 | 52.7 | 54.2 | 93.3 | 73.0 |
| PPM-r3 | 85.2 | 33.3 | 77.5 | 59.5 | 55.6 | 93.9 | 76.6 |
| PPM-r4 | 87.0 | 43.3 | 77.5 | 61.5 | 56.8 | 94.2 | 77.8 |
### 4 Evaluation
**4.1** **Setup**
**Evaluation Datasets. We evaluate rStar-Math on diverse mathematical benchmarks. In addition to**
the widely-used GSM8K [Cobbe et al., 2021], we include challenging benchmarks from multiple
domains: (i) competition and Olympiad-level benchmarks, such as MATH-500 [Lightman et al.,
2023], AIME 2024 [AI-MO, 2024a], AMC 2023 [AI-MO, 2024b] and Olympiad Bench [He et al.,
2024]. Specifically, AIME is an exam designed to challenge the brightest high school math students in America, with the 2024 dataset comprising 30 problems from the AIME I and II exams; (ii) college-level math problems from College Math [Tang et al., 2024]; and (iii) an out-of-domain math benchmark: GaoKao (Chinese College Entrance Exam) En 2023 [Liao et al., 2024].
**Base Models and Setup. rStar-Math is a general approach applicable to various LLMs. To show**
its effectiveness and generalizability, we use SLMs of different sizes as the base policy models:
Qwen2.5-Math-1.5B [Qwen, 2024b], Phi3-mini-Instruct (3B) [Microsoft, 2024, Abdin et al., 2024],
Qwen2-Math-7B [Qwen, 2024a] and Qwen2.5-Math-7B [Qwen, 2024c]. Among these, Phi3-mini-Instruct is a general-purpose SLM without specialization in math reasoning.
Due to limited GPU resources, we performed 4 rounds of self-evolution exclusively on Qwen2.5-Math-7B, yielding 4 evolved policy SLMs (Table 3) and 4 PPMs (Table 4). For the other 3 policy
LLMs, we fine-tune them using step-by-step verified trajectories generated from Qwen2.5-Math-7B’s
4th round. The final PPM from this round is then used as the reward model for the 3 policy SLMs.
**Baselines. rStar-Math is a System 2 method. We compare it against three strong baselines representing**
both System 1 and System 2 approaches: (i) Frontier LLMs, including GPT-4o, the latest Claude,
OpenAI o1-preview and o1-mini. We measure their accuracy on AMC 2023, Olympiad Bench,
College Math, Gaokao and GSM8K, with accuracy numbers for other benchmarks taken from
public technical reports [Team, 2024a]. (ii) Open-sourced superior reasoning models, including
DeepSeek-Coder-v2-Instruct, Mathstral [Team, 2024b], NuminaMath-72B [Jia LI and Polu, 2024a],
and LLaMA3.1 [Dubey et al., 2024], which represent the current mainstream System 1 approaches
for improving LLM math reasoning. (iii) Both System 1 and System 2 performance of the base models trained by the original model teams, including Instruct versions (e.g., Qwen2.5-Math-7B-Instruct)
and Best-of-N (e.g., Qwen2.5-Math-72B-Instruct+Qwen2.5-Math-RM-72B). Notably, the reward
model used for the three Qwen base models is a 72B ORM, significantly larger than our 7B PPM.
**Evaluation Metric. We report Pass@1 accuracy for all baselines. For System 2 baselines, we use**
default evaluation settings, such as default thinking time for o1-mini and o1-preview. For Qwen
models with Best-of-N, we re-evaluate MATH-500 and AIME/AMC accuracy; results on other benchmarks are taken from their technical reports. For a fair comparison, rStar-Math runs MCTS to generate the same number of solutions as Qwen. Specifically, we generate 16 trajectories for AIME/AMC and 8 for other benchmarks, using the PPM to select the best solution. We also report
performance with increased test-time computation using 64 trajectories, denoted as rStar-Math[64].
**4.2** **Main Results**
**Results on diverse challenging math benchmarks.** Table 5 shows the results of rStar-Math compared to state-of-the-art reasoning models. We highlight three key observations: (1) rStar-Math significantly improves SLMs' math reasoning capabilities, achieving performance comparable to or surpassing OpenAI o1 with a substantially smaller model size (1.5B-7B). For example, Qwen2.5-Math-7B, originally at 58.8% accuracy on MATH, improved dramatically to 90.0% with rStar-Math,
outperforming o1-preview and Claude 3.5 Sonnet while matching o1-mini. On the College Math
benchmark, rStar-Math exceeds o1-mini by 2.7%. On AIME 2024, rStar-Math scored 53.3%, ranking
just below o1-mini, with the 7B model solving 8/15 problems in both AIME I and II, placing in
the top 20% of the brightest high school math students. Notably, 8 of the unsolved problems were
-----
Table 5: The results of rStar-Math and other frontier LLMs on the most challenging math benchmarks.
rStar-Math[64] shows the Pass@1 accuracy achieved when sampling 64 trajectories.
Model Method MATH AIME 2024 AMC 2023 Olympiad Bench College Math GSM8K GaokaoEn 2023 (OOD)
_Frontier LLMs_
GPT-4o System 1 76.6 9.3 47.5 43.3 48.5 92.9 67.5
Claude3.5-Sonnet System 1 78.3 16.0 - - - 96.4 -
GPT-o1-preview - 85.5 44.6 90.0 - - - -
GPT-o1-mini - **90.0** **56.7** **95.0** **65.3** 57.8 94.8 78.4
_Open-Sourced Reasoning LLMs_
DeepSeek-Coder-V2-Instruct System 1 75.3 13.3 57.5 37.6 46.2 94.9 64.7
Mathstral-7B-v0.1 System 1 57.8 0.0 37.5 21.5 33.7 84.9 46.0
NuminaMath-72B-CoT System 1 64.0 3.3 70.0 32.6 39.7 90.8 58.4
LLaMA3.1-8B-Instruct System 1 51.4 6.7 25.0 15.4 33.8 76.6 38.4
LLaMA3.1-70B-Instruct System 1 65.4 23.3 50.0 27.7 42.5 94.1 54.0
Qwen2.5-Math-72B-Instruct System 1 85.6 30.0 70.0 49.0 49.5 95.9 71.9
Qwen2.5-Math-72B-Instruct+72B ORM System 2 85.8 36.7 72.5 54.5 50.6 96.4 76.9
_General Base Model: Phi3-mini-Instruct (3.8B)_
Phi3-mini-Instruct (base model) System 1 41.4 3.33 7.5 12.3 33.1 85.7 37.1
**rStar-Math (3.8B SLM+7B PPM)** System 2 **85.4** **40.0** **77.5** **59.3** **58.0** **94.5** **77.1**
**rStar-Math[64]** **(3.8B SLM+7B PPM)** System 2 **86.4** **43.3** **80.0** **60.3** **59.1** **94.7** **77.7**
_Math-Specialized Base Model: Qwen2.5-Math-1.5B_
Qwen2.5-Math-1.5B (base model) System 1 51.2 0.0 22.5 16.7 38.4 74.6 46.5
Qwen2.5-Math-1.5B-Instruct System 1 60.0 10.0 60.0 38.1 47.7 84.8 65.5
Qwen2.5-Math-1.5B-Instruct+72B ORM System 2 83.4 20.0 72.5 47.3 50.2 94.1 73.0
**rStar-Math (1.5B SLM+7B PPM)** System 2 **87.8** **46.7** **80.0** **63.5** **59.0** **94.3** **77.7**
**rStar-Math[64]** **(1.5B SLM+7B PPM)** System 2 **88.6** **46.7** **85.0** **64.6** **59.3** **94.8** **79.5**
_Math-Specialized Base Model: Qwen2-Math-7B_
Qwen2-Math-7B (base model) System 1 53.4 3.3 25.0 17.3 39.4 80.4 47.3
Qwen2-Math-7B-Instruct System 1 73.2 13.3 62.5 38.2 45.9 89.9 62.1
Qwen2-Math-7B-Instruct+72B ORM System 2 83.4 23.3 62.5 47.6 47.9 **95.1** 71.9
**rStar-Math (7B SLM+7B PPM)** System 2 **88.2** **43.3** **80.0** **63.1** **58.4** 94.6 **78.2**
**rStar-Math[64]** **(7B SLM+7B PPM)** System 2 **88.6** **46.7** **85.0** **63.4** **59.3** 94.8 **79.2**
_Math-Specialized Base Model: Qwen2.5-Math-7B_
Qwen2.5-Math-7B (base model) System 1 58.8 0.0 22.5 21.8 41.6 91.6 51.7
Qwen2.5-Math-7B-Instruct System 1 82.6 6.0 62.5 41.6 46.8 95.2 66.8
Qwen2.5-Math-7B-Instruct+72B ORM System 2 88.4 26.7 75.0 49.9 49.6 **97.9** 75.1
**rStar-Math (7B SLM+7B PPM)** System 2 **89.4** **50.0** **87.5** **65.3** **59.0** 95.0 **80.5**
**rStar-Math[64]** **(7B SLM+7B PPM)** System 2 **90.0** **53.3** **87.5** **65.6** **60.5** 95.2 **81.3**
geometry-based, requiring visual understanding, a capability rStar-Math currently does not support.
**(2) Despite using smaller policy models (1.5B-7B) and reward models (7B), rStar-Math significantly**
outperforms state-of-the-art System 2 baselines. Compared to Qwen Best-of-N baselines, which
use the same base models (Qwen2-Math-7B, Qwen2.5-Math-1.5B/7B) but a 10× larger reward model (Qwen2.5-Math-RM-72B), rStar-Math consistently improves the reasoning accuracy of all base models to state-of-the-art levels. Even against Best-of-N with a 10× larger Qwen2.5-Math-72B-Instruct policy model, rStar-Math surpasses it on all benchmarks except GSM8K, using the same
number of sampled solutions. (3) Beyond well-known benchmarks like MATH, GSM8K, and AIME,
which may risk over-optimization, rStar-Math shows strong generalizability on other challenging
math benchmarks, including Olympiad Bench, College Math, and the Chinese College Entrance
Math Exam (Gaokao), setting new state-of-the-art scores. As discussed in Sec. 3.4, our training set is
primarily sourced from public datasets, with no specific optimizations for these benchmarks.
**Scaling up test-time computation. rStar-Math uses MCTS to augment the policy model, searching**
solutions guided by the PPM. By increasing test-time computation, it explores more trajectories,
potentially improving performance. In Fig. 3, we show the impact of test-time compute scaling
by comparing the accuracy of the official Qwen Best-of-N across different numbers of sampled
trajectories on four challenging math benchmarks. Sampling only one trajectory corresponds to the
policy LLM’s Pass@1 accuracy, indicating a fallback to System 1 reasoning.
Figure 3: Reasoning performance under scaling up test-time compute.
We highlight two key observations: (1) With only 4 trajectories, rStar-Math significantly outperforms Best-of-N baselines,
exceeding o1-preview and approaching o1-mini, demonstrating its effectiveness. (2) Scaling test-time
compute improves reasoning accuracy across all benchmarks, though with varying trends. On Math,
AIME, and Olympiad Bench, rStar-Math shows saturation or slow improvement at 64 trajectories,
while on College Math, performance continues to improve steadily.
**4.3** **Ablation Study and Analysis**
We ablate the effectiveness of our three innovations. For System 2-style inference, Pass@1 accuracy
is measured with 16 trajectories for AIME and AMC, and 8 for other benchmarks.
Table 6: The continuously improved math reasoning capabilities through rStar-Math self-evolved
deep thinking. Starting from round 2, the 7B base model powered by rStar-Math surpasses GPT-4o.
Round# MATH AIME 2024 AMC 2023 Olympiad Bench College Math GSM8K GaokaoEn 2023
GPT-4o 76.6 9.3 47.5 43.3 48.5 92.9 67.5
Base 7B model 58.8 0.0 22.5 21.8 41.6 91.6 51.7
rStar-Math Round 1 75.2 10.0 57.5 35.7 45.4 90.9 60.3
rStar-Math Round 2 86.6 43.3 75.0 59.4 55.6 94.0 76.4
rStar-Math Round 3 87.0 46.7 80.0 61.6 56.5 94.2 77.1
rStar-Math Round 4 **89.4** **50.0** **87.5** **65.3** **59.0** **95.0** **80.5**
**The effectiveness of self-evolution. The impressive results in Table 5 are achieved after 4 rounds of**
rStar-Math self-evolved deep thinking. Table 6 shows the math reasoning performance in each round,
demonstrating a continuous improvement in accuracy. In round 1, the main improvement comes from
applying SFT to the base model. Round 2 brings a significant boost with the application of a stronger
PPM in MCTS, which unlocks the full potential of System 2 deep reasoning. Notably, starting from
round 2, rStar-Math outperforms GPT-4o. Rounds 3 and 4 show further improvements, driven by
stronger System 2 reasoning through better policy SLMs and PPMs.
**The effectiveness of step-by-step verified reasoning trajectory. rStar-Math generates step-by-step**
verified reasoning trajectories, which eliminate erroneous intermediate steps and further expand the training set with more challenging problems. To evaluate its effectiveness, we use the data generated from
round 4 as SFT training data and compare it against three strong baselines: (i) GPT-distillation, which
includes open-sourced CoT solutions synthesized using GPT-4, such as MetaMath [Yu et al., 2023b],
NuminaMath-CoT [Jia LI and Polu, 2024b]; (ii) Random sampling from self-generation, which uses the same policy model (i.e., policy SLM-r3) to randomly generate trajectories; (iii) Rejection
sampling, where 32 trajectories are randomly sampled from the policy model, with high-quality
solutions ranked by our trained ORM (appendix A.1). For fairness, we select two correct trajectories
for each math problem in baseline (ii) and (iii). All SFT experiments use the same training recipe.
Table 7 shows the math reasoning accuracy of Qwen2.5-Math-7B fine-tuned on different datasets.
We highlight two observations: (i) Fine-tuning with our step-by-step verified trajectories significantly outperforms all other baselines. This is primarily due to our PPM-augmented MCTS for
code-augmented CoT synthesis, which provides denser verification during math solution generation.
It proves more effective than both random sampling, which lacks verification, and rejection sampling,
-----
Table 7: Ablation study on the effectiveness of our step-by-step verified reasoning trajectories as the
SFT dataset. We report the SFT accuracy of Qwen2.5-Math-7B fine-tuned with different datasets.
Dataset MATH AIME AMC Olympiad Bench College Math GSM8K GaokaoEn 2023
GPT-4o - 76.6 9.3 47.5 43.3 48.5 **92.9** **67.5**
GPT4-distillation MetaMath 55.2 3.33 32.5 19.1 39.2 85.1 43.6
(Open-sourced) NuminaMath-CoT 69.6 10.0 **50.0** 37.2 43.4 89.8 59.5
Self-generation by policy SLM-r3:
Random sample 72.4 10.0 45.0 41.0 48.0 87.5 57.1
Rejection sampling 73.4 13.3 47.5 44.7 50.8 89.3 61.7
**Step-by-step verified (ours) 78.4** **26.7** 47.5 **47.1** **52.5** 89.7 65.7
where the ORM provides only sparse verification. (ii) Even randomly sampled code-augmented CoT solutions from our SLM yield comparable or better performance than the GPT-4-synthesized NuminaMath and MetaMath datasets. This indicates that our policy SLMs, after rounds of self-evolution, can generate high-quality math solutions. These results demonstrate the strong potential of our method to self-generate high-quality reasoning data without relying on advanced LLM distillation.
**The effectiveness of PPM. We train both a strong ORM and Q-value score-based PRM (PQM) for**
comparison. To ensure a fair evaluation, we use the highest-quality training data: the step-by-step
verified trajectories generated in round 4, with selected math problems matching those used for PPM
training. Similar to the PPM, we use step-level Q-values to select positive and negative trajectories
for each math problem. The ORM is trained using a pairwise ranking loss [Ouyang et al., 2022],
while the PQM follows [Chen et al., 2024, Zhang et al., 2024a] to use Q-values as reward labels and
optimize with MSE loss. Detailed training settings are provided in Appendix A.1.
Table 8: Ablation study on the reward model. Process reward models (PQM and PPM) outperform
ORM, with PPM pushing the frontier of math reasoning capabilities.
RM Inference MATH AIME AMC Olympiad Bench College Math GSM8K GaokaoEn
o1-mini - **90.0** **56.7** **95.0** **65.3** 55.6 94.8 78.6
ORM Best-of-N 82.6 26.7 65.0 55.1 55.5 92.3 72.5
PQM MCTS 88.2 46.7 85.0 62.9 **57.6** 94.6 **79.5**
PPM MCTS **89.4** **50.0** **87.5** **65.3** **59.0** **95.0** **80.5**
Table 8 compares the performance of ORM, PQM, and PPM for System 2 reasoning using our final
round policy model. ORM provides reward signals only at the end of problem solving, so we use
the Best-of-N method, while PRM and PPM leverage MCTS-driven search. As shown in Table 8,
both PQM and PPM outperform ORM by providing denser step-level reward signals, leading to
higher accuracy on complex math reasoning tasks. However, PQM struggles on more challenging
benchmarks, such as MATH and Olympiad Bench, due to the inherent imprecision of Q-values. In
contrast, PPM constructs step-level preference data for training, enabling our 7B policy model to
achieve comparable or superior performance to o1-mini across all benchmarks.
### 5 Findings and Discussions
**The emergence of intrinsic self-reflection capability. A key breakthrough in OpenAI o1 is its**
intrinsic self-reflection capability. When the model makes an error, it recognizes the mistake and can
self-correct with a correct answer [Noam Brown and Lightman, 2024]. Yet it has consistently been
found to be largely ineffective in open-sourced LLMs. The community has actively explored various
approaches, including self-correction [Huang et al., 2023, Kumar et al., 2024], self-reflection [Renze
and Guven, 2024, Shinn et al., 2024], to explicitly train or prompt LLMs to develop such capability.
In our experiments, we unexpectedly observe that our MCTS-driven deep thinking exhibits self-reflection during problem-solving. As shown in Fig. 4, the model initially formalizes an equation using
SymPy in the first three steps, which would lead to an incorrect answer (left branch). Interestingly,
in the fourth step (right branch), the policy model recognizes the low quality of its earlier steps and
refrains from continuing along the initial problem-solving path. Instead, it backtracks and resolves
the problem using a new, simpler approach, ultimately arriving at the correct answer. An additional
example of self-correction is provided in Appendix A.2. Notably, no self-reflection training data or
prompt was included, suggesting that advanced System 2 reasoning can foster intrinsic self-reflection.
-----
Figure 5: Pass@1 accuracy of policy models and their accuracy after applying System 2 reasoning with various reward models, showing that reward models primarily determine the final performance.
**PPM shapes the reasoning boundary in System 2 deep thinking.** Both the policy and reward models are crucial for System 2 deep reasoning. Our experiments show that once the policy model attains a reasonably strong capability level (see Appendix A.1), the PPM becomes the key determinant of
the upper performance limit. Fig. 5 summarizes the accuracy of policy models of different sizes, as
well as the improvements achieved with reward models. Despite variations in Pass@1 accuracy due
to differences in training strategies, datasets, and model scales, the reward model proves to be the
dominant factor in System 2 reasoning. For instance, although the SFT accuracy of rStar-Math-7B is
lower than Qwen2.5-Math-72B-Instruct, pairing it with our 7B PPM allows rStar-Math to outperform
the 72B policy model with Qwen 72B ORM. Moreover, despite varying Pass@1 accuracy across our
three policy SLM sizes, the final reasoning accuracy converges after applying the PPM.
**PPM spots theorem-application steps. When solving challenging math problems, identifying and**
applying relevant theorems or key conclusions often form the cornerstone of successful problem-solving [Xin et al., 2024]. In our experiments, we find that during rStar-Math problem-solving, our PPM effectively identifies critical theorem-application intermediate steps within the policy model's deep
thinking process. These steps are predicted with high reward scores, guiding the policy model to
generate the correct solution. Appendix A.2 provides examples where the PPM successfully identifies
key theorems such as Fermat’s little theorem [Weisstein, a], Vieta’s formulas [Weisstein, b], the
AM-GM inequality [amg], the Pythagorean theorem [pyt], and the Shoelace Theorem [sho], etc.
-----
**Generalization discussions. rStar-Math offers a general methodology for improving LLM reasoning**
applicable to various domains. First, rStar-Math can generalize to more challenging math tasks,
such as theorem proving, though its current focus is on word problems due to dataset limitations.
Nonetheless, rStar-Math demonstrates the potential to prove mathematical statements. As shown in
Appendix A.2, it successfully proves an Olympiad-level problem involving Fermat’s Little Theorem,
providing a step-by-step correct proof through its deep reasoning process. Second, rStar-Math can generalize to other domains, such as code and commonsense reasoning. Notably, synthesizing step-by-step verified training trajectories for general reasoning requires a mechanism to provide feedback
on whether a given trajectory reaches the desired output at the end of MCTS rollout. For instance,
in code reasoning, this could involve designing extensive test cases; in general reasoning, feedback
could be obtained through human labeling or mutual verification with another LLM [Qi et al., 2024].
### 6 Conclusion
In this work, we present rStar-Math, a self-evolved System 2 deep thinking approach that significantly
boosts the math reasoning capabilities of small LLMs, achieving state-of-the-art OpenAI o1-level
performance. Our approach demonstrates that SLMs can self-generate high-quality training data
for frontier-level math reasoning. Extensive experiments across four different-sized SLMs and
challenging math benchmarks demonstrate the superiority of rStar-Math, achieving leading results and outperforming existing math reasoning LLMs and Best-of-N baselines. We also reveal key findings, including the emergence of self-reflection and the effectiveness of the PPM in identifying critical intermediate steps, such as theorem-application steps. Finally, rStar-Math could achieve further improvements by collecting more challenging math problems; we leave this as future work.
### Acknowledgement
In the early stages of this work, we faced significant challenges due to limited GPU resources and
restricted access to the GPT-4 API. We are deeply grateful to Qiufeng Yin and Chengmin Chi for
their assistance in collecting math problems and providing GPT-4 resources for new math problem
synthesis. Special thanks go to our colleagues, Lingxiao Ma, Ying Cao, Baotong Lu, Jing Liu, Jiahang
Xu, Chengruidong Zhang, Siyuan Wang, Gaokai Zhang, Yujian Li, and Yang Wang, for generously
sharing their GPU quotas with us.
### References
[Inequality of arithmetic and geometric means. URL https://artofproblemsolving.com/wiki/](https://artofproblemsolving.com/wiki/index.php/AM-GM_Inequality)
[index.php/AM-GM_Inequality.](https://artofproblemsolving.com/wiki/index.php/AM-GM_Inequality)
[Pythagorean theorem. URL https://en.wikipedia.org/wiki/Pythagorean_theorem.](https://en.wikipedia.org/wiki/Pythagorean_theorem)
Shoelace theorem. [URL https://artofproblemsolving.com/wiki/index.php/Shoelace_](https://artofproblemsolving.com/wiki/index.php/Shoelace_Theorem)
[Theorem.](https://artofproblemsolving.com/wiki/index.php/Shoelace_Theorem)
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany
Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. Phi-3 technical report:
A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024.
AI-MO. Aime 2024, 2024a. URL [https://huggingface.co/datasets/AI-MO/](https://huggingface.co/datasets/AI-MO/aimo-validation-aime)
[aimo-validation-aime.](https://huggingface.co/datasets/AI-MO/aimo-validation-aime)
AI-MO. Amc 2023, 2024b. URL [https://huggingface.co/datasets/AI-MO/](https://huggingface.co/datasets/AI-MO/aimo-validation-amc)
[aimo-validation-amc.](https://huggingface.co/datasets/AI-MO/aimo-validation-amc)
Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and
Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling.
_arXiv preprint arXiv:2407.21787, 2024._
Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Alphamath almost zero: process supervision
without process, 2024.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
-----
Kahneman Daniel. Thinking, fast and slow. Macmillan, 2011.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models.
_arXiv preprint arXiv:2407.21783, 2024._
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen,
et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint
_arXiv:2309.17452, 2023._
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu.
Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992,
2023.
Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu,
Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for
promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint
_arXiv:2402.14008, 2024._
Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song,
and Denny Zhou. Large language models cannot self-correct reasoning yet. arXiv preprint
_arXiv:2310.01798, 2023._
Zhen Huang, Haoyang Zou, Xuefeng Li, Yixiu Liu, Yuxiang Zheng, Ethan Chern, Shijie Xia, Yiwei Qin, Weizhe Yuan, and Pengfei Liu. O1 replication journey – part 2: Surpassing o1-preview through simple distillation, big progress or bitter lesson? GitHub, 2024. URL https://github.com/GAIR-NLP/O1-Journey.
Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu. NuminaMath. https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf, 2024a.
Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu. NuminaMath CoT, 2024b. URL https://huggingface.co/datasets/AI-MO/NuminaMath-CoT.
Jikun Kang, Xin Zhe Li, Xi Chen, Amirreza Kazemi, and Boxing Chen. Mindstar: Enhancing math
reasoning in pre-trained llms at inference time. arXiv preprint arXiv:2405.16265, 2024.
Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. volume 2006, pages
282–293, 09 2006. ISBN 978-3-540-45375-8. doi: 10.1007/11871842_29.
Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli,
Shariq Iqbal, Colton Bishop, Rebecca Roelofs, et al. Training language models to self-correct via
reinforcement learning. arXiv preprint arXiv:2409.12917, 2024.
Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, et al. Measuring faithfulness in
chain-of-thought reasoning. arXiv preprint arXiv:2307.13702, 2023.
Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, and
Houwen Peng. Common 7b language models already possess strong math capabilities. arXiv
_preprint arXiv:2403.04706, 2024._
Minpeng Liao, Wei Luo, Chengxi Li, Jing Wu, and Kai Fan. Mario: Math reasoning with code
interpreter output–a reproducible pipeline. arXiv preprint arXiv:2401.08190, 2024.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
_arXiv:2305.20050, 2023._
-----
Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. In The Twelfth
_[International Conference on Learning Representations, 2024. URL https://openreview.net/](https://openreview.net/forum?id=v8L0pN6EOi)_
[forum?id=v8L0pN6EOi.](https://openreview.net/forum?id=v8L0pN6EOi)
Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao,
Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint
_arXiv:2412.19437, 2024._
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng,
Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical
reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583,
2023.
Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun
Zhu, Lei Meng, Jiao Sun, et al. Improve mathematical reasoning in language models by automated
process supervision. arXiv preprint arXiv:2406.06592, 2024.
Microsoft. Phi-3-mini-4k-instruct, 2024. [URL https://huggingface.co/microsoft/](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
[Phi-3-mini-4k-instruct.](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
Ilge Akkaya Noam Brown and Hunter Lightman. Openai’s noam brown, ilge akkaya and hunter
[lightman on o1 and teaching llms to reason better, 2024. URL https://www.youtube.com/](https://www.youtube.com/watch?v=jPluSXJpdrA)
[watch?v=jPluSXJpdrA.](https://www.youtube.com/watch?v=jPluSXJpdrA)
OpenAI. Gpt-4 technical report. 2023.
OpenAI. Openai o1 system card. preprint, 2024.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong
Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow
instructions with human feedback. Advances in Neural Information Processing Systems, 35:
27730–27744, 2022.
Zhenting Qi, Mingyuan Ma, Jiahang Xu, Li Lyna Zhang, Fan Yang, and Mao Yang. Mutual reasoning
makes smaller llms stronger problem-solvers. arXiv preprint arXiv:2408.06195, 2024.
[Qwen. Qwen2-math-7b, 2024a. URL https://huggingface.co/Qwen/Qwen2-Math-7B.](https://huggingface.co/Qwen/Qwen2-Math-7B)
[Qwen. Qwen2.5-math-1.5b, 2024b. URL https://huggingface.co/Qwen/Qwen2.5-Math-1.5B.](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B)
[Qwen. Qwen2.5-math-7b, 2024c. URL https://huggingface.co/Qwen/Qwen2.5-Math-7B.](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
Matthew Renze and Erhan Guven. Self-reflection in llm agents: Effects on problem-solving performance. arXiv preprint arXiv:2405.06682, 2024.
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion:
Language agents with verbal reinforcement learning. Advances in Neural Information Processing
_Systems, 36, 2024._
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez,
Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi
by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815,
2017.
Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally
can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.
Zhengyang Tang, Xingxing Zhang, Benyou Wan, and Furu Wei. Mathscale: Scaling instruction
tuning for mathematical reasoning. arXiv preprint arXiv:2403.02884, 2024.
Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown, November 2024a. URL
[https://qwenlm.github.io/blog/qwq-32b-preview/.](https://qwenlm.github.io/blog/qwq-32b-preview/)
-----
[The Mistral AI Team. Mathstral-7b-v0.1, 2024b. URL https://huggingface.co/mistralai/](https://huggingface.co/mistralai/Mathstral-7B-v0.1)
[Mathstral-7B-v0.1.](https://huggingface.co/mistralai/Mathstral-7B-v0.1)
Shubham Toshniwal, Wei Du, Ivan Moshkov, Branislav Kisacanin, Alexan Ayrapetyan, and Igor
Gitman. Openmathinstruct-2: Accelerating ai for math with massive open-source instruction data.
_arXiv preprint arXiv:2410.01560, 2024._
Karthik Valmeekam, Sarath Sreedharan, Matthew Marquez, Alberto Olmo, and Subbarao Kambhampati. On the planning abilities of large language models (a critical investigation with a proposed
benchmark). arXiv preprint arXiv:2302.06706, 2023.
Chaojie Wang, Yanchen Deng, Zhiyi Lv, Shuicheng Yan, and An Bo. Q*: Improving multi-step
reasoning for llms with deliberative planning, 2024a.
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi Song,
Mingjie Zhan, and Hongsheng Li. Mathcoder: Seamless code integration in LLMs for enhanced
mathematical reasoning. In The Twelfth International Conference on Learning Representations,
[2024b. URL https://openreview.net/forum?id=z8TW0ttBPp.](https://openreview.net/forum?id=z8TW0ttBPp)
Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, and Zhifang
Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations, 2024c.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In The Eleventh International Conference on Learning Representations, 2023. URL
[https://openreview.net/forum?id=1PL1NIMMrw.](https://openreview.net/forum?id=1PL1NIMMrw)
Eric W. Weisstein. Fermat’s little theorem, a. [URL https://mathworld.wolfram.com/](https://mathworld.wolfram.com/FermatsLittleTheorem.html)
[FermatsLittleTheorem.html.](https://mathworld.wolfram.com/FermatsLittleTheorem.html)
[Eric W. Weisstein. Vieta’s formulas, from mathworld—a wolfram web resource, b. URL http:](http://mathworld.wolfram.com/Tree.html)
[//mathworld.wolfram.com/Tree.html.](http://mathworld.wolfram.com/Tree.html)
Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv preprint
_arXiv:2408.00724, 2024._
Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu, Chong Ruan, Wenda Li,
and Xiaodan Liang. Deepseek-prover: Advancing theorem proving in llms through large-scale
synthetic data. arXiv preprint arXiv:2405.14333, 2024.
An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2. 5-math technical report: Toward mathematical
expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan.
Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural
_Information Processing Systems, 36, 2024._
Fei Yu, Anningzhe Gao, and Benyou Wang. Outcome-supervised verifiers for planning in mathematical reasoning. arXiv preprint arXiv:2311.09724, 2023a.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo
Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for
large language models. arXiv preprint arXiv:2309.12284, 2023b.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou,
and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language
models. arXiv preprint arXiv:2308.01825, 2023.
Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, and Jie Tang. Rest-mcts*: Llm
self-training via process reward guided tree search. arXiv preprint arXiv:2406.03816, 2024a.
-----
Di Zhang, Jiatong Li, Xiaoshui Huang, Dongzhan Zhou, Yuqiang Li, and Wanli Ouyang. Accessing
gpt-4 level mathematical olympiad solutions via monte carlo tree self-refine with llama-3 8b. arXiv
_preprint arXiv:2406.07394, 2024b._
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica.
Judging LLM-as-a-judge with MT-bench and chatbot arena. In Thirty-seventh Conference on
_Neural Information Processing Systems Datasets and Benchmarks Track, 2023._
-----
### A Appendix
**A.1** **Additional Experiments and Details**
**Data Generation Details. As detailed in Sec. 3.4, each round starts by self-generating step-by-step**
verified trajectories for 747k math word problems. The maximum tree depth d is set to 16, with 16
MCTS rollouts conducted per problem by default. At each step, we explore 8 candidate nodes, and the constant c in Eq. 1 is set to 2 to promote greater exploration. In the bootstrap round,
due to the large size of the initial policy model (236B), we used smaller parameters: 8 rollouts and
5 candidate nodes per step. To improve the accuracy of solving challenging problems in round 4,
we increase the number of candidate nodes to 16 and conduct 2 MCTS tree expansions per problem
using different random seeds. Detailed prompts are available in Appendix A.3.
**Training Details. In each round, we collect step-by-step verified trajectories to fine-tune the policy**
LLM and train the PPM. To reduce noise in synthetic math problems (e.g., incorrect ground-truth
answers labeled by GPT-4), we remove synthetic problems with trajectories achieving less than 50%
accuracy. Based on our extensive experiments, the policy LLM is fine-tuned from the initial base
model in each round, rather than training incrementally on the model from the previous round. All
policy SLMs are trained for 2 epochs with a sequence length of 4096 tokens and a batch size of 128.
We use AdamW optimizer with a linear learning rate scheduler, setting the initial learning rate to 7e-6
for Qwen models, and a cosine scheduler with an initial learning rate of 5e-6 for Phi3-mini-Instruct.
The PPM is trained for 1 epoch with a batch size of 512 and an initial learning rate of 7e-6.
**Training the ORM and PQM. The Outcome Reward Model (ORM) and the Q-value-based Process**
Reward Model (PQM) share the same model architecture and training parameters with our PPM. To
train the ORM, we collect trajectories from math problems containing both correct and incorrect
solutions. Specifically, the two trajectories with the highest average Q-values are selected as positive
examples, while the two with the lowest are chosen as negative examples. Following Qwen2.5-Math [Yang et al., 2024], we adopt the pairwise ranking loss [Ouyang et al., 2022] to optimize the ORM. To train the PQM, we follow Chen et al. [2024] and use step-level Q-values as reward labels. Let **x** = x ⊕ s_1 ⊕ s_2 ⊕ ... ⊕ s_d be the trajectory, with annotated Q-values Q = (Q(s_1), Q(s_2), ..., Q(s_d)) and predicted Q-values Q′ = (Q′(s_1), Q′(s_2), ..., Q′(s_d)) for each step. To stabilize PQM training, we treat each trajectory as a single training sample and predict Q-values for all steps simultaneously, rather than splitting it into individual per-step samples. Specifically, to predict the Q-value Q′(s_i) for step s_i, the PQM takes the trajectory from the question up to step s_i (i.e., x ⊕ s_1 ⊕ s_2 ⊕ ... ⊕ s_i) as input and outputs a value between -1 and 1. We use a mean squared error (MSE) loss for PQM training:

$$\mathcal{L}_{\mathrm{prm}}(x) = \lVert Q - Q' \rVert^2 \quad (6)$$
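A minimal PyTorch sketch of the MSE objective in Eq. 6 follows; the tensor shapes are assumptions (each trajectory contributes one vector of per-step Q-values).

```python
import torch

def pqm_loss(predicted_q: torch.Tensor, labeled_q: torch.Tensor) -> torch.Tensor:
    """Eq. 6: squared L2 distance between predicted and annotated per-step Q-values
    of a trajectory; both tensors have shape (num_steps,)."""
    return torch.sum((predicted_q - labeled_q) ** 2)
```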
**Self-evolution Inference Costs. In the initial bootstrap round, we use DeepSeek-Coder-v2-Instruct**
(236B) as the policy model, using 10 nodes of 8×80GB H100 GPUs with 8 MCTS rollouts. This
required approximately two weeks to finish the data generation. For rounds 2–4, using our fine-tuned
7B SLM as the policy model, data generation was performed on 15 nodes of 4×40GB A100 GPUs,
with each round completed in three days. In the final round, to include more challenging problems,
we increased the number of MCTS rollouts to 64, extending the data generation time to one week.
Table 9: Inference costs of rStar-Math. We show the average number of generated tokens required to
generate a trajectory for a given question.
MATH AIME 2024 AMC 2023 Olympiad Bench College Math GSM8K GaokaoEn 2023
5453 15693 14544 7889 4503 3299 6375
**Inference Setting. In our evaluation, we run multiple MCTS to generate candidate solution trajecto-**
ries. For each problem, we generate 32 candidate nodes at each step and use the PPM to score each
node. Since the PPM effectively provides step-level quality evaluations, we limit MCTS to just 4
rollouts per step to update the Q-values. After completing MCTS, the trajectory with the highest
PPM score is selected as the final answer. Table 9 presents the average number of tokens generated to
produce a trajectory in MCTS.
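At the end of search, answer selection reduces to picking the candidate trajectory the PPM scores highest, as in the following sketch (the `ppm_score` field name is illustrative):

```python
def select_final_trajectory(candidate_trajectories: list[dict]) -> dict:
    """After MCTS completes, return the trajectory with the highest PPM score."""
    return max(candidate_trajectories, key=lambda t: t["ppm_score"])
```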
**Pass@N. Table 10 compares the math reasoning performance of our policy models with the instruct**
versions developed by the original model team. Our policy models do not consistently outperform the
-----
Table 10: Pass@1 (greedy) accuracy of our fine-tuned policy models for Phi3-mini, Qwen2.5-Math-1.5B, Qwen2-Math-7B and Qwen2.5-Math-7B.
Model MATH AIME 2024 AMC 2023 Olympiad Bench College Math GSM8K GaokaoEn 2023
_General Base Model: Phi3-mini-Instruct (3.8B)_
Phi3-mini-Instruct 41.4 3.33 7.5 12.3 33.1 85.7 37.1
**Our policy model** **68.0** **10.0** **37.5** **36.6** **48.7** **87.9** **53.2**
_Math-Specialized Base Model: Qwen2.5-Math-1.5B_
Qwen2.5-Math-1.5B 51.2 0.0 22.5 16.7 38.4 74.6 46.5
Qwen2.5-Math-1.5B-Instruct 60.0 10.0 **60.0** 38.1 47.7 **84.8** 65.5
**Our policy model** **74.8** **13.3** 47.5 **42.5** **50.1** 83.1 **58.7**
_Math-Specialized Base Model: Qwen2-Math-7B_
Qwen2-Math-7B 53.4 3.3 25.0 17.3 39.4 80.4 47.3
Qwen2-Math-7B-Instruct 73.2 13.3 **62.5** 38.2 45.9 **89.9** 62.1
**Our policy model** **73.8** **16.7** 45.0 **43.9** **52.0** 88.3 **65.2**
_Math-Specialized Base Model: Qwen2.5-Math-7B_
Qwen2.5-Math-7B 58.8 0.0 22.5 21.8 41.6 91.6 51.7
Qwen2.5-Math-7B-Instruct **82.6** 6.0 **62.5** 41.6 46.8 **95.2** **66.8**
**Our policy model** 78.4 **26.7** 47.5 **47.1** **52.5** 89.7 65.7
Figure 6: Pass@N accuracy with random sampling from different policy models. Compared to the
official Qwen instruct version, our policy model exhibits a stronger ability to sample correct solutions.
Figure 7: Pass@N accuracy with PPM-augmented MCTS. Under the same PPM guidance, the four
policy models of varying sizes demonstrate convergent capabilities in sampling correct solutions.
-----
instruct versions. For example, on the Qwen2.5-Math-7B base model, Qwen2.5-Math-7B-Instruct
achieves 4.2% higher accuracy on the MATH benchmark. However, in the System 2 deep thinking paradigm, pass@1 accuracy alone does not fully reflect the reasoning capabilities of the policy model. To provide a more comprehensive evaluation, Fig. 6 and Fig. 7 present the pass@N accuracy. In this metric, the policy model generates N solutions under two settings: random sampling (Fig. 6) and PPM-augmented MCTS deep thinking (Fig. 7). A problem is considered solved if at least one solution is correct. As shown in Figure 6, our policy model achieves comparable Pass@64 accuracy with
Qwen2.5-Math-7B-Instruct on MATH and significantly outperforms it on other benchmarks. Figure 7
further compares the pass@N accuracy of our four policy models (different sizes) after MCTS deep
thinking under the same PPM guidance. We can observe that after generating 64 trajectories, the
pass@N accuracy of different policy models becomes comparable.
-----
**A.2** **Examples of Reasoning Trajectories**
**A.3** **Prompt Examples**