Dataset Viewer (auto-converted to Parquet)
Columns: context (string, lengths 103–1.92k), A (string, lengths 100–2.77k), B (string, lengths 108–2.16k), C (string, lengths 103–2.23k), D (string, lengths 105–2.23k), label (4 classes).
Our exercise directly extends the exercise of [Athey et al., 2020a]. Whereas those authors study the long term effect of a binary action (“small” versus “large” classes), we study the long term effect of a continuous action (various class sizes). Figure 1 shows that the randomized action $D$ takes values in $\mathcal{D}=[12,28]$. It also shows that the supports of the “small” and “large” classes are overlapping; some “small” classes were larger than some “large” classes. By modeling $D$ as continuous with a Gaussian kernel $k_{\mathcal{D}}$, we smoothly share information across different class sizes.
The oracle, visualized in red, is estimated from long term experimental data, i.e. joint observations of the randomized action $D$ and long term reward $Y$ in Project STAR. Our goal is to recover similar estimates without access to long term experimental data. Figure 4 shows that the oracle curve is typically decreasing: larger class sizes appear to cause lower test scores, across horizons. In particular, the oracle estimates are nonlinearly decreasing, from positive counterfactual test scores (above average) to negative counterfactual test scores (below average). As the long term horizon increases, i.e. as the definition of $Y$ corresponds to later grades, the oracle curves flatten: the effect of kindergarten class size on test scores appears to attenuate over time.
As in previous work, we consider the third grade test score to be the short term reward $S$, and a subsequent test score to be the long term reward $Y$. By choosing different grades as different long term rewards, we evaluate how our methods perform over different horizons. Our variable definitions are identical to [Athey et al., 2020a], except that we use a continuous action.
To demonstrate that our proposed kernel methods are practical for empirical research, we evaluate their ability to recover long term dose response curves. Using short term experimental data and long term observational data, our methods measure similar long term effects as an oracle method that has access to long term experimental data. Our methods outperform some benchmarks from previous work that use only long term observational data.
The difficulty in estimating long term dose response curves is the complex nonlinearity and heterogeneity in the link between the short term response curve $\mathbb{E}\{S^{(d)}\}$ and the long term response curve $\mathbb{E}\{Y^{(d)}\}$. For example, we would like to allow for the link between counterfactual test scores and counterfactual earnings to be nonlinear, and for students with different baseline characteristics to have different links. The identification of long term dose response curves is well known [Rosenman et al., 2018, Athey et al., 2020b, Athey et al., 2020a, Rosenman et al., 2020, Kallus and Mao, 2020]; however, it appears that no nonlinear, nonparametric estimators have been previously proposed for the response curve, highlighting the difficulty of extrapolating the long term effects of continuous actions.
B
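The row above describes smoothing a long term dose response over a continuous class-size action with a Gaussian kernel. As a rough illustration of that ingredient only (not the authors' full long term estimator), a kernel ridge smoother over $\mathcal{D}=[12,28]$ might look as follows; the bandwidth `sigma`, the ridge penalty `lam`, and the synthetic data are assumptions of this sketch.

```python
import numpy as np

def gaussian_kernel(d1, d2, sigma=2.0):
    """Gaussian kernel k_D(d, d') over class sizes; sigma is an assumed bandwidth."""
    return np.exp(-(d1[:, None] - d2[None, :]) ** 2 / (2 * sigma ** 2))

def kernel_ridge_dose_response(d_obs, y_obs, d_grid, sigma=2.0, lam=1e-2):
    """Smooth estimate of E[Y | D = d] on a grid via kernel ridge regression."""
    n = len(d_obs)
    K = gaussian_kernel(d_obs, d_obs, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_obs)
    return gaussian_kernel(d_grid, d_obs, sigma) @ alpha

# Toy usage: class sizes in [12, 28] and a synthetic, decreasing test-score response.
rng = np.random.default_rng(0)
d = rng.uniform(12, 28, size=200)
y = -0.05 * (d - 20) + rng.normal(scale=0.3, size=200)
grid = np.linspace(12, 28, 50)
curve = kernel_ridge_dose_response(d, y, grid)
```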
If $n_A>n_B$, then the wage ratio is more in favor of group $A$ in any period $t\geq T$ when compared to the wage ratio in any period $t'<0$. If $n_B>n_A$, then the wage ratio is more in favor of group $B$ in any period $t\geq T$ when compared to the wage ratio in any period $t'<0$. If $n_A=n_B$ then the wage ratio in period $t$ converges to the wage ratio in any period $t'<0$ as $t\to\infty$.
Third, the long-run wage ratio moves in favor of the group toward which more firms segregate in equilibrium. For any $t$, we define the wage ratio (or log wage gap) at time $t$ to be the average wage of $A$-group workers over the average wage of $B$-group workers who are employed at the end of time $t$. In the pre-EPSW periods, in equilibrium, every worker is hired by the first firm she bargains with, and therefore both groups’ wages follow (4) with $m=n$ and $W^v=0$ for all $v\in[0,1]$, which implies:
We say that the wage ratio is more in favor of group $A$ ($B$) in period $t$ compared to period $t'$ if the wage ratio is higher (lower) in period $t$ than in period $t'$. Part 2 of Proposition 4 shows that the key determinant of the wage ratio in the long run (that is, the comparison of the wage ratio between any pre-EPSW period and any sufficiently large period since the introduction of EPSW) is the relationship between $n_A$ and $n_B$: if $n_A>n_B$ then the wage ratio is more in favor of group $A$ in the long run compared to pre-EPSW, and vice versa, and if $n_A=n_B$, then EPSW has no impact on the long-run wage ratio.
As demonstrated by Part 2 of Proposition 3, EPSW moves the wage gap in favor of the majority group (and does so strictly except for one core outcome among a continuum). Moreover, Part 3 shows that larger wage gaps are associated with higher firm profits under EPSW. An implication of Part 3 is that firms prefer core outcomes that result in larger wage gaps, suggesting that a core outcome with a larger wage gap may be more likely to occur if firms can coordinate to select an outcome from the core. Part 4
Part 2 of Proposition 4 reveals that the number of firms segregating for each group is the key determinant of EPSW’s effect on the long-run wage ratio.
D
Bargaining solutions are established ways to select among candidate agreements on how to share surplus. Different bargaining solutions, proposed over the years by mathematicians and economists, aim to satisfy certain desiderata like fairness, Pareto optimality, and utility-maximization. Typically, solving for bargaining solutions consists in defining some measure of joint utility between players (e.g. take the sum, product, or minimum of the players’ utilities). The feasible, Pareto-optimal solution that maximizes this joint utility is known as a bargaining solution.
Bargaining solutions are established ways to select among candidate agreements on how to share surplus. Different bargaining solutions, proposed over the years by mathematicians and economists, aim to satisfy certain desiderata like fairness, Pareto optimality, and utility-maximization. Typically, solving for bargaining solutions consists in defining some measure of joint utility between players (e.g. take the sum, product, or minimum of the players’ utilities). The feasible, Pareto-optimal solution that maximizes this joint utility is known as a bargaining solution.
If neither player dominates in a bargain, how do they decide how to share surplus profit? Solutions to bargaining problems identify an agreement that maximizes some joint utility function or satisfies certain desirable properties. In this section, we define the various bargaining solutions that the two players could plausibly arrive at within the set of Pareto-optimal solutions. These solutions mostly use a joint utility function to guide the bargaining agreement, as depicted in Figure 4. A visual representation of the bargaining solutions is provided in Figure 5. Definitions and closed-form solutions are provided below, and the corresponding proofs can be found in Appendix 8.
While the procedure described above uses a numerical approach, in the remainder of this section, we use the specific case of quadratic cost functions to demonstrate that closed-form solutions are indeed attainable for player strategies and bargaining solutions.
Bargaining solutions are normative: they provide guidelines for how surplus payoffs should be distributed. Solutions are inspired by moral theories like utilitarianism (which aims to maximize the sum of utilities) and egalitarianism (which aims to maximize the utility of the worst-off agent). We demonstrate the use of bargaining solutions in the subsequent sections.
D
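To make the "maximize a joint utility over the feasible, Pareto-optimal set" recipe described above concrete, here is a tiny numerical sketch on an assumed toy frontier; the frontier $u_2 = 1 - u_1^2$ and the zero disagreement point are illustrative choices, not taken from the excerpts.

```python
import numpy as np

# Toy Pareto frontier u2 = 1 - u1^2 for u1 in [0, 1], disagreement point (0, 0).
u1 = np.linspace(0.0, 1.0, 100001)
u2 = 1.0 - u1 ** 2

solutions = {
    "utilitarian (max sum)":       np.argmax(u1 + u2),
    "Nash (max product)":          np.argmax(u1 * u2),
    "egalitarian (max worst-off)": np.argmax(np.minimum(u1, u2)),
}

for name, i in solutions.items():
    print(f"{name}: u1 = {u1[i]:.3f}, u2 = {u2[i]:.3f}")
```

On this asymmetric frontier the three criteria pick different points (roughly (0.50, 0.75), (0.58, 0.67), and (0.62, 0.62), respectively), which is exactly the sense in which the choice of joint utility function matters.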
The commissioning of such projects by decision makers indicates their readiness for design or reform, often driven by urgent needs or crises, which makes them particularly receptive to proposed changes.
posing a significant challenge to my policy ambitions. Moreover, I believed these experiences were less informative for situations where decision-makers had yet to acknowledge the
and expert input was actively sought—often in response to an urgent crisis. As an aspiring design economist, I realized that such opportunities might not come my way for a long time, if ever,
However, to my surprise, I soon realized that the choice rules induced by the Army’s dual-criteria priority structure did not satisfy the substitutability condition.
Recognizing that similar opportunities were unlikely to come my way soon, I realized that these experiences—shaped by more receptive decision makers—might not be as informative for design economists like myself,
D
Note that Problem (4) is separable in $P_r$ and $\{P_c, W_c\}$:
We discuss the results in terms of input factor quantities (Fig. 5, Fig. 10 and Fig. 15), product quantities produced (Fig. 6, Fig. 11 and Fig. 16), quality perceived (Fig. 7, Fig. 12 and Fig. 17), commodity prices (Fig. 8, Fig. 13 and Fig. 18) and profit (Fig. 9, Fig. 14 and Fig. 19).
The effect of an $\alpha$ increase is, as expected from (5), an increase in the quality of the sensing part $\theta$ of the ISAC service (Fig. 17). This drives an increase in the commodity/input factor $P_r$ used for the service (Fig. 15 and Fig. 16) and an increase in the unit price $p_1$ (Fig. 18). This, in turn, drives operator profit upwards (Fig. 19). Note that the increase in $P_r$ and $\theta$ takes place only for the range of low values of $\alpha$, while the increase in $p_2$ and profit sustains itself for all values of $\alpha$.
The main effect of a $w_p$ increase is, as expected from economic theory, a reduction in the demand for both input factors $P_r$ and $P_c$, the latter being more pronounced (Fig. 5); this observation may provide an effective way for the system designer to limit the power consumption, as discussed below in Section 5. The reduction in $P_c$ causes a drop in the commodity $R_c$ supplied (Fig. 6), and ultimately in the quality of the communication side of the ISAC service $\eta$ (Fig. 7). The reduction in the supply of the commodity $R_c$ is accompanied by a rise in the price $p_2$ (Fig. 8), although the operator profit decreases (Fig. 9).
The main effect of a $w_w$ increase is, again, a reduction in the demand for the input factor $W_c$ (Fig. 10); again, this observation may provide an effective way for the system designer to limit the bandwidth consumption, as discussed below in Section 5. The reduction in $W_c$ causes a drop in the commodity $R_c$ supplied (Fig. 11), and ultimately in the quality of the communication side of the ISAC service $\eta$ (Fig. 12). The reduction in the supply of the commodity $R_c$ is accompanied by a rise in the price $p_2$ (Fig. 13), although again the operator profit decreases (Fig. 14).
A
Here, $\delta^{D,\text{s}}_n$ and $\rho^{D,\text{s}}_n$ are regression coefficients for the price response, and $\delta^{P,\text{s}}_n$ and $\rho^{P,\text{s}}_n$ are for the peak price response. Cryptocurrency mining facilities might use different predictors, $\gamma_n$, which are combinations of historical ERCOT system-wide electricity demand based on their risk appetite during 4CP hours (4PM–6PM), also identified through $\mathbb{I}^p(t)$. Finally, the ARMA process models the variance unexplained by the regression model. Here, $N$ is the inverse transformation used to revert the transformed cryptocurrency mining firms’ electricity consumption data. Note that all other variables ($T_t$, $\pi^D_t$, $\pi^R_t$, $L_t$) used in this model are also transformed and need to be considered appropriately.
In this article, rather than building the model in a single step, we perform multiple linear regressions to systematically extract the influence of the regressors, performing each regression on the residuals from the previous step.
The correlation analysis indicates that factors such as electricity market prices, average temperatures across Texas, and ERCOT-wide electricity demand influence the electricity consumption of cryptocurrency mining firms in a complex manner. We observe that these factors can affect each other, necessitating a focus on specific time slots to capture the underlying physics-based relationships. The objective of this section is to perform multivariable linear regression to develop mathematical models describing the electricity consumption of aggregated cryptocurrency mining facilities. We hypothesize the models to be as follows:
The empirical equation representing cryptocurrency miners’ demand response during the summer months is given in (9). The residuals suffer from issues similar to those discussed for the earlier model (RMSE and MAPE of 83.14 and 90.96% for the correlation-only model versus 60.86 and 64.24% once autocorrelation is incorporated); however, the efficacy of the model is further evidenced by the coefficient of determination increasing from 0.93 to 0.99, implying that the heuristic-based correlation model itself can explain a significant portion of cryptocurrency miners’ behavior, and that the model is strengthened by the inclusion of the ARIMA model.
To compute the overall accuracy of the model, we need to compare how much of the variability is explained using correlation analysis alone versus the additional use of an autoregressive model. The mean squared error (MSE) and mean absolute percentage error (MAPE) of the correlation analysis-only model are 25.10 and 3.27%, respectively. These values change to 32.06 and 3.55% when using the combined correlation and autoregressive model. However, the true value of the combined model is reflected in the coefficient of determination, which, considering errors only up to the 75% inter-quantile range, improves from 0.32 to 0.77. An example of a time-series plot comparing true and predicted demand for an arbitrarily selected 7 consecutive days for the non-summer months is provided in Fig. 10(a) (please consider the standard errors described in the paper for the error bound). Our model could not explain a significant amount of variance in the original dataset, which, based on this figure, could be due to the predicted magnitude of peaks.
A
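A minimal sketch of the staged-fit idea described in the row above (regress consumption on exogenous drivers, then model the residuals with an ARIMA process); the column names, the driver set, and the (2, 0, 1) order are placeholders, not the paper's fitted specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

DRIVERS = ["temperature", "da_price", "rt_price", "ercot_load"]  # assumed column names

def staged_fit(df: pd.DataFrame):
    """Stage 1: OLS of (transformed) mining load on drivers. Stage 2: ARIMA on residuals."""
    X = sm.add_constant(df[DRIVERS])
    stage1 = sm.OLS(df["mining_load"], X).fit()
    stage2 = ARIMA(stage1.resid, order=(2, 0, 1)).fit()
    return stage1, stage2

def staged_forecast(stage1, stage2, df_new: pd.DataFrame) -> np.ndarray:
    """Combine the regression prediction with the ARIMA residual forecast."""
    X_new = sm.add_constant(df_new[DRIVERS], has_constant="add")
    reg_part = np.asarray(stage1.predict(X_new))
    res_part = np.asarray(stage2.forecast(steps=len(df_new)))
    return reg_part + res_part
```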
Interactive preference learning from human binary choices is widely used in recommender systems [32, 56, 9, 21], assistive robots [54, 65], and fine-tuning large language models [59, 43, 46, 47, 5]. This process is often framed as a preference-based bandit problem [7, 31], where the system repeatedly presents queries as pairs of options, the human selects a preferred option, and the system infers preferences from these choices. Binary choices are popular because they are easy to implement and impose low cognitive load on users [74, 72, 37]. However, while binary choices reveal preferences, they provide little information about preference strength [77]. To address this, researchers have incorporated additional explicit human feedback, such as ratings [58, 50], labels [74], and slider bars [72, 5], but these approaches often complicate interfaces and increase cognitive demands [36, 37].
In this paper, we propose leveraging implicit human feedback, specifically response times, to provide additional insights into preference strength. Unlike explicit feedback, response time is unobtrusive and effortless to measure [17], offering valuable information that complements binary choices [16, 2]. For instance, consider an online retailer that repeatedly presents users with a binary query, whether to purchase or skip a recommended product [35]. Since most users skip products most of the time [33], the probability of skipping becomes nearly 1 for most items. This lack of variation in choices makes it difficult to assess how much a user likes or dislikes any specific product, limiting the system’s ability to accurately infer their preferences. Response time can help overcome this limitation. Psychological research shows an inverse relationship between response time and preference strength [17]: users who strongly prefer to skip a product tend to do so quickly, while longer response times can indicate weaker preferences. Thus, even when choices appear similar, response time can uncover subtle differences in preference strength, helping to accelerate preference learning.
This work is the first to leverage human response times to improve fixed-budget best-arm identification in preference-based linear bandits. We proposed a utility estimator that combines choices and response times. Both theoretical and empirical analyses show that response times provide complementary information about preference strength, particularly for queries with strong preferences, enhancing estimation performance. When integrated into a bandit algorithm, incorporating response times consistently improved results across three real-world datasets.
To address these challenges, we propose a computationally efficient method for estimating linear human utility functions from both choices and response times, grounded in the difference-based EZ diffusion model [67, 8]. Our method leverages response times to transform binary choices into richer continuous signals, framing utility estimation as a linear regression problem that aggregates data across multiple pairs of options. We compare our estimator to traditional logistic regression methods that rely solely on choices [3, 31]. For queries with strong preferences, our theoretical and empirical analyses show that response times complement choices by providing additional information about preference strength. This significantly improves utility estimation compared to using choices alone. For queries with weak preferences, response times add little value but do not degrade performance. In summary, response times complement choices, particularly for queries with strong preferences.
Our linear-regression-based estimator integrates seamlessly into algorithms for preference-based bandits with linear human utility functions [3, 31], enabling interactive learning systems to leverage response times for faster learning. We specifically integrated our estimator into the Generalized Successive Elimination algorithm [3] for fixed-budget best-arm identification [29, 34]. Simulations using three real-world datasets [57, 16, 39] consistently show that incorporating response times significantly reduces identification errors, compared to traditional methods that rely solely on choices. To the best of our knowledge, this is the first work to integrate response times into bandits (and RL).
A
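The following sketch illustrates one way a choice plus its response time can be turned into a continuous regression target, in the spirit of the diffusion-model idea above: under a symmetric drift-diffusion view, Wald's identity gives barrier times E[signed choice] equal to drift times E[response time], so the ratio of averaged signed choices to averaged response times estimates the drift (the utility difference) up to the barrier scale. This is a hedged simplification under those assumptions, not the paper's exact estimator; the ridge term and the absorbed barrier are choices made only for this sketch.

```python
import numpy as np

def drift_targets(signed_choices, response_times):
    """Per-query drift estimate: mean signed choice (+1/-1) divided by mean response time.
    The diffusion barrier is absorbed into the scale of the utility vector (an assumption)."""
    return np.array([np.mean(c) / np.mean(t) for c, t in zip(signed_choices, response_times)])

def estimate_utility(feature_diffs, signed_choices, response_times, ridge=1e-6):
    """Ridge least squares of per-query drift targets on feature differences x - x'."""
    X = np.asarray(feature_diffs, dtype=float)          # shape (num_queries, dim)
    y = drift_targets(signed_choices, response_times)   # shape (num_queries,)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

# Toy usage: 3 queries in 2 dimensions, each answered twice.
theta_hat = estimate_utility(
    feature_diffs=[[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]],
    signed_choices=[[+1, +1], [+1, -1], [-1, +1]],
    response_times=[[0.8, 1.1], [2.0, 2.4], [1.9, 2.2]],
)
```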
Because our experiment features a relatively unattractive lottery in a binary choice between that lottery and a safe option, we can identify a third anchor that appears to influence CAs: (iii) some options may simply be more “objectively correct” independent of CAs’ own preferences. This implies that the projection embedded in paternalistic action is somewhat asymmetric.
This work explored the role of knowledge in paternalism. We found across two experiments that more knowledge on the side of Choosers causes a vast increase in the autonomy they are granted by impartial Choice Architects (CAs). Information helps Choosers make the right decision and CAs overwhelmingly respect that. On the other hand, a lack of knowledge is taken by CAs as a right to intervene and prevent incorrect inference. Most CAs do not wish to override the Chooser’s choice. They prefer to provide information, even when they would be able to obscure their intervention through the non-provision of knowledge. However, there is a minority of CAs that strategically abstains from providing information.
CAs were enabled to communicate to Chooser 4 the value of $p$ ($0.2$) in the choice between Options 1 and 2. [Footnote 20: In our design, CAs had to deliberately choose whether to reveal $p$ or not, as in Bartling et al. (2023). We, too, made sure that CAs’ involvement in providing information is not revealed to the Chooser.] In addition, some CAs—those in the treatment Plus—were able to intervene as well. [Footnote 21: In this Section, we use the word “intervene” only to refer to an intervention in the choice between Options 1 and 2, although some authors view information provision as an intervention (e.g., Bartling et al., 2023; Mabsout, 2022).] Sections 2.1.3, 2.4 and 3.5.1 motivated this design choice. Simply put, real-life policymakers are not restricted from using multiple policy tools simultaneously (e.g., Grossmann, 2024) and they can strategically use information provision to achieve their ends.
A study related to our own is by Bartling et al. (2023). They conduct a study of paternalism in the United States, and vary the feature of the choice ecology through which a Chooser makes a mistake. They show that few CAs restrict freedom of choice, but that a substantial share of CAs provides information to Choosers. However, their design only allowed information provision or intervention as substitutes, not the joint use of both tools.
Chooser 4’s degree of knowledge is determined by the CA (Section 2.4). For this Chooser, CAs were randomly allocated to a baseline or the treatment Plus. Our treatment relates to the institutional setup of information provision for Chooser 4: In the baseline, CAs were only allowed to provide information to Choosers, as in Bartling et al. (2023). In Plus, they were enabled to intervene in the resulting choice in addition to providing information. Simply put, in both treatments the CA can choose between cases (i) and (iii) of Section 3.1; in Plus they may also add an intervention in the resulting choice. On the other hand, in the baseline, it is a given that the Chooser’s own choice is implemented after Chooser 4 receives the information decided upon by the CA. Both information provision and—in Plus—the intervention for the Chooser took place on the same screen.
C
Recall that $r_i$ denotes the stochastic model reward (measured in accuracy), which takes on different values for different $n$; $m_i$ is the amount of data contribution; and $c_i$ denotes the private per-unit data cost. $v(\cdot)$ reflects the profit mechanism that maps a model accuracy level to a pecuniary amount. The expectation operator $\mathbb{E}_{n_i\geq 1}$ codifies the information advantage possessed by the type-$i$ participant—when they decide to participate, they know in advance that there is at least one party making the type-$i$ commitment. The aim of each participant is to maximize their expected net profit, represented by the utility function in (1).
To ensure the contract is well designed, it must pass the first test that it gives parties of different types enough incentive to join the CML scheme. Formally, this requires that a party who chooses the contract option designed for their type cannot be made worse off than if they did not participate in the CML scheme. This is known as the individual rationality (IR) condition. To formalize the idea, we should specify the reservation utility (a.k.a. opportunity cost) for each type of party. Here we define a party’s reservation utility to be the utility level they achieve by training a model on their own, which amounts to solving the following optimization problem:
Prior to our work, there has been a line of research that resorts to contract theory to address the incentive issue in collaborative machine learning (Kang et al. 2019; Ding, Fang, and Huang 2020; Karimireddy, Guo, and Jordan 2022; Liu et al. 2023), but most of it focuses on using money as the reward for the collaboration. Karimireddy, Guo, and Jordan (2022) attend to the administration of models with different accuracy levels as rewards, while their primary focus is on the case where the scheme coordinator can directly observe each party’s data collection costs. However, in reality, the cost of contribution is typically private information known only to the contributing party. For instance, consider a CML scheme where private computing firms pool their GPUs for the training of a language model for code generation. Each firm could face a different vendor price and incur dissimilar maintenance costs for the chips. As another example, consider a CML scheme where investment firms join their privately curated data for the training of an investment model. To gather the data, each firm needs to recruit analysts, the overheads of which are usually determined by conditions of the local labor market and the firm’s own incentive policies. The differences in operating environments cause the parties of a CML scheme to have a wide range of per-unit contribution costs. While the scheme coordinator can be an expert in the domain field, thereby possessing some general information about the process, it remains challenging for them to gauge the exact costs borne by the parties. Even if the parties willingly inform the coordinator of their costs, the coordinator cannot verify the truthfulness of these reports without incurring significant auditing expenses. Worse still, a rent-seeking party may cheat by misreporting their cost if it leads to higher profits being gained from the scheme. This information asymmetry results in what is known as a principal-agent problem in the economics literature (see Mas-Colell, Whinston, and Green 1995; Laffont and Martimort 2002; Bolton and Dewatripont 2004 for a comprehensive treatment of the subject).
In the presence of private information, optimal contract design with models as the rewards poses unique challenges that distinguish it from its economic counterparts. For one, unlike money, models are a non-rivalrous and non-exclusive good, and can be replicated and offered to the participants at a nominal cost if not free of charge. Therefore, the scheme coordinator would find it tempting to offer less capable parties a good-performing model as long as it does not cause the more capable parties to cheat. For another, the administrable model rewards are constrained by the accuracy level of the model trained using all parties’ data or computational resources. Due to incomplete information, the coordinator cannot observe the exact numbers of parties with different contribution costs in the CML scheme, and consequently cannot determine the exact accuracy level of the collectively trained model before the training completes. This makes the rewards of the contract stochastic ex-ante. The optimal contracting problem for CML needs to accommodate these challenges, whilst heeding the classical requirements of individual rationality and incentive compatibility. To this end, our paper makes the following contributions:
The detailed proof is deferred to Appendix A, and we provide here a brief intuition on offering the best model to all. The key is that the full observability of a party’s cost eliminates the possibility of cheating. Even if a party wishes to choose the option designed for another type that requires less data contribution, they can no longer do so, as the coordinator can embed the type into the option and easily verify a party’s eligibility at the time of contract signing. Since models are freely replicable, granting the highest rewards to parties incentivizes them to make the highest possible level of contribution while satisfying the IR condition.
A
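Putting the pieces of the two excerpts above together in one display (a sketch with assumed notation: $\bar{u}_i$ for type $i$'s reservation utility and $a_i(m)$ for the accuracy a type-$i$ party can reach by training alone on $m$ units of data), the IR requirement reads

\[
\underbrace{\mathbb{E}_{n_i \ge 1}\!\left[\, v(r_i) \,\right] - c_i\, m_i}_{\text{expected net profit from joining, as in (1)}}
\;\;\ge\;\;
\bar{u}_i \;=\; \max_{m \ge 0}\; \Big\{ v\!\left(a_i(m)\right) - c_i\, m \Big\},
\]

i.e. choosing the option designed for one's own type must be at least as good as the best a party can do by training a model alone.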
$\mathbb{E}\left[s_j Y_{\textbf{i}(j)}(X_{\textbf{i}(j)}(z,\boldsymbol{z}_{\textbf{i}(j)-j}))\mid r_j=r\right]$
(a) Expected reweighted potential treatment and outcome, $\mathbb{E}\left[s_j X_{\textbf{i}(j)}(z,\boldsymbol{z}_{-j})\mid r_j=r\right]$
IV regression is exactly unbiased, as $\mathbb{E}\left[Y_i\mid r_j\right]$ is
$\mathbb{E}\left[s_j Y_{\textbf{i}(j)}(X_{\textbf{i}(j)}(z,\boldsymbol{z}_{\textbf{i}(j)-j}))\mid r_j=r\right]$
(b) Expected importance weights $\mathbb{E}\left[s_j\mid r_j=r\right]$ and the
D
The neutrality and strategyproofness axioms exclude the possibility of averaging rules, in which aggregate endpoints are averages of individual endpoints.
Translation equivariance is much weaker than neutrality; for example, it allows the rule to make use of the cardinality properties of the real line.
Neutrality is a reasonable justification in the contexts in which averaging does not make much practical sense—for example, when taking a numerical average would not be particularly meaningful.
When preferences can be observed, there is the general difficulty of aggregating preferences in a meaningful way.
Community standards do not necessarily exist to serve a consequentialist goal. [Footnote 25: While the reasonable person standard may exist to reduce the cost of accidents, this is not a universally accepted goal. In the context of obscenity, offense to community standards is often the justification (and not merely the test) for criminal prosecution.]
B
However, that study required the budget sets to be statistically independent of preferences and that exogenous productivity growth was uniform across years and individuals.
This paper introduces a novel approach for estimating the average of heterogeneous elasticities of taxable income in the presence of endogenous, nonlinear budget constraints and individual-specific productivity growth using panel data.
The results we give here address these issues by using panel data to allow for endogenous budget sets and heterogeneous productivity growth.
We allow for individual heterogeneity while also controlling for budget set endogeneity by using panel data.
The dependent variable is the logarithm of taxable income, represented by the household’s labor income. We derived a parsimonious specification based on four key variables to capture the budget set on the right-hand side: the last-segment slope and virtual income, both in logarithms, and their differences from the first-segment slope and virtual income. To calculate these variables, we constructed the complete budget set for each household using the NBER TAXSIM calculator (Feenberg and Coutts, 1993). A range of income levels was run through TAXSIM to obtain federal and state marginal tax rates, which were used to construct the slopes and kink points of the household budget sets. These tax data account for federal and state income taxes, as well as payroll taxes.
B
Choosers participate in a two-day survey. At some point during these two days—as determined by treatment, see below—Choosers play the Bomb Risk Elicitation Task (BRET, Crosetto & Filippin, 2013) with the highest stakes ever reported in the literature.
We have Choosers participate in a simple risky decision: how many boxes to open (Crosetto & Filippin, 2013). In this experiment, one randomly chosen box out of twenty-five contains a “curveball” that eliminates all earnings. Because each non-curveball box earns the Chooser $20, the experiment has very high stakes.
The Cambridge Dictionary defines a rule as “an accepted principle or instruction that states the way things are or should be done, and tells you what you are allowed or are not allowed to do.” [Footnote 1: Source: https://dictionary.cambridge.org/dictionary/english/rule, accessed January 1, 2025.]
Our BRET works as follows: Choosers are faced with 25 boxes. Each box contains $20 to be collected by the Chooser. They can open whichever and as many boxes as they like, but one randomly selected box contains a “bomb.” [Footnote 4: In our experiment, the word “curveball” was used instead of “bomb,” because the word “bomb” can carry negative associations. The Cambridge Dictionary lists the metaphorical use of the word “curveball” as implying, in American English, “something such as a question or event that is surprising or unexpected, and therefore difficult to deal with” (https://dictionary.cambridge.org/dictionary/english/curveball, accessed January 1, 2025).]
If the one box containing the “bomb” is opened, all earnings are eviscerated, leaving the Chooser with no payment from the BRET. Choosers learn these rules on day 1 of the survey.
C
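A quick back-of-the-envelope calculation (ours, not part of the experimental instructions) makes the stakes concrete: if the curveball is equally likely to be in any of the 25 boxes and a Chooser opens $k$ of them, the expected payoff is

\[
\mathbb{E}[\text{payoff}(k)] \;=\; \$20\,k \cdot \frac{25-k}{25},
\]

which a risk-neutral Chooser maximizes by opening 12 or 13 boxes, for an expected payoff of $\$124.80$; opening all 25 boxes guarantees hitting the curveball and earning nothing.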
Quite different behavior emerges in models where utility arises directly from consuming goods together, and such complementarities are central to our examples of recoverable structure. A leading practical example comes from the use of computers: a consumer’s utility from a computer depends on the hardware, operating system, and applications. Two firms selling distinct components—a hardware device and an operating system, for instance—supply complementary goods, while two firms selling the same component (say, operating systems) supply substitute goods.
For an economic intuition, note that in the Lancaster (1966) type of model, the “direct” relationship between any pair of goods is substitution. With substitution, if some demand is diverted from one good due to an increase in its price, the total effect on all substitute goods is bounded, since, loosely speaking, the demand gained by these other goods must come out of the demand lost by the more expensive one. This bounds the sum of positive entries in $\bm{D}$ corresponding to this effect, which in turn bounds any complementarities in the Slutsky matrix. [Footnote 32: Note that complementarity (where one good’s demand decreases in the price of the other) can arise in the Pellegrino (2021) model. This happens through indirect effects: the substitute of my substitute can be my complement. However, since the “direct” substitution effect is bounded in magnitude, so are the indirect consequences.] In essence, in a hedonic model where the basic force is substitution, overall spillovers remain bounded, and the fact that $\bm{D}$ has no large eigenvalues is the mathematical manifestation of this.
The statistical implications of recoverable structure are then central to actually taking advantage of this potential when the Slutsky matrix is observed imperfectly. The authority’s signal consists of noisy estimates of the entries of this matrix, with noise magnitudes in each entry comparable to the entries themselves. This noise creates large uncertainty in the operation of a given intervention. We show that, nevertheless, in large markets with recoverable structure, such noisy observation of $\bm{D}$ can be used to precisely predict the effects of some well-chosen interventions—specifically, those operating in the space of eigenvectors associated with the largest eigenvalues of $\bm{D}$. The key tool for this is the Davis–Kahan theorem (Davis and
Regibeau, 1988, 1992). Our illustrative Example 2 in Section 5 shows how Slutsky matrices with large eigenvalues arise naturally in such settings. [Footnote 33: The complementarities there happen to be within-category, but that is not important for our point here.] But, as we have seen, it is impossible to produce the same patterns in models of the Lancaster (1966) type, because they cannot generate large eigenvalues; one would need to incorporate terms reflecting that some characteristics provide greater value when enjoyed together. There is a straightforward economic intuition for why such complementarities more readily produce recoverable structure: when one good’s price decreases, all its complements can experience comparable nonvanishing increases in demand. This creates the clusters of nonvanishing entries in $\bm{D}$ that are the hallmark of recoverable structure.
The demand structure is encoded in a matrix $\bm{D}$ of demand derivatives, which in our setting is equal to the Slutsky matrix. A given cell $D_{ij}$ in this matrix is the derivative of product $i$’s demand with respect to product $j$’s price. Thus, the matrix specifies the complementarity and substitutability relationships across products. Mathematically, the recoverable structure property requires that $\bm{D}$ can be written as a rank-one matrix with large norm plus a matrix orthogonal to this. This rank-one piece can be thought of as a large principal component: a part of the demand system described by a single vector that accounts for a large amount of demand behavior. In terms of the economic intuition, we will show that recoverable structure entails substantial large-scale complementarities. This manifests as the ability of small subsidies to have large spillover effects that raise the consumption of many goods by significant amounts.
C
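A small numerical sketch of the recoverability claim in the excerpts above: when a symmetric matrix standing in for $\bm{D}$ has a rank-one component of large norm, its leading eigenvector is recovered accurately even from an observation with entrywise noise comparable to typical entries. The dimension, scales, and Gaussian noise below are arbitrary assumptions, and the toy matrix is not a genuine Slutsky matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                                            # number of goods (assumed)
v = rng.normal(size=n)
v /= np.linalg.norm(v)                             # the "large principal component" direction

idio = rng.normal(size=(n, n))
D = 0.8 * n * np.outer(v, v) + (idio + idio.T) / 2    # rank-one part + idiosyncratic part

obs_noise = rng.normal(size=(n, n))
D_hat = D + (obs_noise + obs_noise.T) / 2             # noisy observation of D

eigvals, eigvecs = np.linalg.eigh(D_hat)
v_hat = eigvecs[:, np.argmax(np.abs(eigvals))]        # leading eigenvector of the observation

print("alignment |<v, v_hat>| =", abs(v @ v_hat))     # close to 1: the Davis-Kahan logic at work
```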
The exclusion restriction in this case asserts that assignment to the invasive arm affects outcomes solely by increasing the likelihood of revascularization. [Footnote 4: Exclusion is formalized by double indexing potential outcomes as in Angrist, Imbens and Rubin [1996]. Let $Y_w(t,z)$ denote a participant’s wave-$w$ potential outcome given $t$ years of exposure and assignment $z$. The exclusion restriction says that $Y_w(t,z)=Y_w(t,z')$ for each $t\leq w$, $z$, and $z'\neq z$.] This assumption is plausible in the ISCHEMIA trial, since randomization to the invasive treatment likely had no direct effects on outcomes. Importantly, the exclusion restriction allows assignment to the invasive treatment to affect outcomes via the timing of revascularization.
Under Assumptions 1-3, a simple Wald-type IV estimand using wave-1 data identifies the average causal effect of one year of revascularization exposure for wave-1 compliers. Specifically, in wave 1, revascularization exposure, $T_1$, is a Bernoulli treatment that indicates participants revascularized shortly after random assignment. The Imbens and Angrist [1994] local average treatment effect (LATE) theorem applied to wave-1 data therefore implies that:
Given an exclusion restriction, random assignment makes $Z$ independent of potential outcomes and potential treatments. This independence assumption is formalized as:
As in Imbens and Angrist [1994] and Angrist and Imbens [1995], we assume that invasive assignment either induces revascularization, makes revascularization happen sooner, or leaves revascularization exposure unchanged. This is formalized as:
IV analysis of longer-run effects in the ISCHEMIA trial is complicated by time-varying exposure in a model with heterogeneous potential outcomes. As in Angrist and Imbens [1995] and Rose and Shem-Tov [Forthcoming], the principal complication here arises from the fact that compliance occurs along an extensive margin, in which participants who would never have revascularized are induced to revascularize by invasive assignment, and an intensive margin, in which assignment induces earlier revascularization among participants who would have revascularized anyway. Consequently, complier populations differ for each exposure level and change over time. At the same time, the availability of repeated follow-ups (waves) gives us a handle on this problem that has not been fully exploited in previous applications of IV to models with dynamic effects.
C
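For reference, the wave-1 Wald estimand mentioned above has the standard Imbens–Angrist form (written here with potential outcomes indexed by exposure only, which the exclusion restriction permits; the notation is a sketch, not a quotation of the paper's display):

\[
\frac{\mathbb{E}[Y_1 \mid Z=1] - \mathbb{E}[Y_1 \mid Z=0]}{\mathbb{E}[T_1 \mid Z=1] - \mathbb{E}[T_1 \mid Z=0]}
\;=\;
\mathbb{E}\!\left[\, Y_1(1) - Y_1(0) \,\middle|\, T_1(1) > T_1(0) \,\right],
\]

the average causal effect of one year of revascularization exposure for wave-1 compliers.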
We will restrict attention to cases where $G>0$. The concavity of $g(R_b)^{\frac{1}{1-\theta_2}}$ guarantees the uniqueness of the solution in what follows. We emphasize again that, as $K$ is a control variable in our model, our method allows the functions $f_i$ to be strictly concave. We next specify and characterize the normative and positive arrangements that will be considered henceforth. [Footnote 22: The above conditions will imply the existence and uniqueness of a solution to the auxiliary problem and the uniqueness of solutions to the social planners’ problem as well as the uniqueness of open-loop Nash equilibria for the cases we will investigate next. Although we assume differentiability throughout for expository purposes, our results do not require differentiability and can be demonstrated using convex optimization techniques.]
In what follows, we will illustrate the ITM method in the context of the analytical integrated assessment model in section 3. We first consider two normative benchmarks by characterizing the solutions to two social planner problems. We then study the Nash equilibria of a suitable non-cooperative dynamic game. We will discuss the reformulation given in Proposition 2.2, which lies at the heart of the ITM, in some detail in the context of the “global planner” problem. As the same steps apply, we will skip the details for the other cases.
The analytical advantages of the ITM over standard optimization methods can be exploited in a variety of dynamic models in economics and beyond. In this section, we will illustrate the method in the context of an application to climate economics. Provided that the conditions for the applicability of the ITM hold, a variety of climate models could be used for the illustration. Here, we will employ a version of the integrated assessment model in Golosov et al., (2014). Our analysis will extend their basic model in several directions that might be of independent interest, including introducing multiple heterogeneous regions, technological progress, strategic considerations, and deep (Knightian) uncertainty.
When applied to this framework, our solution method allows us to reduce the computation of the Nash equilibria of the dynamic game to the solution of temporary games indexed by time. Open-loop Nash equilibria computed by our method are also MPE. The uniqueness of MPE in the class of affine feedbacks is also discussed. To obtain comparisons between equilibrium outcomes and the efficient frontier, we distinguish between a social planner problem without country sovereignty constraints, where a “global planner” can relocate production from one country to another, and the more realistic case of a “restricted planner,” who is subject to a resource constraint for each country. In the special case of logarithmic utility, a linear production function, and a non-linear abatement function, we derive various comparisons between the equilibrium and the efficient values of variables of interest, such as consumption, abatement effort, and transfers between the two countries. We then use a numerical example to illustrate the role of heterogeneity in time-discounting and climate vulnerability between the two countries, as well as the role of the intertemporal elasticity of substitution. Finally, we demonstrate how the ITM can be applied in a robust control framework; see, for example, Hansen and Sargent, (2008), as well as in a game-theoretic framework in order to investigate the effects of uncertainty on various non-cooperative equilibrium outcomes. We find that when the marginal abatement efficiency gains are small relative to the marginal emissions created by production, it is not efficient to subsidize abatement in the global south. Under logarithmic payoffs we find that in the Nash equilibrium there is over-consumption both in the global north and (provided that technological differences between the two are not too large) in the global south. The global south receives lower abatement-technology transfers and under-invests in abatement relative to the social optimum. Both global emissions and welfare are lower as a result. Our numerical example points to some interesting implications of heterogeneous climate vulnerability. If the global south is more vulnerable to climate-related damages, then the
The paper proceeds as follows. After a brief literature review, Section 2 contains a formal treatment of the ITM. Section 3 introduces the application of the ITM to an analytical integrated assessment model. Section 4 studies the non-cooperative outcomes and two normative benchmarks, while Section 5 investigates a numerical example. In Section 6 we introduce Knightian uncertainty and apply the ITM in the context of robust control. A brief conclusion follows. The Appendices contain the details of the proofs, as well as additional findings derived for special cases of interest.
A
Monthly indicators related to the category Output and income (Group 1) remain just as important as before, but we cannot see any features related to the Labor market among the ten most important contributors.
Their place is taken by those monthly indicators listed in the FRED-MD category Consumption, orders and inventories (Group 4, in Table A.4).
Five of the most important input sequences ending in 2020:M6 (June 2020) are related to the labor market (Group 2 in Table A.2), while four of them are included in the category called Output and income (Group 1 in Table A.1).
For the training of the different ANNs, we use all the monthly indicator series included in the FRED-MD database.
Depending on the information set based on which nowcasts are generated, we use every third FRED-MD vintage until the end of the evaluation period. [Footnote 21: Table B.1 in Appendix B reports the monthly timestamps of those FRED-MD vintages corresponding to each intra-quarterly information set.]
A
For general results on the construction of a (countable or even uncountable) product probability space,
the expectation with respect to $\mathbb{P}^{0,1}$ is simply denoted by $\mathbb{E}[\cdot]$. The financial market is specified as follows.
which the process $R^p$ is given by
Notice that the process $\theta^{\rm mfg}$ (and hence also $\mu$) is $\mathbb{F}^0$-adapted and consistent with our assumption on the information structure. In particular, this means that each agent (agent-$i$) can implement her strategy based on the common
Suppose that the financial market is defined as in Assumption 3.1 with the process $\mu$ given by
D
Clearly, the AFC and CAF teams won several matches in the last round of the group stage against European countries that had probably already qualified for the knockout stage. No similar trend can be observed for South American teams, which are perhaps less prone to strategic behaviour. Table 6 suggests that the problem of incentives cannot be neglected in the analysis of FIFA World Cup group matches.
The results above are worth comparing with the findings of Krumer and Moreno-Ternero, (2023), although this is not so straightforward since they also investigate five methods. Furthermore, an important novelty of the current study is our “look into the past”, in the sense that slot allocations are investigated as if the expansion had been decided earlier.
Inspired by the recent expansion to 48 teams, Krumer and Moreno-Ternero, (2023) explore the allocation of additional slots among continental confederations by using the standard tools of the fair allocation literature. The “claims” of the continents are based on the FIFA World Ranking and the World Football Elo Ratings (http://eloratings.net/) that are summed for all member countries or just for countries being in the top 48. They also consider the average annual number of teams in the top 31 and the average annual number of teams ranked 32–48. In contrast, our approach exclusively depends on the results of the national teams that have played in the FIFA World Cup and its inter-continental play-offs. Therefore, neither friendlies, nor matches played in continental championships and qualifications affect the allocation of FIFA World Cup slots proposed here since these games provide either unreliable (friendlies) or no information on the relative strengths of the regions.
Figures 1 and 2 present how the number of slots would have evolved for the five confederations if the database had been finished after one of the last eight FIFA World Cups. Figure 1 compares seeded sets S0 and S1 under the three different update frequencies, while Figure 2 repeats this analysis for four (S1) and eight (S2) seeded nations. CONMEBOL strongly benefits if the performance of Argentina and Brazil is taken into account, as can be seen in Figure 1. Nonetheless, South America almost always receives a quota above 10 (the number of its members) if the set of matches is extended at least to the 2010 FIFA World Cup. In addition, the minimum is still above 6.5, which exceeds the number of berths provided by FIFA (see Table 2).
The methodology proposed here can be used to allocate the qualifying slots in a transparent manner. Compared to Krumer and Moreno-Ternero, (2023), an important novelty of our study is the “look into the past”: historical slot allocations are also presented assuming that the expansion to 48 teams had been implemented earlier. Furthermore, since the suggestion is based on a reasonable extension of the official FIFA World Ranking, it might be more easily accepted by the stakeholders (officials, players, coaches, TV broadcasters, and fans) than the complex methods of fair allocation proposed by Krumer and Moreno-Ternero, (2023), which depend to a great extent on how proportionality is defined.
A
“Temporary Childcare Service” is an indicator variable equal to 1 if the center provides short-term childcare services.
“Temporary Childcare Service” is an indicator variable equal to 1 if the center provides short-term childcare services.
“Third Party Review” is an indicator variable equal to 1 if the center has undergone an external evaluation.
“Third Party Review” is a binary variable equal to 1 if the center has undergone an external evaluation.
“Education and Care Preschool” is an indicator variable equal to 1 if the center is certified as combining daycare and educational functions.
B
We concluded that unequal investment skills with constant return rates provide a less accurate explanation of the empirical observations, but more refined research could be done here, including a more sophisticated model for the fitness of agents.
and the stable fixed points of $G$ are the downcrossings of the "$r=0$"-field
Figure 3: The line $g$ (see (8)) and the field $G_0$ (see (7)) against wealth $x$ of agent 1. $\bullet$ marks stable and $\circ$ unstable fixed points. The arrows indicate the direction of the field $G$.
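As a rough numerical illustration of this stability criterion, the sketch below scans a one-dimensional field on a grid, locates its sign changes, and labels downcrossings (from positive to negative) as stable and upcrossings as unstable fixed points. The particular field `G` used here is a hypothetical stand-in chosen only to produce a stable/unstable/stable pattern of the kind discussed in the text, not the model's actual drift.

```python
import numpy as np

def classify_fixed_points(G, xs):
    """Scan a 1-D field G on the grid xs, locate sign changes, and classify
    each zero: a downcrossing (+ to -) is a stable fixed point of dx/dt = G(x),
    an upcrossing (- to +) is unstable."""
    values = G(xs)
    points = []
    for i in range(len(xs) - 1):
        if values[i] * values[i + 1] < 0:
            # linear interpolation of the crossing location
            x0 = xs[i] - values[i] * (xs[i + 1] - xs[i]) / (values[i + 1] - values[i])
            points.append((round(float(x0), 3), "stable" if values[i] > 0 else "unstable"))
    return points

# Hypothetical field with a stable/unstable/stable pattern (not the model's G)
G = lambda x: -(x - 0.2) * (x - 0.5) * (x - 0.8)
print(classify_fixed_points(G, np.linspace(0.0, 1.0, 2000)))
# -> [(0.2, 'stable'), (0.5, 'unstable'), (0.8, 'stable')]
```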
Finally, our heuristics on the structure and number of stable and unstable fixed points of the driving field $G$ (Section 3) for moderate $r$ could be completed by a rigorous treatment in the future.
In Section 2, we will formally introduce the model and present a few rigorous results concerning the long-time behaviour using the method of stochastic approximation (see e.g. [28, 31]). In Section 3, we discuss the cases of $A=2$ and $A=3$ agents, in order to gain a visual understanding of the different regimes of the process. In Section 4, we fit the model parameters to available data and simulate the process for different initial configurations. We compare the simulated wealth distribution to the data from Figure 1 and take a look at some other inequality indicators in order to reveal advantages and disadvantages of the proposed model. Moreover, we formulate predictions for the future based on our model. In Section 5, we use our model to discuss if different investment skills provide an alternative explanation for the gap between wealth and wage distribution. Finally, in Section 6, we bring together our numerical and theoretical findings and discuss the effect of the recent increase of interest rates on the future of inequality within our model.
C
Some students assigned treatment $Z=1$ (viewed as an instrument below) did not engage with the program either by checking their earnings or making contact with the program advisor.
The efficiency differences between designs are more pronounced for the heterogeneity variables CATE and CLATE than for average effects SATE and LATE.
The authors view this as noncompliance with the instrument $Z$ and estimate both intention-to-treat (ITT) effects and effects on compliers (LATE).
Next, consider a setting where the researcher wants to estimate a parametric model of treatment effect heterogeneity in an experiment with noncompliance and randomized binary instrument $Z$.
For $x_i = F_i$, this has a simple interpretation as the difference in ITT effects between students with and without financial stress:
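The display that this cell introduces is not reproduced here; as a hedged stand-in, the following sketch (with simulated data and hypothetical variable names) computes the ITT effect, the LATE via the Wald ratio, and the difference in ITT effects between students with and without financial stress ($x_i = F_i$).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical experiment: Z is the randomized instrument (program offer),
# D is engagement (noncompliance possible), F flags financial stress.
F = rng.binomial(1, 0.4, n)
Z = rng.binomial(1, 0.5, n)
D = Z * rng.binomial(1, 0.6, n)          # one-sided noncompliance
Y = 1.0 + 0.5 * D + 0.8 * D * F + rng.normal(0, 1, n)

df = pd.DataFrame({"Y": Y, "Z": Z, "D": D, "F": F})

def itt(sub):
    return sub.loc[sub.Z == 1, "Y"].mean() - sub.loc[sub.Z == 0, "Y"].mean()

def late(sub):
    # Wald estimator: ITT on the outcome divided by the first stage
    first_stage = sub.loc[sub.Z == 1, "D"].mean() - sub.loc[sub.Z == 0, "D"].mean()
    return itt(sub) / first_stage

print("ITT:", itt(df), "LATE:", late(df))
print("Difference in ITT effects (financial stress vs. not):",
      itt(df[df.F == 1]) - itt(df[df.F == 0]))
```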
B
Artificial neural networks (ANNs) are the simplest type of neural network. They consist of multiple layers of neurons and can be used for regression and classification tasks. Each node has an activation function that transforms the incoming input and passes it on to the next layer. Common activation functions are sigmoid, tanh, ReLU, and linear; which one is chosen depends on the task. Training an ANN involves adjusting weights using algorithms like backpropagation to minimize the difference between predicted and actual outputs.
LSTMs are a type of recurrent neural network. They are designed to address the vanishing gradient problem in traditional RNNs, allowing them to better capture time dependencies in sequential data, and they achieve this through the use of gated units called memory cells that can maintain information over time. Figure 8 represents a single LSTM neuron and shows how the information is processed.
RNNs introduce recurrent connections, allowing information to persist and be shared across different time steps in sequential data, where temporal dependencies are crucial. These models are able to handle variable-length sequences, allowing them to adapt dynamically to different lengths of input data. Their architectures can be extended and optimized with variations such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), which improve the model’s ability to retain and utilize information over longer sequences.
The integration of HMM features, such as hidden states and means, significantly enhanced the predictive performance of LSTM models for forecasting. By utilizing the latent state information provided by HMMs, the LSTM models were able to capture subtle shifts in the economy, resulting in more accurate and robust predictions. While the models demonstrated strong performance in short-term predictions, their effectiveness over longer forecasting horizons was somewhat limited. Further modeling, refinement, and experimentation with different economic features could be done to create stable long-term forecasts.
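As a minimal sketch of this kind of HMM-augmented LSTM, assuming the hidden-state label and state mean have already been extracted beforehand (e.g., with a Gaussian HMM) and treating the architecture and hyperparameters below as illustrative choices rather than the ones used in the study:

```python
import torch
import torch.nn as nn

class HMMAugmentedLSTM(nn.Module):
    """LSTM forecaster whose inputs stack the observed series with
    HMM-derived features (hidden-state label and state mean)."""
    def __init__(self, n_features=3, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # one-step-ahead forecast

# Toy data: the series plus two stand-in HMM features per time step
torch.manual_seed(0)
batch, seq_len = 64, 12
x = torch.randn(batch, seq_len, 3)
y = torch.randn(batch, 1)

model = HMMAugmentedLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```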
Neural networks are brain-inspired supervised learning models made with interconnected nodes and layers to learn patterns and make predictions from data. Unlike linear models, they are able to capture non-linear relationships between the target and features; however, this leads to their biggest drawback: a lack of interpretability. Neural networks are black-box models, meaning that the user only sees what goes in and what comes out, which makes them difficult to interpret compared to traditional models. Neural networks have seen a rise in usage in economics, but due to their lack of interpretability they are often overshadowed by simpler models. There are different types of neural networks; this paper focuses on Recurrent Neural Networks (RNNs), specifically Long Short-Term Memory (LSTM) networks.
B
This definition includes all types of capital income recorded in national accounts which should be attributed to resident individuals.
They are, however, often excluded from the personal income definitions in surveys or fiscal data used to calculate inequality series, while being included in the calculation of capital shares from national accounts.
For instance, owner-occupied housing income and all types of financial income, whether directly received by households in the form of dividends or retained in corporations as undistributed profits, are included.
In doing so, the calculated top income shares include all forms of income recorded in the national accounts and are not subject to the conceptual differences seen in traditional microdata.
First, total national income, as defined by the System of National Accounts (SNA), includes income flows beyond household incomes, such as undistributed corporate profits and imputed rents for owner-occupied housing.
B
The estimation results for the preference parameters ($\beta$ and $\Sigma$) are presented in Table 4. The estimated slope coefficients for price and the organic indicator in the proposed approach have reasonable signs and magnitudes, aligning closely with the results from the BLP approach. Both approaches also provide significant evidence of dispersion in the random coefficients for these variables. Additionally, well-known brands, such as Chobani, Fage Total, and Stonyfield Organic Oikos, exhibit relatively larger brand fixed effects in the consumer utility function. Finally, the random coefficient model (either Bayesian or BLP estimates) implies a more elastic demand compared to the model without random coefficients, as indicated by the last two rows of the table. This difference is primarily driven by the dispersion in the random coefficients on price, which captures heterogeneity in consumer sensitivity to price changes.
Monte Carlo simulation results show that, when the sparsity assumption holds in the data generating process (DGP), our approach performs similarly to the BLP estimator with strong IVs and outperforms the BLP estimator with potentially weak IVs. This supports our theoretical results on identification. Additionally, we examine cases where the demand shocks are not strictly sparse but exhibit approximate sparsity in the DGP, and find that our Bayesian shrinkage estimator still performs reasonably well in estimating the preference parameters, demonstrating robustness to mild misspecifications.
The estimation results for the preference parameters ($\beta$ and $\Sigma$) are presented in Table 7. For the mean random coefficients, our Bayesian shrinkage approach yields estimates with reasonable signs and magnitudes, closely aligning with those of the standard BLP estimates. Regarding the standard deviations (SDs) of random coefficients, the Bayesian shrinkage approach indicates considerable dispersion for all random coefficients, suggesting rich heterogeneity in consumers' tastes across all product characteristics. In contrast, several SDs from the BLP estimates, including those for weight, size, power steering, and automatic transmission, are virtually zero. These near-zero estimates may be attributed to the weak IV problem, as highlighted by Reynaert and Verboven (2014). Furthermore, while the BLP estimator is sensitive to the choice of IVs (based on our experiments with the data, though specific results are not reported here), our Bayesian shrinkage approach is immune to this issue, making it a particularly advantageous tool in practice.
Overall, our Bayesian approach produces similar results to the BLP approach in this case. We emphasize that our Bayesian shrinkage approach does not rely on IVs, and the agreement between the two approaches here validates the BLP results that rely on IVs. However, such agreement is not guaranteed in general; in cases where the two approaches diverge, it becomes essential to assess which underlying assumption - sparsity or the validity of IVs - is more plausible in the specific context.
In general, the sparsity assumption is neither stronger nor weaker than the conditional mean restriction (6). Assumption (6) does not restrict the form of price endogeneity, but it requires valid IVs. Conversely, the sparsity assumption imposes certain restrictions on the form of price endogeneity, but it avoids the need for IVs entirely.
C
Our goal is to find the optimal value of $\alpha$ to achieve the maximum objective function value for the firm over different observation periods. For simplicity, we will compare a shorter-term observation (of 3 years) with a longer-term observation (of 6 years).
By analyzing these different scenarios, we can observe how different levels of investment and urgency in transitioning to low-carbon technologies affect key variables of the framework over time. This analysis helps us understand the trade-offs between immediate costs and long-term benefits, guiding enterprises in making informed decisions about their decarbonization strategies. The goal is to determine the optimal value of $\alpha$ that maximizes the overall profit across multiple periods, ensuring both short-term (3-year observation period) profitability and long-term (6-year observation period) sustainability. We will not consider the scenario of ‘No Decarbonization’ in this part, as there is no need to determine the optimal transitional investment ratio $\alpha$ under that scenario.
By using this multi-period framework, we aim to find the optimal value of $\alpha$ that maximizes the overall profit of the firm across multiple periods. The multi-period model is particularly useful for understanding the long-term implications of transition investments for the firm. Transitional investments typically incur immediate costs but generate substantial future benefits. For example, investments in low-carbon technologies might initially increase production costs due to the capital expenditure required for new equipment or processes. However, over time, these investments can lead to substantial efficiency gains, cost reductions, and improved regulatory compliance, which enhance the future profits of the company.
Short-Term Observation (3 years): This period allows us to understand the immediate and near-future impacts of transition investments for the firm. It is suitable for enterprises looking for quick wins and immediate adjustments in their strategies.
Our goal is to find the optimal value of $\alpha$ to achieve the maximum objective function value for the firm over different observation periods. For simplicity, we will compare a shorter-term observation (of 3 years) with a longer-term observation (of 6 years).
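A minimal sketch of this optimization, assuming a stylized per-period profit function in which the transitional investment ratio $\alpha$ carries a convex ongoing cost and a benefit that accumulates over time (all functional forms and numbers below are hypothetical):

```python
import numpy as np

def period_profit(alpha, t, base=100.0, cost_rate=0.3, benefit_rate=0.12):
    """Stylized per-period profit: ongoing transition spending is convex in
    alpha, while efficiency gains accumulate linearly with elapsed time t.
    Purely illustrative, not the paper's actual objective function."""
    investment_cost = cost_rate * alpha**2 * base
    efficiency_gain = benefit_rate * alpha * base * t
    return base - investment_cost + efficiency_gain

def total_profit(alpha, horizon):
    return sum(period_profit(alpha, t) for t in range(horizon))

alphas = np.linspace(0.0, 1.0, 101)
for horizon in (3, 6):  # short-term vs. longer-term observation period
    best = max(alphas, key=lambda a: total_profit(a, horizon))
    print(f"{horizon}-year horizon: optimal alpha = {best:.2f}")
```

Under these illustrative parameters the 3-year horizon favors a smaller $\alpha$ (0.20) than the 6-year horizon (0.50), matching the intuition that longer observation periods reward heavier transitional investment.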
C
Example: Given the same multivariate datasets used above, consider any two variables of interest, say $X_{t-\tau}^{i}$ and $X_{t}^{j}$, and let $Z$ be the remainder of the variables in the data, which could contain a potential collider or common cause of the variable pair in question, where $Z = X_{t-\tau-k}^{1}, X_{t-1:t-\tau-k}^{2}, X_{t:t-\tau-k}^{3}$.
Following these illustrations, we can see variations in the results obtained in both GC cases. BVGC and MVGC performed well, complementing each other's shortcomings. In the case of the true causal links, both BVGC and MVGC fulfil propositions 1 and 2, hence the discovery of true causal links (in blue in the inferences column). In larger datasets with more than three variables, it follows that true causal links can be inferred whenever propositions 1 and 2 are fulfilled. With this established, we then introduce our algorithm.
Table 1: Illustrating BVGC and MVGC expected results on data and how our proposition identifies true causal links. False positives are depicted in green, blue indicates correctly identified links (i.e., true positives), and the red variables indicate conditioned variables.
From the above analogies, a combination of the dependencies from both propositions 1 and 2 must hold in order to infer a causal link between two variables. The propositions are as follows:
Proposition 3: Combining propositions 1 and 2 (a combination operation using the logical AND, symbolized as $\land$) enforces all RCCPs and reveals only true causal links.
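A minimal sketch of Proposition 3's combination step, with hypothetical test decisions standing in for actual BVGC and MVGC outputs:

```python
import numpy as np

# Hypothetical Granger-causality decisions for a 3-variable system
# (rows: cause, cols: effect). True = "cause Granger-causes effect".
bvgc = np.array([[False, True,  True ],
                 [False, False, True ],
                 [False, False, False]])   # bivariate GC (Proposition 1)
mvgc = np.array([[False, True,  False],
                 [False, False, True ],
                 [False, False, False]])   # multivariate GC (Proposition 2)

# Proposition 3: only links detected by BOTH tests are kept as true causal
# links; links flagged by only one test are treated as spurious.
true_links = np.logical_and(bvgc, mvgc)

names = ["X1", "X2", "X3"]
for i in range(3):
    for j in range(3):
        if true_links[i, j]:
            print(f"{names[i]} -> {names[j]}")
```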
D
$\mathsf{VC} = (\mathsf{Gen}, \mathsf{Digest}, \mathsf{Open}, \mathsf{Vf})$.
$\mathsf{code} = \mathsf{RS}.\mathsf{Encode}(\mathbf{c})$
$\mathsf{RS}.\mathsf{Recons}(\mathsf{code}') = \widetilde{\mathbf{c}}$.
$\mathsf{code} := \mathsf{RS}.\mathsf{Encode}(\mathbf{c})$.
$\mathsf{RS} = (\mathsf{Encode}, \mathsf{Recons})$.
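To make the $\mathsf{RS} = (\mathsf{Encode}, \mathsf{Recons})$ interface concrete, here is a minimal erasure-only Reed-Solomon sketch over a prime field: Encode evaluates the message polynomial at $n$ points, and Recons recovers the message from any $k$ surviving symbols by Lagrange interpolation. The field modulus and parameters are illustrative, and correcting corrupted (as opposed to missing) symbols is not implemented.

```python
P = 2**31 - 1          # illustrative field modulus (a Mersenne prime)

def rs_encode(c, n):
    """Encode message c (length k) as evaluations of its polynomial at 1..n."""
    k = len(c)
    return [(x, sum(c[i] * pow(x, i, P) for i in range(k)) % P) for x in range(1, n + 1)]

def rs_recons(shares, k):
    """Reconstruct the message from any k (x, y) pairs via Lagrange interpolation."""
    xs, ys = zip(*shares[:k])
    coeffs = [0] * k
    for j in range(k):
        basis = [1]                      # Lagrange numerator, built incrementally
        denom = 1
        for m in range(k):
            if m == j:
                continue
            denom = denom * (xs[j] - xs[m]) % P
            new = [0] * (len(basis) + 1)  # multiply basis by (X - x_m)
            for d, a in enumerate(basis):
                new[d + 1] = (new[d + 1] + a) % P
                new[d] = (new[d] - a * xs[m]) % P
            basis = new
        inv = pow(denom, P - 2, P)        # modular inverse of the denominator
        for d, a in enumerate(basis):
            coeffs[d] = (coeffs[d] + ys[j] * a % P * inv) % P
    return coeffs

c = [12, 34, 56]                       # k = 3 message symbols
code = rs_encode(c, n=6)               # n = 6 codeword symbols
assert rs_recons(code[2:5], k=3) == c  # any 3 surviving symbols recover c
```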
D
Hypothesis, $H$: Cognitive diversity, defined as individuals seeing problems and making predictions based on different models of the world, is a group property that improves group performance in various contexts.
Hypothesis′, $H'$: Greater cognitive diversity within a group correlates with better problem-solving and prediction abilities.
Hypothesis 1, $H_1$: The best agent cannot always solve the problem.
Hypothesis 2, $H_2$: The “diverse group” can always solve the problem (or outperform the best-performing agent; see Remark D.2 and Appendix A).
be expected to be correlated with greater cognitive diversity, which, in turn, is correlated with better problem-solving and prediction. A central assumption of the argument is that politics is characterized by uncertainty. This uncertainty (which is an assumption about the world, not necessarily the subjective epistemic state of the deliberators) is what renders all-inclusiveness on an equal basis epistemically attractive as a model for collective decision-making. Given this uncertainty, egalitarian inclusiveness is adaptive or “ecologically
A
Considering a network model consisting of around 2,700 nodes, Brouhard et al. (2023) design nine zone delineations for Continental Europe by implementing K-Means and hierarchical clustering.
The authors calculate LMPs for multiple, independent single-period optimization problems (i.e., optimal power flow with direct current approximation), and use geographic coordinates and these prices as clustering features.
We evaluate the clusterings computed for each year using (1) time series of nodal prices as features, and (2) time series of nodal prices and the location coordinates of nodes.
Tables 4, 4 and 4 show the average price standard deviations for different configurations with 1, 2, 3 or 4 price zones or clusters. The configurations refer to (1) a single price zone (column ‘Single price zone’), (2) the configurations proposed by ACER (columns ACER DE2 (k-means), DE2 (spectral), DE3 (spectral), DE4 (spectral)) and (3) the optimal configurations computed with K-Means and Spectral Clustering based on nodal prices (columns ‘K-Means’ and ‘Spectral Clustering’).
This behavior can also be observed when latitude and longitude are included as features and is independent of the number of clusters or clustering algorithm.
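A sketch of this clustering step using scikit-learn, with synthetic nodal prices and coordinates standing in for the actual network data and with the feature choices and number of zones purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical inputs: hourly nodal prices (nodes x hours) and node coordinates.
n_nodes, n_hours = 300, 8760
prices = rng.normal(50, 10, size=(n_nodes, n_hours))
coords = rng.uniform(0, 1, size=(n_nodes, 2))        # latitude, longitude

def cluster_nodes(prices, coords=None, k=3):
    """Cluster network nodes into k candidate price zones using the price
    time series as features, optionally appending location coordinates."""
    features = prices if coords is None else np.hstack([prices, coords])
    features = StandardScaler().fit_transform(features)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

labels_prices_only = cluster_nodes(prices, k=3)
labels_with_coords = cluster_nodes(prices, coords, k=3)

# Within-zone price standard deviation, averaged over zones, as a rough
# measure of how homogeneous each configuration is.
avg_std = np.mean([prices[labels_prices_only == z].std() for z in range(3)])
print("average within-zone price std:", avg_std)
```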
A
We now extend the $2\times 2$ DID-IV design to multiple period settings with the staggered adoption of the instrument across units (Black et al. (2005); Bhuller et al. (2013); Lundborg et al. (2014); and Meghir et al. (2018)).
We call it a staggered DID-IV design, and establish the target parameter and identifying assumptions.
In this section, we formalize an instrumented difference-in-differences (DID-IV) design in two-period/two-group settings. We first establish the target parameter and the identifying assumptions in this design. We then investigate the treatment adoption behavior across units, and clarify the interpretation of the parallel trends assumption in the outcome. Finally, we describe the connections between DID-IV and Fuzzy DID proposed by de Chaisemartin and D’Haultfœuille (2018).
In this subsection, we establish the identification assumptions in staggered DID-IV designs. These assumptions are the natural generalization of Assumptions 1-6 in $2\times 2$ DID-IV designs.
Next, we consider the DID-IV design in more than two periods with units being exposed to the instrument at different times. We call this a staggered DID-IV design, and formalize the target parameter and identifying assumptions. Specifically, in this design, our target parameter is the cohort-specific local average treatment effects on the treated (CLATT). The identifying assumptions are the natural generalization of those in $2\times 2$ DID-IV designs.
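For the $2\times 2$ case, the target parameter can be estimated by a Wald-DID ratio: the DID of the outcome with respect to instrument exposure, divided by the DID of the treatment. The sketch below illustrates this on simulated data; the column names and data-generating process are hypothetical.

```python
import numpy as np
import pandas as pd

def wald_did(df):
    """2x2 DID-IV (Wald-DID) sketch. Assumed columns: Y (outcome), D (treatment),
    G (=1 if the unit is exposed to the instrument), T (=1 in the post period)."""
    def did(col):
        m = df.groupby(["G", "T"])[col].mean()
        return (m.loc[(1, 1)] - m.loc[(1, 0)]) - (m.loc[(0, 1)] - m.loc[(0, 0)])
    return did("Y") / did("D")

# Toy data with a true treatment effect of 2.0 on compliers
rng = np.random.default_rng(1)
n = 20_000
G = rng.binomial(1, 0.5, n)
T = rng.binomial(1, 0.5, n)
D = (G * T) * rng.binomial(1, 0.7, n)            # instrument exposure shifts take-up
Y = 1.0 + 0.3 * T + 0.2 * G + 2.0 * D + rng.normal(0, 1, n)
print(wald_did(pd.DataFrame({"Y": Y, "D": D, "G": G, "T": T})))  # close to 2.0
```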
A
Regulatory Action Threshold: Governments should prioritize policies that facilitate AI adoption to at least 60% in urban mobility systems, as this marks the point where congestion significantly decreases.
AI Penetration Threshold: The model identifies 50% adoption as the threshold for seeing significant congestion reductions. Policymakers should create long-term strategies to drive AI adoption toward this level, using tools such as public sector leadership in deploying AI-based mobility solutions.
Incentives for Early Adoption: To reach this adoption threshold, policies could include tax incentives for AI-integrated transport solutions and subsidies for autonomous vehicle infrastructure.
Innovation Incentives: Strong regulatory support should be accompanied by incentives that accelerate AI adoption. Innovation grants, research funding, and public-private partnerships can help overcome the technical challenges slowing adoption.
Weak regulatory support results in a lack of clear direction for AI and transportation policy. Governments fail to provide the incentives or frameworks necessary for AI adoption, leading to disjointed and inconsistent progress across regions. Without national or international coordination, cities are left to their own devices, resulting in fragmented policies that neither support innovation nor address critical issues like sustainability and data governance.
B
The remainder of the paper is organized as follows. Section 2 describes the data sources and variables used in our analysis. Section 3 outlines the different steps of our nowcasting methodology and the models proposed. The results of the empirical analysis are presented in Section 4. Finally, Section 5 concludes.
State-level economic indicators are available at higher frequency and are published in a more timely fashion than CO2 emissions or energy consumption. Concretely, quarterly real and per-capita personal income (PI) is obtained from the BEA since 1950 and features a publication lag of approximately three months. Monthly electricity consumption (ELEC), computed as total electricity sales to end-users across all U.S. states, is published by the EIA since 1990 with a publication lag of about two months following the end of the reference month. For the analysis, we consider the year-on-year log difference of both variables.
Annual energy consumption (EC) data at the state-level is obtained from the State Energy Data System (SEDS) also produced by the EIA. This dataset, available from 1960 onwards, is the main input to compute the state-level CO2 emissions. In particular, the SEDS collects detailed data on the consumption of coal, natural gas, and petroleum across the different economic sectors. To estimate CO2 emissions, the EIA applies specific energy content and carbon emission factors to each type of consumed fuel. These factors convert the quantity of fuel used into energy produced and corresponding CO2 emissions. The calculations are periodically adjusted to reflect changes in fuel composition and new scientific findings. Regarding timeliness of the data, the publication lag of energy consumption is approximately 18 months, considerably shorter than that for CO2 emissions. As with CO2 emissions, we focus on the growth rate of per-capita energy consumption.
Our primary variable of interest is state-level energy-related CO2 emissions in the U.S. Data for this variable, sourced from the U.S. EIA, are available annually starting from 1970. Total state CO2 emissions aggregates emissions from direct fuel use across all sectors, including residential, commercial, industrial, and transportation, as well as from primary fuels consumed for electricity generation. The panel consists of $N=51$ units, which include the 50 states and the District of Columbia. The publication delay for CO2 emissions data is approximately two years and three months after the end of the reference year, a notably longer lag compared to other state-level economic data. Our analysis focuses on nowcasting the growth rate of per-capita CO2 emissions.
The application of our nowcasting methodology to the domains of energy consumption and CO2 emissions is highly pertinent, particularly due to the significant delays in the publication of official data. The publication delay for CO2 emissions data extends to approximately two years and three months after the end of the reference year, while the delay for energy consumption data is around 18 months. Our methodology leverages the more prompt availability of economic data to provide early insights to policymakers on the evolution of critical environmental variables. As the year progresses and more data becomes available, the accuracy of our predictions improves. This approach enables a timely and precise tracking of anthropogenic CO2 emissions at both national and sub-national levels, which is crucial for the development of effective climate policies and for meeting long-term international commitments to combat climate change.
C
In the Eurostat database, economic values of agricultural production are stratified into eleven classes of economic size with increasing values of the standard output. In Table 1 we report the number of farms (thousands) by classes of economic size for the whole European Union (28 countries) from 2010 to 2020.
In the Eurostat database, economic values of agricultural production are stratified into eleven classes of economic size with increasing values of the standard output. In Table 1 we report the number of farms (thousands) by classes of economic size for the whole European Union (28 countries) from 2010 to 2020.
Table 3 provides a comprehensive overview of the main results, distinguishing between the years 2010 and 2020 and showing the parameter estimates, the standard errors and the LR test for both the SAR model with the full-sample (pooled model in Table 3) and the SCASAR model for each cluster. The clusters are shown in Figure 2, for the year 2010, and in Figure 3, for the year 2020.
$\Delta_{2020-2010}$
Table 1: Number of farm holdings (thousands) for the whole European Union (28 countries) by economic size classes. The column $\Delta_{2020-2010}$ reports the observed raw variation from 2010 to 2020.
D
In a similar vein, Čopič and Ponsatí (2008) study robust prior-independent mechanisms when the buyer’s and seller’s valuations are discounted over time and hence both agents are eager to have the trade occur as soon as possible. In this setting, the mediator keeps the reported valuations of the buyer and the seller private while trade is incompatible. Then, after trade becomes compatible, the mediator discloses the agreement and trade occurs at the agreed price.
We emphasize that none of the literature has studied the approximation ratio for social welfare or gains-from-trade in the standard bilateral trade with broker model introduced by Myerson and Satterthwaite (1983).
Correspondingly, a long line of work (McAfee, 2008; Blumrosen and Dobzinski, 2014, 2016; Blumrosen and Mizrahi, 2016) has studied the best possible approximate efficiency with respect to those desiderata, in particular for the notion of gains-from-trade (GFT), which is defined as the expected marginal increase of the social welfare and is thus typically harder to approximate than social welfare (see Section 2 for more details).
The problem of bilateral trade, introduced by Myerson and Satterthwaite (1983), has been a cornerstone in mechanism design and algorithmic game theory in the past few decades.
In comparison to the significant developments in bilateral trade, bilateral trade with a broker has been less studied, despite its significance and even its introduction in the same seminal paper by Myerson and Satterthwaite (1983). In terms of prior work on this problem, Myerson and Satterthwaite (1983) characterize one BNIC and IR mechanism that maximizes the broker’s expected profit.
A
We compare two policies regarding the disclosure of lottery information. Under the revealing policy, students are informed of their lottery number before submitting ROLs (in our model, we do not require that students receive any information about the others’ lottery numbers, but in practice students can receive useful statistics about the others’ lottery numbers), while under the covering policy, students remain uninformed about their lottery outcome before submitting ROLs. Figure 1 shows the timeline of the school choice game under the revealing policy: students’ utilities are first realized; then, the lottery is drawn and revealed to the students, each learning only his own lottery number; students then submit their ROLs; finally, the matching is generated.
We compare two policies regarding the disclosure of lottery information. Under the revealing policy, students are informed of their lottery number before submitting ROLs (in our model, we do not require that students receive any information about the others’ lottery numbers, but in practice students can receive useful statistics about the others’ lottery numbers), while under the covering policy, students remain uninformed about their lottery outcome before submitting ROLs. Figure 1 shows the timeline of the school choice game under the revealing policy: students’ utilities are first realized; then, the lottery is drawn and revealed to the students, each learning only his own lottery number; students then submit their ROLs; finally, the matching is generated.
Our first model deviates from Abdulkadiroglu, Che and Yasuda (2011) by imposing a restriction on the number of schools that students may report. Therefore, students need to decide which schools to rank. The revealing policy effectively resolves uncertainties for students by essentially informing them of their attainable schools via lottery numbers, resulting in a matching outcome equivalent to that produced by DA with complete preference lists. Consequently, every student is matched. From an interim perspective (after utilities are realized but before the lottery is drawn), every student has an equal chance of attending each school. In contrast, under the covering policy, students need to strategize. For certain realized utilities, students may concentrate their preference submissions on a subset of schools, resulting in wasted seats at other schools and leaving some students unmatched. From an interim perspective, every student faces a positive probability of remaining unmatched, and from an ex-ante perspective (before utilities and the lottery are drawn), students receive an equal random assignment that is first-order stochastically dominated by that obtained under the revealing policy. Overall, the first model demonstrates the benefit of the revealing policy in resolving students’ conflicting preferences.
In the revealing treatment, the lottery is first drawn, followed by the announcement of the lottery number to each student prior to her submission of the ROL. In the covering treatment, the lottery is drawn after all students have submitted their ROL.
Our analysis compares the two policies at different stages of the game. Ex-ante refers to the timing before students’ utilities are realized. Interim refers to the timing after students’ utilities are realized but before the lottery is drawn. For individual students, interim also denotes the timing when they only know their own utilities (but do not know the others’ and the lottery outcome). Ex-post refers to the timing after the lottery has been drawn and students have submitted their ROLs.
D
Connectedness from: $\sum_{n\neq j} d_{nj}^{h}$.
This is a measure of how information in state $j$ impacts the forecast error variances of other states (that is, the summation is over $n$). This is called a "connectedness to" measure.
This is a measure of how information in other states impacts the forecast error variance of region $n$ (that is, the summation is over $j$).
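Concretely, once the $h$-step forecast error variance decomposition matrix $D^{h} = [d_{nj}^{h}]$ is available, both directional measures are just off-diagonal row and column sums. The matrix below is a made-up example, and the code follows the convention that $d_{nj}^{h}$ is the share of region $n$'s forecast error variance attributable to shocks from region $j$.

```python
import numpy as np

# Hypothetical h-step forecast error variance decomposition matrix
# (rows sum to 1; D[n, j] = share of n's variance due to shocks from j).
D = np.array([[0.70, 0.20, 0.10],
              [0.15, 0.75, 0.10],
              [0.25, 0.05, 0.70]])

off_diag = D - np.diag(np.diag(D))
connectedness_from = off_diag.sum(axis=1)   # for each n: sum over j != n of D[n, j]
connectedness_to   = off_diag.sum(axis=0)   # for each j: sum over n != j of D[n, j]

print("from others:", connectedness_from)   # 0.30, 0.25, 0.30
print("to others:  ", connectedness_to)     # 0.40, 0.25, 0.20
```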
Then we define the total directional connectedness to other regions from region $j$ at horizon $h$ as:
The remaining parameter is $\sigma_{cs}^{2}$, which is the variance of the error in the cross-sectional restriction. This restriction is of importance in obtaining accurate estimates of monthly state GDP, since it is the main avenue through which newly released GDP growth figures for the U.S. as a whole spill over into estimates for the individual states. If $\sigma_{cs}^{2}$ is a small number, then this link between the U.S. and the individual states is strengthened. Larger numbers weaken this link. Of course, this parameter is estimated from the data, but its prior can influence the estimate. We assume:
B
Table 5 displays the point totals for each of the five variations. Compared to the IRV outcome, the BC methods produce much fairer results for this election. First, recall that monotonicity and no-show paradoxes cannot occur with these points-based methods as they do with IRV. We also observe that Begich receives the most points with every method, which means there was no Condorcet winner failure; hence, no verifiable failures are observed.
We begin a discussion of whether voting failures occurred in particular RCV elections using the BC variations with the previously mentioned 2022 Alaska US House Special Election, as seen in Table 1. A partial ballot of two candidates is displayed with the unlisted candidate in the third ranking to simplify the presentation of the profile, although note that we do consider those ballots as being partial when applying MBC and BCU.
Table 1: The Alaska Special Election for the US House in August 2022. Write-ins are removed and two-candidate ballots have the unlisted candidate included in third rank.
Table 5: The results from the five BC variations on the Alaska Special Election for the US House in August 2022. With three candidates, a full ballot would earn $(3,2,1)$ points for ABC/MBC/BCU and $(4,2,1)$ for each of QBC and EBC.
Another observation from our results is that the five BC variations agree on a winner in the vast majority of RCV elections. In particular, 384 of the 421 elections, or about 91%, have the same winning candidate for all five methods. Furthermore, the three variations EBC, QBC, and ABC that all feature the averaged process for handling partial ballots agree in nearly 96% of the elections. The cases where the methods disagree are where many of the voting failures arise, but it is notable that the choice of variation is irrelevant for such a high proportion of elections.
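To illustrate the averaged handling of partial ballots shared by EBC, QBC, and ABC, the sketch below computes Borda-style point totals in which unlisted candidates split the leftover points equally; the ballot profile and the $(3,2,1)$ weights are illustrative stand-ins rather than the actual election data.

```python
def borda_points(ballots, candidates, weights=None):
    """Borda count sketch with averaged handling of partial ballots: listed
    candidates get the usual points for their rank, and all unlisted
    candidates split the remaining points equally."""
    m = len(candidates)
    weights = weights or list(range(m, 0, -1))        # e.g. (3, 2, 1) for m = 3
    totals = {c: 0.0 for c in candidates}
    for ranking, count in ballots:
        for pos, cand in enumerate(ranking):
            totals[cand] += weights[pos] * count
        unlisted = [c for c in candidates if c not in ranking]
        if unlisted:
            leftover = sum(weights[len(ranking):]) / len(unlisted)
            for cand in unlisted:
                totals[cand] += leftover * count
    return totals

# Made-up three-candidate RCV profile: (ranking, number of ballots);
# the last two ballot types are partial.
ballots = [(("Peltola", "Begich"), 60), (("Begich", "Palin"), 50),
           (("Palin", "Begich"), 55), (("Peltola",), 20), (("Palin",), 15)]
print(borda_points(ballots, ["Peltola", "Begich", "Palin"]))
# Begich earns the most points in this made-up profile.
```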
C
Global Economic Slowdowns: Various economic crises, driven by the pandemic, inflation, energy price shocks, and geopolitical tensions, have led to slower global growth and deepened inequalities. This has hindered progress in reducing inequality (SDG 10), eradicating poverty (SDG 1), and ensuring decent work and economic growth (SDG 8) [15]. These crises have significantly affected the advancement of member countries [16]. As a result, policymakers have shifted their focus from the long- and medium-term goals of the SDGs to addressing immediate, pressing challenges. The need for fundamental SDG cooperation and clean energy has become more critical than ever in the current global context [17, 18].
In response to this need, NITI Aayog created the SDG India Index, marking the first government-led effort to track SDG progress at the subnational level. Designed to map the progress of each state and union territory, this index not only measures advancement but also fosters cooperative and competitive federalism to encourage further action on the SDGs (Economic Survey, 2021-22). NITI Aayog released the SDG Index annually from 2018 to 2021-22 and most recently in 2023-24, with the latest edition assessing performance across 16 SDGs, ranking states and monitoring their progress over time. SDG 17, focusing on global partnerships, was evaluated qualitatively. Notably, the first edition in 2018 excluded SDGs 12, 13, 14, and 17, leading the Observer Research Foundation (ORF) to create its own SDG Index in 2019, arguing that the omission of SDG 13 (Climate Action) left the original index incomplete. ORF’s index incorporated 14 of the 17 SDGs, providing a more comprehensive view of state progress in climate action and related goals [21, 22].
Concrete plans have been outlined globally to achieve the Sustainable Development Goals (SDGs) set under Agenda 2030, and India is following suit. Although there are various composite indices to measure SDG attainment worldwide, there is a lack of state-level analysis in India, necessitating a complex approach to evaluate each state’s progress. This study addresses this gap by establishing a bipartite network that assesses the SDG status across Indian states and union territories and ranks them based on their contributions and current status concerning the SDGs. Using NITI Aayog’s rankings as a benchmark, the study provides a basis for comparison and highlights that while there is a positive correlation between this study’s index and the NITI Aayog rankings, the relationship remains weak.
The Sustainable Development Goals (SDGs) are inherently complex and multi-faceted, which has led to the development of composite indices that offer more meaningful ways to assess their achievability. At the global level, SDG indices cover all 193 UN member states, with the Sustainable Development Solutions Network (SDSN) providing such indices since 2015. The SDSN’s latest 2022 report continues this initiative, offering a broad view of SDG progress across countries [11]. However, creating SDG indices at the subnational level remains a work in progress, especially in large, diverse nations like India [19, 20]. Given India’s federal structure, it is critical for individual states to actively participate in SDG advancement, necessitating a state-wise SDG index to help identify specific progress and areas needing focus.
With the above context of sustainable development, the Millennium Development Goals (MDGs) were framed in September 2000 during the United Nations Millennium Summit. At this summit, world leaders from 189 countries adopted the United Nations Millennium Declaration. The MDGs consisted of eight global development goals aimed at addressing issues such as poverty, hunger, education, gender equality, child mortality, maternal health, combating diseases, environmental sustainability, and global partnership, with the initial target for achieving these goals set as 2015. The idea of the Sustainable Development Goals (SDGs) emerged from the need to build on the progress and address the limitations of the Millennium Development Goals (MDGs), which were set to expire in 2015. The concept of SDGs was first proposed at the United Nations Conference on Sustainable Development, known as the Rio+20 Summit, held in Rio de Janeiro, Brazil, in June 2012. At the Rio+20 Summit, 193 member states recognized that development must be sustainable, balancing social, economic, and environmental aspects. They produced the outcome document, titled “The Future We Want”, which called for the establishment of a set of universal goals that would address global challenges more holistically. These new goals were intended to continue the work of the MDGs but expand their scope to include environmental sustainability, economic inequality, and social inclusivity, ensuring that no one is left behind [11]. The United Nations formally adopted the Sustainable Development Goals (SDGs) in September 2015 during the UN Sustainable Development Summit in New York, with a global commitment to achieve these 17 goals by 2030. The concept of the SDGs emerged after several years of negotiations involving policymakers, NGOs, and experts. The primary goal was to establish a set of targets for member nations to address economic, political, and environmental challenges [12]. Each of the 17 SDGs is accompanied by multiple subgoals, all designed to benefit both people and the planet. The SDGs aim to eliminate poverty, promote a better future for all, protect the environment, and ensure a dignified life for individuals in the member countries [13, 14].
C
A continuous latent distribution is not considered by most of the proposed ordinal inequality indexes or the median-preserving spread of Allison and
For latent between-group inequality inference, we provide an “inner” confidence set for the true set of quantiles at which the first latent distribution is better than the second latent distribution.
Otherwise, it is impossible to distinguish whether a difference in the ordinal distribution is due to a change in the latent distribution or a change in the thresholds.
Foster (2004), which Madden (2014) calls “the breakthrough in analyzing inequality with [ordinal] data” (p. 206); often no latent interpretation is provided, or else the “latent” distribution merely allows “rescaling” by changing the cardinal value assigned to each category.
For example, we want to learn about the latent health distribution, but individuals only report ordinal categories like “poor” or “good,” which each include a range of latent values between the corresponding thresholds.
C
A Monte Carlo exercise substantiates our theoretical results. (We consider three different population models: the outcomes are generated to be either fractional, non-negative, or continuous with an unrestricted support; the results for fractional and non-negative outcomes can be found in section B of the online appendix.) We find that RA estimators, both linear and nonlinear, improve over subsample means in terms of standard deviations and have small biases, even with fairly small sample sizes. Not surprisingly, the magnitude of the precision gains with RA methods generally depends on how well the covariates predict the potential outcomes. Finally, we conclude with an empirical application that uses the RA methods to estimate the lower bound mean willingness to pay for an oil spill prevention program along California’s coast.
In addition, we also extend the nonlinear RA results in NW (2021) to multiple treatment levels. We show that nonlinear RA of the QML variety is consistent if one chooses the conditional mean and objective functions appropriately from the linear exponential family of quasi-likelihoods. Furthermore, we also characterize the semiparametric efficiency bound for estimating the vector of PO means and show that SRA attains this bound, when the conditional means are correctly specified.
In terms of contribution, this paper provides a unified framework for studying regression adjustment in experiments, allowing for multiple treatments under the infinite population (or superpopulation) setting. Besides studying linear and nonlinear RA methods and making efficiency comparisons, we also characterize the semiparametric efficiency bound for all consistent and asymptotically linear estimators of the PO means and show that separate RA attains this bound under correct conditional mean specification. Because our results compare asymptotic variance matrices for different PO mean estimators, the efficiency results derived here extend to linear and (smooth) nonlinear functions of the PO means. We also fill an important gap in the regression adjustment literature by studying pooled nonlinear methods which have not been discussed elsewhere. We show how nonlinear pooled RA is consistent whenever nonlinear separate RA is, making it an important alternative when the researcher lacks sufficient degrees of freedom for estimating separate slopes. On the practical side, the proposed RA estimators are easy to implement with common statistical software like Stata (see online appendix D for reference).
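As a minimal sketch of the separate (linear) RA estimator for the potential-outcome means with multiple treatment levels (fit the outcome regression within each arm, then average the fitted values over the full sample), assuming simulated data and omitting the nonlinear QML variants and standard errors:

```python
import numpy as np

def separate_ra_po_means(y, treat, X):
    """Separate (full-interaction) linear regression adjustment: run OLS of Y on
    covariates within each treatment arm, then average the fitted values over
    the entire sample to estimate each potential-outcome (PO) mean."""
    Xd = np.column_stack([np.ones(len(y)), X])
    po_means = {}
    for a in np.unique(treat):
        beta, *_ = np.linalg.lstsq(Xd[treat == a], y[treat == a], rcond=None)
        po_means[int(a)] = float((Xd @ beta).mean())
    return po_means

# Toy experiment with three treatment levels and covariates predictive of Y
rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 2))
treat = rng.integers(0, 3, size=n)
y = 1.0 + 0.5 * treat + X @ np.array([1.0, -0.5]) + rng.normal(size=n)

po = separate_ra_po_means(y, treat, X)
print(po)                              # roughly {0: 1.0, 1: 1.5, 2: 2.0}
print("ATE of level 1 vs 0:", po[1] - po[0])
```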
In this section, we generalize the efficiency result with regression adjustment in experiments for multiple treatments by providing an efficiency benchmark for estimating the vector of PO means. We characterize the efficient influence function and the semiparametric efficiency bound (SEB) for all regular and asymptotically linear estimators of $\bm{\mu}$. The SEB is useful as it provides the semiparametric analogue of the Cramér-Rao lower bound for parametric models. It gives us a standard against which to compare the asymptotic variance of any regular estimator of the potential outcome means, including the RA estimators studied in the previous sections.
et al. (2008). We further expand our set of results to include the semiparametric efficiency bound for all CAN estimators of the PO means and establish that SRA of the QML variety attains the efficiency bound when the conditional means are correctly specified. Another paper that studies nonlinear RA for estimating the ATE is Rosenblum and Van
B
Sections 4 and 5 provide theoretical justifications for our procedure. Section 4 establishes that our test is asymptotically valid and—whenever the selection rule satisfies a condition we call improvement convergence—it is also consistent. These results build on a bootstrap consistency lemma that we develop to handle two specific features of our setting: First, our test statistic is constructed using absolute values, and it is well-known that bootstrap consistency fails at points of non-differentiability (see e.g., Fang and Santos, 2019); second, the fact that we impose very few restrictions on the selection rule means that the distribution of our test statistics may vary with the sample size and are not guaranteed to “settle down” in the limit. Since (to the best of our knowledge) generally available bootstrap consistency results do not cover our exact setting, we develop a new result building on the work of Mammen (1992) (see Appendix B.2 for details).
Suppose a firm’s algorithm is the subject of a potential disparate impact case. We consider a game between two players: an analyst auditing the firm and a policymaker. There is a fixed statistical test of exact size $\alpha \in (0,1)$, which produces a $p$-value from any given train and test split. The policymaker chooses between two procedures $s_1$ and $s_2$ based on this test. The first corresponds to a single train-test split, where the null of no fairness-improvability is rejected if the resulting $p$-value is less than $\alpha$. The second corresponds to $K$ train-test splits, where the null is rejected if the median $p$-value across these splits is less than $\alpha/2$ (i.e., our proposed method). The analyst observes which procedure is chosen, and repeats that procedure $m \in \mathbb{Z}_{+}$ times at a cost of $c_1(m)$ for procedure 1 or $c_2(m)$ for procedure 2, where $c_1$ and $c_2$ are increasing and weakly convex functions. (For example, the specifications $c_1(m) = \gamma m$ and $c_2(m) = \gamma K m$ would give the analyst a constant cost $\gamma$ for each train-test split.) The analyst reports the $p$-value from one of these repetitions, and this reported $p$-value determines whether the null is rejected. Crucially, we suppose that the hypothesis test is interpreted as if $m=1$, which is the standard convention. (This distinguishes our model from related works such as Henry (2009), Felgenhauer and Loerke (2017), Henry and Ottaviani (2019), and Herresthal (2022), in which an agent gathers information through hidden testing or experimentation, and subsequently chooses whether to disclose his findings. In those papers, the principal or sender (the equivalent of our "policymaker") updates his beliefs about an unknown payoff-relevant state given the agent's report and equilibrium strategy. In our model, payoffs are instead directly determined by whether the null is rejected at the reported $p$-value.)
In this section, we thus formalize the intuition that repeated sample-splitting provides stronger safeguards against manipulation than single splits, thereby providing a theoretical justification for this methodological choice. Our microfoundation is a game between a policymaker who sets the statistical procedure and an analyst who can (secretly) repeat the procedure multiple times. Within this framework, we define what it means for a test to be robust to manipulation. Section 5.1 presents this model, and Section 5.2 proves that repeated sample-splitting is more robust to manipulation than single train-test splits. While our motivation comes from disparate impact testing, these results apply to any valid test, making them relevant beyond our specific setting.
Section 5 provides a game-theoretic framework to explain how our use of repeated sample-splitting leads to a test that is more robust to manipulation. We study a game between an analyst and a policymaker, where the policymaker chooses the statistical procedure, and the analyst chooses a number of times to repeat this procedure. The analyst then selects one $p$-value to report, which determines if the null hypothesis is rejected. We suppose that the analyst would like to reject the null even when it holds, and define a test to be more robust to manipulation than another if it leads to a lower probability of (incorrect) rejection under the null. We prove that aggregating $p$-values is indeed more robust to manipulation than using a single split. Although this microfoundation is particularly relevant in disparate impact testing, where the analyst may be incentivized to reject the null hypothesis even when it is true, our formal results hold for any valid test and thus apply more broadly.
We are interested in settings where the analyst would like to conclude that the algorithm is fairness-improvable even when it is not. With this kind of manipulation in mind, we condition on the state of the world in which the null hypothesis holds. The analyst’s payoff is $1 - c_i(m)$ if the null is rejected under procedure $s_i$ and $-c_i(m)$ otherwise, while the policymaker’s payoff is 0 if the null is rejected and 1 otherwise.
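A minimal sketch of the aggregation rule itself, treating the single-split test as a black box that returns a $p$-value (the stand-in below simply draws a uniform $p$-value, as a valid test would under the null); the choice $K=21$ is illustrative, and the $\alpha/2$ threshold reflects the factor-of-two cost of aggregating by the median described above:

```python
import numpy as np

def repeated_split_test(data, one_split_pvalue, K=21, alpha=0.05, seed=0):
    """Run the single-split test on K independent train/test splits and reject
    the null if the median p-value is below alpha / 2 (procedure s2 above)."""
    rng = np.random.default_rng(seed)
    pvals = [one_split_pvalue(data, rng) for _ in range(K)]
    median_p = float(np.median(pvals))
    return median_p, median_p < alpha / 2

# Stand-in single-split test: under the null a valid test's p-value is
# (sub-)uniform on [0, 1]; here we simply draw a uniform p-value.
def dummy_single_split_pvalue(data, rng):
    return rng.uniform()

median_p, reject = repeated_split_test(None, dummy_single_split_pvalue)
print(f"median p-value = {median_p:.3f}, reject at alpha = 0.05: {reject}")
```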
C
$x^{*}=\lim_{\delta\uparrow 1}\sum_{t=0}^{\infty}\delta^{t}(x_{t}-x_{t-1})=\lim_{\delta\uparrow 1}\Big[\sum_{t=0}^{\infty}(\delta^{t}-\delta^{t+1})x_{t}\Big]=\lim_{\delta\uparrow 1}(1-\delta)\sum_{t=0}^{\infty}\delta^{t}x_{t}.$ ∎
This section studies Blackwell equilibria under perfect monitoring. First, we derive necessary conditions that such equilibria must satisfy. This leads to a modified notion of minmax payoff, and to a folk theorem relative to this notion.
In light of this, if we can construct a Blackwell SPNE whose on-path play yields an undiscounted average payoff of $v$, then $v$ is a limit Blackwell payoff. This leads to the proof of the folk theorem.
The proof proceeds as that of Theorem 1. The main difference is that on-path play yields payoffs that converge to the target payoff. To this end, we construct sequences of pure actions that approximate the target payoff, yet preserve individual rationality for low discounting.
We say that a strategy profile is a Blackwell SPNE if it is a Blackwell SPNE above some $\underline{\delta}<1$; similarly, we refer to a Blackwell SPNE payoff.
B
However, when the bidders incur time costs, the payoff equivalence principle no longer holds. In this more interesting case, we provide comparative static analyses based on the local properties of the auctioneer utility function in our time-costly setting.
The first-order derivative of the expected utility for the auctioneer in the Istanbul Flower Auction is given by
Recall that $EU_{A}^{D}=EU_{A}^{F}(1)$. Hence, to demonstrate that the Istanbul Flower Auction is preferred by the auctioneer to the Dutch auction, it is sufficient to show that the expected utility of the auctioneer is decreasing at $s=1$ (or the derivative of the expected utility at $s=1$ is negative), since then it must reach a higher utility for some interior $s$. Similarly, since $EU_{A}^{E}=EU_{A}^{F}(0)$, the Istanbul Flower Auction is better than the English auction if the expected utility for the auctioneer is increasing at $s=0$ (or the derivative of the expected utility at $s=0$ is positive). That is, the payoff comparison between the auction formats follows naturally from establishing the sign of the derivative of the auctioneer’s expected utility with respect to the starting price at the corresponding boundary values.
When $s=1$, as in the Dutch auction, the price continuously descends from $1$ until the first bidder stops the clock and wins the item. In this case, the ex-ante expected utilities for the auctioneer and the bidders, the social welfare and the expected duration are given by
Now consider the other lines in Figure 2 and Figure 3, which display the auctioneer’s relative benefit of the Istanbul Flower Auction over the Dutch auction when the starting price is set to maximize the expected utility for the bidders (green lines), the expected social welfare (yellow lines) or to minimize the expected duration of the auction (red lines). We observe that the auctioneer can still significantly benefit from the Istanbul Flower Auction when bidders are very impatient. In contrast, the auctioneer can be slightly worse off (no more than 3%) in the Istanbul Flower Auction than in the Dutch auction when bidders are relatively patient. This suggests that the significant advantage of the Istanbul Flower Auction over the Dutch Auction is robust to alternative optimization targets. The auctioneer incurs very little loss and typically benefits a lot from the Istanbul Flower Auction compared to the Dutch auction when she prioritizes the satisfaction of bidders, the social welfare, or the speedy sale.
A
$(0.1;0.01;5;5;10^{3})$
$(0.1;0.01;5;5;10^{3})$
$(0.01;0.01;50;5;10^{3})$
$(0.01;5;10^{3})$
$(0.01;0.1;5;5;10^{3})$
D
As is clear from this review of the literature, the core of the statistical results discussed here should not come as a surprise. However, their implications for testing for equal predictive ability remain relevant, in particular when good forecasts are compared against poor benchmarks with sizeable autocorrelations, as our results suggest that in these cases the blind application of the DM test leads to incorrect conclusions. Even more importantly, it may be more difficult to reject the null hypothesis when good forecasts are compared to poor competitors than when the same good forecasts are compared to competitors that are nearly as good. This perverse and undesirable feature of the DM test should be kept in mind in forecast evaluation exercises.
In this paper, we study the performance of the DM test when the assumption of weak autocorrelation of the loss differential does not hold. We characterise strong dependence as local to unity, as in Phillips (1987) and Phillips and Magdalinos (2007). This definition is at odds with the more popular characterisation in the literature that treats strong autocorrelation and long memory as synonyms. Local to unity, however, seems well suited to derive reliable guidance when the sample is not very large, as is the case in many applications in economics. With this definition, the strength of the dependence is also determined by the sample size: a stationary AR(1) process with a root close to unity may be treated as weakly dependent in a very large sample, but standard asymptotics may be poor guidance in smaller samples, where local to unity asymptotics may be more informative. We show that the power of the DM test decreases as the dependence increases, making it more difficult to obtain statistically significant evidence of superior predictive ability against less accurate benchmarks. We also find that beyond a certain threshold the test has no power and the correct null hypothesis is spuriously rejected. These results caution us to seriously consider the dependence properties of the loss differential before applying the DM test, especially when naive benchmarks are considered. In this respect, a unit root test could be a valuable diagnostic for the preliminary detection of critical situations.
analysing the properties of the realised forecast losses reported in Table 4. The average realised losses associated with the AR(1) forecast are lower, at least for forecasts up to six quarters, and less dispersed than those of the two benchmarks, so they are, in this sense, more precise. Moreover, the losses from the AR(1) predictions are not very correlated at short forecasting horizons. As we increase the forecasting horizon the dependence increases, but the autocorrelations still decay reasonably quickly. On the other hand, the two benchmarks display large and persistent autocorrelations in their realised forecast losses at all forecasting horizons. We further investigate the dependence in the realised losses using the ADF test: the difference in persistence observed in the sample autocorrelations of the realised losses is confirmed by the outcome of the ADF test, where the unit root hypothesis is rejected only for the forecasts from the AR(1) model (and only at short horizons).
In this section, we investigate the properties of the DM statistic in the neighbourhood of unity in a Monte Carlo exercise. We consider the DGP
The paper is organised as follows. We formally introduce the DM test in Section 2, and derive the limit properties of the DM statistic in the presence of dependence in Section 3. We investigate the practical implications of our theoretical findings in a Monte Carlo exercise (Section 4) and in the empirical application (Section 5). Details on the assumptions of the DGP and formal derivations are in the Appendix.
D
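The following sketch illustrates the kind of Monte Carlo exercise described above, assuming a local-to-unity AR(1) process $d_t=\rho d_{t-1}+\varepsilon_t$ with $\rho=1-c/T$ for the loss differential (an assumption for illustration; the paper's exact DGP is not reproduced here) and a DM statistic based on a Newey-West long-run variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def dm_statistic(d, bandwidth=None):
    # Diebold-Mariano statistic: mean loss differential divided by the
    # square root of a Newey-West (Bartlett kernel) long-run variance / T.
    d = np.asarray(d, dtype=float)
    T = len(d)
    if bandwidth is None:
        bandwidth = int(np.floor(4 * (T / 100) ** (2 / 9)))
    dc = d - d.mean()
    lrv = np.sum(dc**2) / T
    for j in range(1, bandwidth + 1):
        w = 1 - j / (bandwidth + 1)
        lrv += 2 * w * np.sum(dc[j:] * dc[:-j]) / T
    return d.mean() / np.sqrt(lrv / T)

def simulate_loss_differential(T, c, mu=0.0):
    # Local-to-unity AR(1): rho = 1 - c/T; mu shifts the mean under H1.
    rho = 1 - c / T
    e = rng.standard_normal(T)
    d = np.zeros(T)
    for t in range(1, T):
        d[t] = rho * d[t - 1] + e[t]
    return mu + d

T, reps = 200, 2000
for c in (50.0, 10.0, 1.0):  # smaller c -> stronger dependence
    rejections = np.mean(
        [abs(dm_statistic(simulate_loss_differential(T, c))) > 1.96
         for _ in range(reps)]
    )
    print(f"c = {c:5.1f}: rejection rate under H0 = {rejections:.3f}")
```

Smaller $c$ corresponds to stronger dependence; the printed rejection rates (nominal level $5\%$, with $\mu=0$ so the null is true) show how often the correct null hypothesis is spuriously rejected as the root approaches unity.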
This task utilizes data from files such as Sales Order.csv and Production.csv to model demand dynamics for specific nodes, for example, SOS008L02P. The input data typically includes recent time-series observations, while the outputs predict demand for future time intervals. Techniques such as sliding window methods are employed to extract relevant historical features, which are then fed into machine learning models or statistical algorithms. As a pivotal component of supply chain analytics, this forecasting task enhances the performance of individual nodes and contributes to the overall efficiency and resilience of the supply chain system.
Performance on Complex Relationships: Despite the absence of explicit connections between nodes, the multi-layer architecture captures complex, non-linear relationships effectively. This demonstrates the model’s ability to uncover meaningful patterns in datasets where dependencies are not explicitly encoded.
Hidden Layers: Each hidden layer applies a linear transformation followed by a non-linear activation function, enabling the network to model complex patterns. The transformation at each layer $l$ can be expressed as:
A Multilayer Perceptron (MLP) is a type of artificial neural network designed as a feedforward network, meaning that information flows in one direction—from the input layer, through one or more hidden layers, and finally to the output layer. Each layer is fully connected, with every neuron (node) in one layer connected to every neuron in the next. The MLP is a powerful model capable of learning and representing complex non-linear relationships in data through its multi-layered architecture and non-linear activation functions.
Output Layer: This layer produces the network’s final output, with a task-specific activation function such as softmax for classification or linear for regression tasks.
C
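As a concrete illustration of the MLP architecture described above, here is a minimal NumPy forward pass; the layer sizes, ReLU activation, and single regression output are assumptions chosen for illustration, not the configuration used in the forecasting task.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mlp_forward(x, weights, biases, output_activation=None):
    # Each hidden layer: h = activation(W h + b); the output layer uses a
    # task-specific activation (softmax for classification, identity/linear
    # for regression).
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    out = h @ weights[-1] + biases[-1]
    return output_activation(out) if output_activation else out

# Example: 8 sliding-window features, two hidden layers, one regression
# output (e.g. next-period demand for a node); all sizes are illustrative.
sizes = [8, 32, 16, 1]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
x = rng.standard_normal((5, 8))          # a batch of 5 feature vectors
print(mlp_forward(x, weights, biases))   # linear output for regression
```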
The mixing matrix $A$ (or, equivalently, the structural parameter matrix $\Lambda$) is said to be identified up to a column permutation and scaling if $\widetilde{A}=ADP$ (or equivalently $\widetilde{\Lambda}=PD\Lambda$) can be identified, where $D$ is a diagonal matrix and $P$ is a permutation matrix.
Scale indeterminacy is a routine and generally minor issue in linear models; it is typically addressed by normalizing the diagonal of $\Lambda$ to unity. While this normalization resolves scale indeterminacy, the situation becomes more intricate when permutation indeterminacy is present. Overcoming permutation indeterminacy is often the principal hurdle between the aforementioned identification result and point identification. Fortunately, economic theory can be instrumental in pinpointing the correct permutation. Once an economically meaningful ordering or “labeling” is established, a diagonal normalization can then be used to eliminate scale indeterminacy.
Although resolving permutation indeterminacy is not the central focus of this paper, we outline two practical examples of how one might obtain point identification in structural models, guided by economic reasoning:
Assumption A1) is standard, indicating that the observable variables $\mathbf{X}$ can be written as a linear combination of the structural errors $\mathbf{S}$ via the matrix inverse $\Lambda^{-1}=A$. The matrix $A$ is commonly referred to as the mixing matrix. While this assumption is quite natural in simultaneous equation models, it is less convenient in factor model settings where the factor loading matrix is typically not square. In Section 5, we show how to relax this assumption to require only that the columns of $\Lambda$ be linearly independent.
Throughout this paper, we focus on settings similar to A*4 and A*5, where economic-theory-based procedures remove both scale and permutation indeterminacies and satisfy the consistent labeling criterion. Consequently, our main goal is to show that, under Assumptions A1)–A3), $A$ is identified up to column permutation and scaling, and that a consistent and asymptotically normal estimator of $\widetilde{A}$ and $\widetilde{\Lambda}$ can be constructed.
A
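A small numerical illustration of the identification statement above, using an assumed $2\times 2$ mixing matrix: post-multiplying $A$ by a diagonal matrix $D$ and a permutation matrix $P$, while transforming the independent structural errors accordingly, leaves the observables $\mathbf{X}=A\mathbf{S}$ unchanged, which is exactly the column permutation and scaling indeterminacy $\widetilde{A}=ADP$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mixing matrix and independent, non-Gaussian structural errors.
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
S = rng.laplace(size=(2, 10_000))

D = np.diag([2.0, -0.5])            # arbitrary column rescaling
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # swap the two columns

A_tilde = A @ D @ P                 # observationally equivalent mixing matrix
S_tilde = np.linalg.inv(D @ P) @ S  # correspondingly relabeled/rescaled errors

X = A @ S
X_tilde = A_tilde @ S_tilde
print(np.allclose(X, X_tilde))      # True: same observables X = A S
# Without further restrictions (e.g. a diagonal normalization of Lambda and
# an economically motivated labeling), A is therefore only identified up to
# the column permutation and scaling A_tilde = A D P.
```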
The combination of reduced hallucinations, currency of information, transparency, and customizability makes Search + RAG a robust and flexible tool for creating accurate and comprehensive cultural profiles, which serve as the foundation for our Synthetic Cultural Agents (SCAs).
In constructing the cultural profiles, we considered key socio-cultural-economic factors including lifestyle, cultural practices, economic systems, political ideologies, social organization, kinship structures, and core values [Malinowski, 1922, Geertz, 1973]. These comprehensive profiles inform the behavior of our SCAs, which are then prompted to participate in established economic experiments such as the dictator game, ultimatum game, and tests for the endowment effect.
Our approach presents a new methodology to start addressing these challenges by creating Synthetic Cultural Agents (SCAs), LLM-based models that represent specific cultural profiles. Using a combination of web scraping, LLMs, and retrieval-augmented generation (RAG) prompting, we construct cultural profiles for six small-scale societies: the Hadza, the Machiguenga, the Tsimané, the Aché, the Orma, and the Yanomami. We then use these profiles to instantiate LLM agents and subject them to three classic economic experiments: the dictator game [Forsythe et al., 1994], the ultimatum game [Güth et al., 1982], and the endowment effect [Kahneman et al., 1991].
Once we have generated comprehensive cultural profiles using the Search + RAG methodology, we use these profiles to instantiate Synthetic Cultural Agents (SCAs) capable of participating in economic experiments. This process involves two key steps: (1) instantiating the SCA with the cultural profile, and (2) subjecting the SCA to experimental tasks.
Our methodology for creating and experimenting with Synthetic Cultural Agents (SCAs) consists of three key steps: (1) Building a Knowledge Base, (2) Constructing a Cultural Profile, and (3) Running Experiments. This process, illustrated in Figure 1, allows for the creation of SCAs that can represent specific small-scale societies in experimental settings and participate in economic decision-making tasks.
C
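A schematic sketch of the three-step pipeline described above (knowledge base, cultural profile, experiment). Every helper in it (scrape_sources, retrieve, llm_complete, and the game prompt) is a hypothetical, stubbed placeholder rather than the authors' implementation; it only shows how the pieces fit together.

```python
# Schematic sketch; all helpers are hypothetical placeholders, stubbed so the
# example runs end to end.

def scrape_sources(society: str) -> list[str]:
    # Step 1 (knowledge base): in practice, web scraping of ethnographic
    # sources; stubbed here with a single placeholder passage.
    return [f"(ethnographic passage about the {society})"]

def retrieve(corpus: list[str], query: str, k: int = 3) -> list[str]:
    # Retrieval step of RAG: return the k passages most relevant to the
    # query; stubbed as a simple truncation.
    return corpus[:k]

def llm_complete(prompt: str) -> str:
    # Placeholder for an LLM call; stubbed with a fixed response.
    return "5"

def construct_cultural_profile(society: str) -> str:
    # Step 2 (cultural profile): summarize retrieved passages on lifestyle,
    # economy, kinship, and core values into a structured profile.
    corpus = scrape_sources(society)
    passages = retrieve(corpus, query=f"economic life and core values of the {society}")
    return llm_complete("Write a cultural profile based on: " + " ".join(passages))

def run_dictator_game(profile: str, endowment: float = 10.0) -> float:
    # Step 3 (experiment): instantiate the SCA with the profile and elicit
    # an allocation decision in a dictator game.
    prompt = (
        f"You are a member of the society described here: {profile}\n"
        f"You have received {endowment} units. How many units do you give to "
        f"an anonymous member of your community? Answer with a single number."
    )
    return float(llm_complete(prompt))

profile = construct_cultural_profile("Hadza")
print(run_dictator_game(profile))
```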
These examples demonstrate the diversity in the structure and complexity of knowledge graphs across different types of economic research. They illustrate how our measures capture key aspects of the narratives, such as the breadth of topics covered, the depth of causal analysis, and the interconnectedness of concepts.
Taken together, these measures of novelty and gap filling enrich our understanding of how each paper pushes the frontier of economic knowledge. By systematically tracking new edges, new paths, new subgraphs, and underexplored concept pairs, we capture distinct facets of a paper’s originality and the extent to which it addresses previously overlooked topics or mechanisms. As shown in Section 6, these indicators prove to be significant predictors of both publication outcomes and the long-run citation impact of research.
Thus far, our measures of narrative complexity focus on the internal structure of each paper’s knowledge graph, such as the number of edges, unique paths, and longest causal chains. We now introduce a complementary set of measures capturing how each paper pushes the frontier of economic research by contributing new concepts, links, or underexplored intersections. These measures fall into four categories: novel edges, path-based novelty, subgraph-based novelty, and gap filling. Each can be computed for the non-causal subgraph or restricted to the causal subgraph. In the latter case, these measures capture novelty or gap filling specifically within the subset of claims that are supported by causal inference methods.
To systematically evaluate these knowledge graphs, we develop three broad categories of measures. First, we track the narrative complexity of a paper, including the breadth and depth of claims. Second, we examine novelty and contribution, capturing whether a paper’s relationships are genuinely new or whether it “fills gaps” previously underexplored in the literature, distinguishing between causal and non-causal contributions. Third, we consider conceptual importance and diversity, focusing on how centrally a paper’s concepts sit within the overall network of economic ideas and whether the paper balances multiple causes (sources) with multiple outcomes (sinks). Additionally, by distinguishing these measures based on claims in the non-causal subgraph from those in the causal subgraph, we parse out the difference between general narrative features and those supported by rigorous identification.
In this section, we define quantitative measures that characterize the structure and content of each paper’s knowledge graph. These measures are grouped into three categories, each capturing distinct aspects of how research is organized and perceived by readers and editors. Measures of narrative complexity assess the internal structure of each paper’s claims; novelty and contribution compare a paper’s ideas to the existing literature, identifying new concepts or intersections; and conceptual importance and diversity situate a paper’s nodes within the broader knowledge graph of economic research. (These categories are not exhaustive. Future work could explore additional dimensions, such as methodological rigor or theoretical integration, while refining the distinction between measures that are internal to a paper’s claims, e.g., narrative complexity, and those that are relative to the broader literature, e.g., novelty and centrality. This taxonomy serves as a starting point rather than a definitive framework.) Together, these measures set the stage for the empirical analysis in Section 6, where we link them to publication outcomes and citation impacts. This section examines how these measures vary across fields and over time, presenting descriptive trends and cross-sectional comparisons to motivate their relevance for understanding academic reception.
B
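To illustrate a few of the graph measures discussed above, here is a short networkx sketch computing the number of claims, the depth of the longest causal chain, and the edges that are novel relative to a cumulative graph of prior literature; the toy concepts and edges are invented for illustration.

```python
import networkx as nx

# A toy knowledge graph for one paper: directed edges are causal claims.
paper = nx.DiGraph()
paper.add_edges_from([
    ("minimum wage", "employment"),
    ("employment", "household income"),
    ("household income", "child test scores"),
])

# Narrative complexity: number of claims and depth of the longest causal chain.
n_edges = paper.number_of_edges()
longest_chain = nx.dag_longest_path(paper)   # valid because the toy graph is acyclic
depth = len(longest_chain) - 1

# Novelty: edges not present in the cumulative graph of prior literature.
prior = nx.DiGraph()
prior.add_edges_from([
    ("minimum wage", "employment"),
    ("employment", "household income"),
])
novel_edges = [e for e in paper.edges if not prior.has_edge(*e)]

print(f"claims: {n_edges}, causal depth: {depth}, novel edges: {novel_edges}")
```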
Note that we designed the simulations so that the total quantities under Nash remain constant across all degrees of (a)symmetry, resulting in a horizontal line at the value $1$ in Figure 2, panel (a), for the Nash equilibrium. The same is true for the alternating monopoly. Again, the trajectory of the simulation results is predicted best by equal relative gains and Kalai-Smorodinsky (with Nash deviation profits). Thus, these bargaining solutions not only provide the best fit in terms of overall (level) prediction accuracy but also a reasonably good fit for the relative change.
Overall, concepts implicitly embedding a bargaining idea, such as equal relative gains and Kalai-Smorodinsky, provide a strong explanation for the outcomes generated by algorithms, despite underestimating total quantity. Similarly, while the static Nash equilibrium underestimates the impact of asymmetry on total quantity, it overestimates the effect on profits. These opposing biases nearly offset each other, resulting in relatively accurate predictions for total welfare.
Note that we designed the simulations so that the total quantities under Nash remain constant across all degrees of (a)symmetry, resulting in a horizontal line at the value $1$ in Figure 2, panel (a), for the Nash equilibrium. The same is true for the alternating monopoly. Again, the trajectory of the simulation results is predicted best by equal relative gains and Kalai-Smorodinsky (with Nash deviation profits). Thus, these bargaining solutions not only provide the best fit in terms of overall (level) prediction accuracy but also a reasonably good fit for the relative change.
Total profits are predicted spot on by the equal relative gains solutions and Kalai-Smorodinsky with Nash disagreement profits for all degrees of asymmetry. Consequently, the average squared distance in Table 3 is small. In particular, they perform remarkably better than predicted by the oligopoly benchmarks. Equal relative gains and Kalai-Smorodinsky also provide relatively good predictions for total welfare.
For total profits, equal relative gains and Kalai-Smorodinsky capture the comparative statics of asymmetry perfectly. Similarly, the effect of asymmetry on total welfare is captured extremely well by the Nash prediction, as well as by the bargaining concepts of equal relative gains and Kalai-Smorodinsky (with Nash). However, the good fit is masked by slight over- and underestimation of consumer surplus and total profits, respectively. In particular, the Nash prediction underestimates the effect on consumer surplus and overestimates the impact on total profits, while the opposite holds for the bargaining solutions.
D
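As an illustration of the Kalai-Smorodinsky prediction with Nash disagreement profits referenced above, the sketch below computes the KS point for an assumed symmetric linear Cournot duopoly (inverse demand $p=1-Q$, zero costs), where any split of the monopoly profit $1/4$ is feasible; the simulations' actual parameterization is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed symmetric linear Cournot duopoly: p = 1 - Q, zero costs. Industry
# profit depends only on Q, so the Pareto frontier of profit pairs is
# pi2 = 1/4 - pi1 (any split of the monopoly profit is feasible).
def frontier(pi1):
    return 0.25 - pi1

# Disagreement point: static Cournot-Nash profits (1/9 each).
d = np.array([1.0 / 9.0, 1.0 / 9.0])

# Ideal profits: the best each firm can get on the frontier while leaving
# the rival at least its disagreement profit.
m = np.array([0.25 - d[1], 0.25 - d[0]])

# Kalai-Smorodinsky: the frontier point where relative gains are equal,
# (pi1 - d1)/(m1 - d1) = (pi2 - d2)/(m2 - d2).
def ks_condition(pi1):
    pi2 = frontier(pi1)
    return (pi1 - d[0]) / (m[0] - d[0]) - (pi2 - d[1]) / (m[1] - d[1])

pi1_ks = brentq(ks_condition, d[0], m[0])
print(pi1_ks, frontier(pi1_ks))   # 0.125, 0.125 in this symmetric example
```

In this symmetric example the equal relative gains and Kalai-Smorodinsky solutions coincide, and both select the equal split of the monopoly profit.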