diff --git "a/0NAyT4oBgHgl3EQf0_na/content/tmp_files/load_file.txt" "b/0NAyT4oBgHgl3EQf0_na/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/0NAyT4oBgHgl3EQf0_na/content/tmp_files/load_file.txt" @@ -0,0 +1,616 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf,len=615 +page_content='A Closed-Form EVSI Expression for a Multinomial Data-Generating Process Adam Fleischhacker∗, Pak-Wing Fok†, Mokshay Madiman‡, Nan Wu§ January 3, 2023 Abstract This paper derives analytic expressions for the expected value of sample information (EVSI), the expected value of distribution informa- tion (EVDI), and the optimal sample size when data consists of inde- pendent draws from a bounded sequence of integers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content=' Due to challenges of creating tractable EVSI expressions, most existing work valuing data does so in one of three ways: 1) analytically through closed-form ex- pressions on the upper bound of the value of data, 2) calculating the expected value of data using numerical comparisons of decisions made using simulated data to optimal decisions where the underlying data distribution is known, or 3) using variance reduction as proxy for the uncertainty reduction that accompanies more data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content=' For the very flex- ible case of modelling integer-valued observations using a multinomial data-generating process with Dirichlet prior, this paper develops ex- pressions that 1) generalize existing beta-Binomial computations, 2) do not require prior knowledge of some underlying “true” distribution, and 3) can be computed prior to the collection of any sample data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content=' 1 Introduction The seminal work of [34] introduced preposterior analysis, a Bayesian recipe for estimating the value of information (VOI) prior to knowing the informa- ∗Department of Business Administration, University of Delaware, Newark, DE 19716, email: ajf@udel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content='edu †Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, email: pakwing@udel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content='edu ‡Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, email: madiman@udel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content='edu §Institute for Financial Services Analytics, University of Delaware, Newark, DE 19716, email: nanw@udel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content='edu 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content='00729v1 [stat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NAyT4oBgHgl3EQf0_na/content/2301.00729v1.pdf'} +page_content='ME] 2 Dec 2022 tion’s content.' 
The expected value of sample information (EVSI), a particularly valuable VOI computation, values the information contained in sample observations prior to their collection. [34] include many closed-form and oft-used expressions for calculating EVSI under the assumption of quadratic loss. One such expression is for a Bernoulli data-generating process with beta prior distribution (a.k.a. a beta-binomial model); each observation being either zero or one [34, Table 6.2, p. 191]. In this paper, we generalize the beta-binomial EVSI expression beyond binary-valued observations to the case where each data point is drawn from a bounded sequence of integers. These results expand the availability of tractable VOI expressions to a useful scenario where previously value could only be approximated or bounded when a closed-form expression was needed.

Depending on a modeler's choices of actions, states of uncertainty, loss (or utility) functions, and probability models, tractable calculations of VOI may exist, but intractable formulations, especially for EVSI, are much more common. In fact, reputed statistician Dennis Lindley has remarked that the question of sample size "is embarrassingly difficult to answer" due to difficulties calculating EVSI [26].
More generally, [14] shows that simply characterizing the relationship between information and value is challenging; [14]'s work dispels the idea that information value will reliably exhibit monotonic relationships with information value determinants such as action flexibility, risk aversion, or a decision maker's wealth.

While for some EVSI and VOI problems closed-form solutions are attainable [34, 5, 4], value of information solutions are often difficult to formulate. Hence, many papers are known for their ability to characterize aspects of VOI expressions, such as the distributional properties of the expected value of perfect information (EVPI) [28], the impact of an exogenous variable on EVPI [20], and the additivity of information value when multiple sources of uncertainty exist [21]. EVSI calculations, in particular, often result in intractable expressions of multiple integrals where only numerical methods can yield results [25]. Even then, many numerical methods still require further simplifying assumptions (see, e.g., [36]). While it is possible to approximate VOI computations via normal approximations (see, e.g., [30, 19]) or using a computationally intense simulation-based methodology (see, e.g., [10, 37]), closed-form expressions yield instantaneous and accurate value computations with more interpretable insights regarding the effects of prior beliefs and sample sizes.

In this paper, we provide a new EVSI calculation for a flexible (i.e.
multinomial) data-generating process that adheres to three desiderata outlined in [34, p. 44]:

Tractable: EVSI is easily calculated using a closed-form expression.

Rich: A decision maker's prior beliefs and information are readily incorporated as part of the calculation.

Interpretable: The expression for EVSI provides insight as to the effects of prior beliefs and sample size choices on the expected value of a sample.

Generating Process    Conjugate Prior                          Source
Bernoulli(θ)          θ ∼ Beta                                 [34], [32]
Poisson(λ)            λ ∼ Gamma                                [34]
Normal(µ, σ)          µ ∼ Normal, σ known                      [34]
                      µ known, σ² ∼ inv. Gamma                 [34]
                      σ² ∼ inv. Gamma, µ|σ² ∼ Normal           [34]
Multinomial(t)¹       t ∼ Dirichlet                            This Paper

Table 1: Position of this paper in comparison to other tractable EVSI calculations.
¹ With support interpreted as a sequence of integer values.

Shown in Table 1, our point of departure is generalizing the EVSI calculation for a Bernoulli data-generating process with beta prior (a.k.a. a beta-binomial model) to the case of a multinomial data-generating process with Dirichlet prior. Rich treatment and illustrative examples surrounding EVSI calculations for the beta-binomial conjugacy can be found in [15]. Additionally, [32] provide explicit closed-form value of information computations for the beta-binomial case and is very close in spirit to this work, but does not investigate the Dirichlet-multinomial setting. In relation to the multinomial sampling process we explore in this paper, existing work has focused on non-utility based approaches where data is valued based on its ability to bound a parameter of interest within a certain level of precision [1, 6].
Our approach, in contrast, extends the utility-based valuation of sampling to a multinomial sampling environment to yield closed-form expressions for both EVSI and the expected value of distribution information (EVDI). Publication of analytically tractable expressions will be able to supplant the still-present usage of Monte Carlo simulation in multinomial settings (see, e.g., [38]).

When closed-form EVSI expressions are unavailable, quantification of value created through uncertainty reduction typically relies on one of three techniques: 1) closed-form expressions on the upper bound of the value of data, 2) simulated comparisons of decisions made by an oracle who knows the underlying data distribution to decisions made by a less-informed decision maker, or 3) using variance reduction as a proxy for how data reduces underlying uncertainty in the data-generating process. For examples of the first type, [27] bound EVPI for a risk-averse decision maker and [40] place an upper bound on the value of knowing the true distribution when one already knows the mean and variance of that distribution. Examples of the second type often compare a Bayesian updating procedure to a known optimal solution [8, 29, 7, 35]. Lastly, computing the value of variance reduction independent of the specific quantity of data is also seen within the literature [11, 22].

2 Problem Setup

Despite substantial efforts, notation for preposterior analysis has not been standardized and is often a matter of personal taste [33]. To aid the reader with this paper's notation surrounding its random variables and their realizations, we present the following summary breaking the notation into three levels of analysis:
1. Data/Sample. Data is an integer-valued random variable with support {0, 1, . . . , M}. Sample is a random vector referring to either a sequence of n data observations or a vector of counts representing the number of occurrences of each potential data value recorded in n observations.

   D: A random variable representing a single data observation.
   d: A single realization of D with integer-valued support: d ∈ {0, 1, . . . , M}.
   X ≡ (X_1, . . . , X_n): A random vector of n observations of D.
   x ≡ (x_1, . . . , x_n): A realization of data vector X.
   D^n: The support of X when n realizations are observed.
   n_k: The number of times that k ∈ {0, 1, . . . , M} appears in x.
   (n_0, n_1, . . . , n_M): A vector of counts of occurrences for each potential data value.

2. Data/Sampling Distributions. Data and sampling distributions are identical terms referring to the probability distribution governing the data-generating process. Data distribution refers to generating individual data points, and sampling distribution is preferred when talking about a sequence of observations.

   T ≡ (T_0, T_1, . . . , T_M): A random vector representing a data distribution. Random elements T_k are data distribution parameters representing the probability of a data realization being k.
   t ≡ (t_0, t_1, . . . , t_M): A realization of random vector T such that t_k = p(D = k) for k ∈ {0, 1, . . . , M}.
   t*: The "true" data distribution or sampling distribution; only knowable by an oracle.
   T : The space or set of all possible data distributions. T, t, t* ∈ T .

3. Prior/Posterior Distributions. Continuous multivariate probability distributions with domain of all possible data distributions.

   π: A prior from which data distributions are generated.
   π_X: A posterior that updates π in light of data X.

2.1 Modelling Data and Loss

Consider a data-generating process that generates independent and identically distributed samples from a bounded sequence of M + 1 integers. For notational simplicity, we rescale the sequence to be [M] ≡ {0, 1, . . . , M}.
For practical motivation, the data could represent product demand and the goal is to make accurate predictions for inventory control [39]. For the specific case of demand uncertainty, we note that there are asymmetric and other loss functions that would be preferred to the quadratic loss function used here, but closed-form expressions are not forthcoming for those cases.

The data-generating process is governed by an unknown data distribution, t, with discrete-finite support [M]. Thus the statistical model for the data-generating process is parameterized by the standard M-dimensional simplex of probabilities T = {t = (t_0, . . . , t_M) ∈ R^{M+1}_+ : t_0 + · · · + t_M = 1}; this infinite (but finite-dimensional) parameter space describes how we are labeling the potential data distributions. If the sample size of the data is n, we have n values x_1, . . . , x_n ∈ [M] being generated by the data-generating process. For a given t ∈ T , the associated data-generating process p^{(n)}_t assigns probability
p_t^{(n)}(x_1, \ldots, x_n) = \prod_{i=1}^{n} t_{x_i} \quad (1)

to this particular sequence of data values. In particular, if the sample size is 1, the data-generating process is simply given by p_t(d) ≡ p^{(1)}_t(d) = t_d, d ∈ [M]. It is clear that the number of occurrences of particular data values in the sample is a sufficient statistic for the model described, and that the sampling distribution for this sufficient statistic is just the multinomial model. Specifically, if n_d = |{1 ≤ i ≤ n : x_i = d}|, then (n_0, . . . , n_M) is a sufficient statistic, and we have, with obvious abuse of notation,

p_t(n_0, \ldots, n_M) = \binom{n}{n_0, \ldots, n_M} \prod_{d=0}^{M} t_d^{n_d}. \quad (2)

Note that n_0 + · · · + n_M = n by definition; so we do not write the superscript (n) when using the sufficient statistic to represent the data.

When making predictions for future data, ideally the action (or prediction) is close to the actual data realization. For tractability, we consider a quadratic terminal opportunity loss function for a single prediction to be of the following form:

\ell(d, a) = k(d - a)^2 \quad (3)

where k > 0 is a known constant, a is the action/prediction, and d ∈ [M] is the actual data realization.
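As a concrete, minimal sketch of eqs. (1)-(3) (ours, not the authors'; it assumes Python with numpy and scipy available), the following draws n i.i.d. observations from a data distribution t, forms the vector of counts, evaluates its multinomial probability, and computes the quadratic loss:

```python
import numpy as np
from scipy.stats import multinomial

rng = np.random.default_rng(0)

M, n = 5, 3                                    # support [M] = {0, ..., 5}, sample size
t = np.array([0.4, 0.1, 0.1, 0.1, 0.1, 0.2])   # an illustrative data distribution in T

# Eq. (1): each observation lands on d with probability p_t(d) = t_d
x = rng.choice(M + 1, size=n, p=t)

# Sufficient statistic (n_0, ..., n_M): counts of each value in the sample
counts = np.bincount(x, minlength=M + 1)

# Eq. (2): multinomial probability of the observed count vector
prob = multinomial.pmf(counts, n=n, p=t)

# Eq. (3): quadratic terminal opportunity loss with scale k
def loss(d, a, k=5.0):
    return k * (d - a) ** 2

print(x, counts, prob, loss(4, 1))             # loss(4, 1) = 45.0 when k = 5
```

With k = 5, loss(4, 1) returns 45, matching the worked example below.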
To briefly make the above notation more concrete, let's imagine forecasting product demand for a product that will sell between 0 and 5 units (M = 5). Each period's i.i.d. demand, d ∈ {0, 1, . . . , 5}, has an associated probability of occurrence, p_t(0), p_t(1), . . . , p_t(5), which is represented more compactly as t_0, t_1, . . . , t_5. The effectiveness of any action will be measured using quadratic loss scaled by a factor k such that if k = 5, d = 4, and a = 1, then ℓ(4, 1) = 45. The decision maker is contemplating the value of n = 3 observations, where generated data, (x_1, x_2, x_3), might be something like (0, 5, 0) and the associated sufficient statistic of counts, (n_0, . . . , n_5), would be (2, 0, 0, 0, 0, 1).
Note that t ≡ (t_0, t_1, . . . , t_5) parameterizes both the data-generating process of eq. (1) yielding (x_1, x_2, x_3) and the equivalent sampling process of eq. (2) yielding (n_0, . . . , n_5). As a result, we refer to t as both data distribution and sampling distribution depending on context.

2.2 Preposterior Analysis

For any data distribution t, define the expectation of loss as:

R(t, a) = E_{D|T=t}[\ell(D, a)] = \sum_{d=0}^{M} p_t(d)\,\ell(d, a), \quad (4)

where R(t, a) is known as the Bayes risk. Since a decision maker (DM) does not know the underlying "true" t* ∈ T data distribution, the minimum Bayes risk, min_a R(t*, a), is likely unachievable.

For a DM, risk is evaluated on an average basis based on the probability distribution the DM places over the simplex T . Without any sample observations, this distribution is the prior π over all possible data distributions in T . The average risk of taking action a using prior π is

\bar{R}(\pi, a) = E_T[R(T, a)], \quad (5)

with T ∼ π. The Bayes action for π is

a^*(\pi) = \arg\min_{a \in A} \bar{R}(\pi, a). \quad (6)

The Bayes risk for π is

\bar{R}(\pi, a^*(\pi)) = \min_{a \in A} \bar{R}(\pi, a). \quad (7)
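The following sketch (ours; numpy assumed) evaluates eqs. (4)-(7) for a Dirichlet prior; because R(t, a) is linear in t, the average risk in eq. (5) reduces exactly to the risk at the prior mean, and a coarse grid search stands in for the argmin in eq. (6):

```python
import numpy as np

M, k = 5, 5.0
alpha = np.array([10, 1, 1, 1, 1, 1]) / 6        # an illustrative Dirichlet prior
d = np.arange(M + 1)

def bayes_risk(t, a):
    """Eq. (4): R(t, a) = sum_d p_t(d) * k * (d - a)^2."""
    return k * np.sum(np.asarray(t) * (d - a) ** 2)

# Eq. (5): R(t, a) is linear in t, so Rbar(pi, a) = R(E[T], a), E[T] = alpha / alpha.sum()
m = alpha / alpha.sum()
avg_risk = lambda a: bayes_risk(m, a)

# Eqs. (6)-(7): Bayes action and Bayes risk by grid search over candidate actions
grid = np.linspace(0, M, 501)
a_star = grid[np.argmin([avg_risk(a) for a in grid])]
print(a_star, avg_risk(a_star))                  # a* = 1.0 and Rbar = 40/3 for this prior
```

Under quadratic loss the minimizer is simply the prior-predictive mean of D, so the grid search here is illustrative rather than necessary.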
Access to a sample X ≡ (X_1, . . . , X_n) results in a different decision with different risk. With sample observations, the DM applies Bayes' rule to update π to π_X (the posterior) and calculates the associated optimal Bayes action a*(π_X). Since X is unknown prior to actually collecting the sample, the Bayes risk for π_X is itself a random variable. Hence, we evaluate the DM's prior expectation of loss with sample information over all possible samples X,

E_X[\bar{R}(\pi_X, a^*(\pi_X))] = E_T E_{X|T}[R(T, a^*(\pi_X))], \quad (8)

with T ∼ π and the right-hand side expression derived by substituting π_X for π in eq. (5) and applying the law of total expectation. Thus, the expected value of sample information (EVSI), V_n(π), is the difference between the prior expectations of loss with and without sample X under prior π:

V_n(\pi) = \bar{R}(\pi, a^*(\pi)) - E_X[\bar{R}(\pi_X, a^*(\pi_X))] \quad (9)
         = E_T[R(T, a^*(\pi))] - E_T E_{X|T}[R(T, a^*(\pi_X))] \quad (10)

where T ∼ π and eq. (10) follows from eqs. (5) and (8).
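Eq. (10) can always be estimated by brute-force Monte Carlo, which is the expensive baseline that the closed form of Section 3 replaces. A sketch (ours; numpy assumed, and it uses the fact that under quadratic loss the Bayes action for a Dirichlet prior or posterior is the corresponding predictive mean):

```python
import numpy as np

rng = np.random.default_rng(1)
M, k, n = 5, 5.0, 3
alpha = np.array([10, 1, 1, 1, 1, 1]) / 6          # Dirichlet prior parameters
d = np.arange(M + 1)

def risk(t, a):                                    # Eq. (4) under quadratic loss
    return k * np.sum(t * (d - a) ** 2)

def bayes_action(avec):                            # predictive mean = Bayes action
    return np.sum(d * avec) / avec.sum()

a_prior, reps, gain = bayes_action(alpha), 100_000, 0.0
for _ in range(reps):
    t = rng.dirichlet(alpha)                       # draw T ~ pi
    counts = rng.multinomial(n, t)                 # draw X | T = t via its counts
    a_post = bayes_action(alpha + counts)          # conjugate posterior, then its action
    gain += risk(t, a_prior) - risk(t, a_post)     # integrand of Eq. (10)
print(gain / reps)   # ~2.08, the closed-form V_3(pi) = 160/77 derived in Section 3
```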
Proposition 2.1 formalizes our intuition that this expected value of sample information should be non-negative.

Proposition 2.1. Suppose data distribution T ≡ (T_0, . . . , T_M) is drawn from a given prior π. Assume further that a DM is given n samples X ≡ (X_1, . . . , X_n) and updates his/her prior to the posterior π_X. Then, under quadratic loss, the expected value of these n samples is non-negative, i.e.

V_n(\pi) = E_T[R(T, a^*(\pi))] - E_T E_{X|T}[R(T, a^*(\pi_X))] \geq 0. \quad (11)

Proof. See Appendix. □

Because the ordering within the sample X does not matter, the inner expectation in (11) is performed over (n_0, n_1, . . . , n_M) ∼ Multinomial(t) conditioned on T = t, where n_j is the number of times that j ∈ [M] appears in the sample, and the outer expectation is performed over T ∼ π.

3 Tractable Valuation of Sample Information

To arrive at a tractable valuation for (10), we leverage the Dirichlet distribution as a prior for three reasons: 1) it is a conjugate prior to categorical/multinomial outcomes, 2) its support is the M-dimensional simplex T , and 3) it has flexibility to model many types of prior information for the decision maker. With the Dirichlet assumption, the main result of this paper, Theorem 3.1, can be presented:
Theorem 3.1. For data distribution T with support [M] and prior π = Dirichlet(α_0, α_1, . . . , α_M), the expected reduction in quadratic loss after observing n data samples, also called the expected value of sample information (EVSI), is given by:

V_n(\pi) = \frac{kn(c_2 - c_1^2)}{(n + \alpha)(1 + \alpha)}, \quad (12)

where α = \sum_{d=0}^{M} α_d is the precision/concentration parameter of the Dirichlet distribution (see [16]) and c_1 = \frac{1}{\alpha}\sum_{d=0}^{M} d\,\alpha_d and c_2 = \frac{1}{\alpha}\sum_{d=0}^{M} d^2\,\alpha_d are the first and second moments of the data under the marginal likelihood (α_0, α_1, . . . , α_M)/α.

Proof. See Appendix. □
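Eq. (12) is a one-liner to implement. A sketch (ours, with numpy assumed; the function name and signature are our own, not from the paper):

```python
import numpy as np

def evsi(alpha, n, k=1.0):
    """Closed-form EVSI of Theorem 3.1 for prior Dirichlet(alpha_0, ..., alpha_M)."""
    alpha = np.asarray(alpha, dtype=float)
    a = alpha.sum()                       # concentration parameter alpha
    d = np.arange(alpha.size)             # support [M] = {0, 1, ..., M}
    c1 = np.sum(d * alpha) / a            # first moment under the marginal likelihood
    c2 = np.sum(d ** 2 * alpha) / a       # second moment
    return k * n * (c2 - c1 ** 2) / ((n + a) * (1 + a))   # Eq. (12)
```

For M = 1, evsi([alpha0, alpha1], n, k) reproduces the beta-binomial expression (13) discussed next.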
Theorem 3.1 gives the expected value of observing an n-trial multinomial sample with Dirichlet prior where the support of the underlying data-generating process is the bounded sequence of integers [M] = {0, 1, . . . , M}. This is a natural generalization of valuing an n-trial binomial sample with beta prior where the support of the underlying data-generating process is restricted such that [M] = {0, 1}. With just a slight change of notation, we know from [32] that EVSI for the beta-binomial case in closed form is:

\frac{kn}{n + \alpha_0 + \alpha_1} \cdot \frac{\alpha_0 \alpha_1}{(\alpha_0 + \alpha_1)^2 (\alpha_0 + \alpha_1 + 1)} \quad (13)

where π ∼ Beta(α_0, α_1). Replacing this prior with the equivalent Dirichlet parameterization of π ∼ Dirichlet(α_0, α_1) and using Theorem 3.1 yields an identical result:

V_n(\pi) = \frac{kn(c_2 - c_1^2)}{(n + \alpha)(1 + \alpha)} = \frac{kn}{n + \alpha_0 + \alpha_1} \cdot \frac{\frac{\alpha_1}{\alpha_0 + \alpha_1} - \frac{\alpha_1^2}{(\alpha_0 + \alpha_1)^2}}{\alpha_0 + \alpha_1 + 1} = \frac{kn}{n + \alpha_0 + \alpha_1} \cdot \frac{\alpha_0 \alpha_1}{(\alpha_0 + \alpha_1)^2 (\alpha_0 + \alpha_1 + 1)} \quad (14)

As a direct consequence of Theorem 3.1, when n → ∞, we have an expression for the expected value of distribution information (EVDI), as an infinite sample gives the data distribution exactly:

V_\infty(\pi) = \lim_{n \to \infty} V_n(\pi) = \frac{k(c_2 - c_1^2)}{1 + \alpha}. \quad (15)

Lastly, we can express the efficiency η of the sample information as a function of the number of sample points using the ratio of (12) to (15) as:

\eta = \frac{n}{n + \alpha}. \quad (16)

Hence, the percentage of value obtained through sampling is given by the ratio of the number of data points n to the sum of the n data points and the concentration parameter α of the Dirichlet distribution. This sampling efficiency calculation directly simplifies to the known formula for the beta-binomial case from [34] (in our notation): η = n/(n + α_0 + α_1) where π ∼ Beta(α_0, α_1).

Again, we make the notation more concrete by revisiting our forecasting product demand example from the end of §2.1. Recall, we have a product that will sell between 0 and 5 units (M = 5) and loss is scaled by k = 5. The decision maker is contemplating the value of n = 3 observations. Introducing a zero-inflated prior π ∼ Dirichlet(10/6, 1/6, 1/6, 1/6, 1/6, 1/6) means α = 15/6,

c_1 = (6/15) · (0 · 10/6 + 1 · 1/6 + 2 · 1/6 + 3 · 1/6 + 4 · 1/6 + 5 · 1/6) = 1,
c_2 = (6/15) · (0 · 10/6 + 1 · 1/6 + 4 · 1/6 + 9 · 1/6 + 16 · 1/6 + 25 · 1/6) = 11/3.

Plugging into eq. (12) yields EVSI V_3(π) = 160/77 ≈ 2.08 and EVDI V_∞(π) = 80/21 ≈ 3.81. From eq. (16) we get η = 6/11 ≈ 54.5%, so the learning from n = 3 samples is expected to provide more than half of the maximum possible reduction in loss. Following from eqs. (26)-(31), a*(π) = 1 and the prior expected loss is

\bar{R}(\pi, a^*(\pi)) = 5 · ((−1)² · 10/15 + 0² · 1/15 + 1² · 1/15 + 2² · 1/15 + 3² · 1/15 + 4² · 1/15) = 40/3 ≈ 13.33.

And thus, we can also get the prior expectation of posterior loss:

E_X[\bar{R}(\pi_X, a^*(\pi_X))] = \bar{R}(\pi, a^*(\pi)) − V_3(π) = 40/3 − 160/77 ≈ 11.26.
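The arithmetic of this example is easy to check in a few lines (our sketch; it recomputes c_1, c_2, and eqs. (12), (15), and (16) directly, and uses a*(π) = 1 as stated above):

```python
import numpy as np

alpha = np.array([10, 1, 1, 1, 1, 1]) / 6     # zero-inflated Dirichlet prior
k, n = 5.0, 3

a  = alpha.sum()                              # alpha = 15/6
d  = np.arange(alpha.size)
c1 = np.sum(d * alpha) / a                    # = 1
c2 = np.sum(d ** 2 * alpha) / a               # = 11/3

v3   = k * n * (c2 - c1 ** 2) / ((n + a) * (1 + a))   # Eq. (12): 160/77 ~ 2.08
evdi = k * (c2 - c1 ** 2) / (1 + a)                   # Eq. (15): 80/21  ~ 3.81
eta  = n / (n + a)                                    # Eq. (16): 6/11   ~ 0.545

rbar_prior = k * np.sum((alpha / a) * (d - 1) ** 2)   # prior loss at a* = 1: 40/3
rbar_post  = rbar_prior - v3                          # expected posterior loss ~ 11.26
print(v3, evdi, eta, rbar_prior, rbar_post)
```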
4 Notes on Richness and Interpretability of Modeling Assumptions

In the previous section, we showed one of the three EVSI desiderata, tractability, can be achieved for a multinomial data-generating process with Dirichlet prior. The multinomial distribution is flexible enough to model any discrete (finite) data distribution. Its prior, the Dirichlet distribution, is also flexible in its ability to model a wide range of distributions over a simplex. Yet, some sacrifice of richness in modeling prior beliefs is made in the name of tractability. Most notably, a richer/more flexible alternative prior over a simplex is the logistic-normal distribution (see discussion in [3]). The most glaring weakness of the Dirichlet distribution is in modeling prior beliefs where there is some type of correlation structure between data observations.
For example, observing a high data value, say 100, would make one think that values of 101 and 99 are also more likely to occur than data values further away. However, the Dirichlet distribution, as a prior distribution for multinomial data, is unable to capture this structure. Notably, the distribution-free underpinnings of the Kaplan-Meier estimator also ignore this potential correlation among data observations, yet show favorable results in a similar repeated newsvendor setting [17].

The richness of the Dirichlet prior is best seen through the lens of its intuitive reparameterization [16]. Let the concentration parameter be \alpha = \sum_{i=0}^{M} \alpha_i and let the vector

m = \left( \frac{\alpha_0}{\alpha}, \frac{\alpha_1}{\alpha}, \ldots, \frac{\alpha_M}{\alpha} \right)

represent the mean, where the expected mean of the data observations is given as c_1 = \frac{1}{\alpha} \sum_{i=0}^{M} i \alpha_i = \sum_{i=0}^{M} i m_i. When \alpha is small, say \alpha \le M, the prior distribution over the simplex can differ greatly from m, reflecting a decision maker's uncertainty around their expectation.
[Figure 1 here. Caption: Graphical depiction of the Dirichlet prior parameters, potential realizations from that prior (i.e., the multinomial parameters), and the EVSI/EVDI calculations as a function of n samples for the given prior. Top row: concentration parameter \alpha = 10; bottom row: concentration parameter \alpha = 50. Panels show the Dirichlet shape parameters for M = 20, sample realizations of the multinomial parameters, and EVSI as a function of n with EVDI as its limiting value.]
As \alpha is made larger, the prior distribution will concentrate probability density near m, reflecting greater confidence. We present a graphical overview of this in Figure 1 for two different concentration parameters. As seen, when \alpha is smaller (top row of Figure 1), the realized multinomial parameters (top-middle plot) can fall further away from the mean m (which is proportional to the parameters in the top-left plot). As \alpha increases (bottom row), the prior distribution becomes much more informative, and the realized multinomial parameters will most likely mirror the prior Dirichlet parameters.
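The reparameterization also suggests a direct way to reproduce the qualitative behavior in Figure 1: fix a mean vector m, scale it by a concentration \alpha to obtain the Dirichlet parameters, and draw realizations of the multinomial parameters. The sketch below does this with NumPy; the uniform mean vector `m` is an arbitrary illustrative choice, not the one used for Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20
m = np.ones(M + 1) / (M + 1)           # illustrative mean vector on the simplex

for alpha in (10, 50):                 # concentration parameters, as in Figure 1
    alphas = alpha * m                 # Dirichlet shape parameters: alpha_i = alpha * m_i
    draws = rng.dirichlet(alphas, size=1000)   # realized multinomial parameters
    # Average distance of realizations from the mean m shrinks as alpha grows
    spread = np.abs(draws - m).sum(axis=1).mean()
    print(f"alpha = {alpha:2d}: mean L1 distance from m = {spread:.3f}")
```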
In terms of interpretability, Theorem 3.1 formalizes our intuition about what drives the value of data. Specifically, data is valuable when 1) the sample contains a lot of data (high n), 2) the expected variance of the data distribution is large (high c_2 - c_1^2), and 3) there is a lot of uncertainty regarding the true data distribution (\alpha is small). Additionally, the calculation for EVDI (eq. (15)) gives an interpretable upper bound on the value of data, where high variance makes samples more valuable and a high concentration parameter makes samples less valuable. Lastly, the equation for efficiency (16) adds further insight by stating how quickly the upper bound on the value of data is approached; the smaller the Dirichlet concentration parameter, the more quickly EVDI is approached with each subsequent data point.

5 Illustrative Examples

In this section, we demonstrate how the tractable formulation for EVSI, equation (12), can serve as a building block inside other research initiatives. The first example explores sample size optimization, and the second shows how a tractable EVSI calculation can lead to a tractable decision policy in a two-stage production planning problem. In the third and last example, the EVSI formula provides a foundation from which to benchmark heuristic updating procedures that seek to estimate an underlying unknown data distribution.

5.1 The Choice of Sample Size

We now explore a decision maker's objective of choosing the number of sample points to collect so as to minimize expected loss, assuming the expected sampling cost, C_s(n), is a linear function of the number of sampled points n:

C_s(n) = K + sn,   (17)

where s is the cost of one sample and K represents the fixed costs of sampling. The loss function to be minimized, \ell_s(n), combines equations (12) and (17):

\ell_s(n) = -\frac{kn(c_2 - c_1^2)}{(n + \alpha)(1 + \alpha)} + K + sn.   (18)

Assuming for practical purposes that n can be treated continuously, we get the optimal sample size

n^* = \sqrt{\frac{\alpha}{1 + \alpha} \cdot \frac{k}{s} \cdot (c_2 - c_1^2)} - \alpha   (19)

for cases where n^* is positively valued and the fixed costs of sampling K can be recovered, i.e., V_{n^*}(\pi) > C_s(n^*). In all other cases, n^* = 0.
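As a quick check on eq. (19), the following sketch computes n^* for the zero-inflated prior of the running example under assumed sampling costs; the cost values `K` and `s` are hypothetical inputs chosen only for illustration.

```python
import math

def optimal_sample_size(k, s, alpha, c1, c2):
    """Continuous optimal n from eq. (19), floored at zero."""
    n_star = math.sqrt(alpha / (1 + alpha) * (k / s) * (c2 - c1**2)) - alpha
    return max(n_star, 0.0)

k, alpha, c1, c2 = 5, 15/6, 1.0, 11/3   # prior quantities from the running example
K, s = 0.1, 0.05                        # hypothetical fixed and per-sample costs

n_star = optimal_sample_size(k, s, alpha, c1, c2)
evsi = k * n_star * (c2 - c1**2) / ((n_star + alpha) * (1 + alpha))  # eq. (12)
cost = K + s * n_star                                                # eq. (17)
print(n_star, evsi > cost)   # keep n* only if sampling value exceeds its cost
```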
Equation (19) has a nice economic interpretation: the three factors under the square root represent the strength of the prior, the ratio between the scaling of the quadratic loss costs and the unit sampling cost, and the predicted variance of the data distribution.

5.2 Two-Stage Production Planning

The example shown here is a simple two-stage production planning problem (see, e.g., [9]) in which the decision maker seeks to optimally schedule the second production run. Assume J periods make up a selling season. Each period j \in \{1, \ldots, J\} faces independent and identical categorical demand with Dirichlet prior and quadratic loss (i.e., a repeated newsvendor setting with quadratic loss), with identical shipments scheduled for each period. A decision maker can choose either 1) to schedule the delivery quantity for each period of the entire selling season or 2) at cost K, to specify a period j^* after which the scheduled delivery quantity can be changed. Assuming this change date will be contractually set in advance of the selling season, find j^* to minimize expected net costs over the entire season.

The net cost function for this problem is

C(j) = \begin{cases} 0, & \text{if } j = 0, \\ K - (J - j)\,\dfrac{kj(c_2 - c_1^2)}{(j + \alpha)(1 + \alpha)}, & \text{if } j \in (0, J]. \end{cases}   (20)

When j \in (0, J], the net cost function C(\cdot) is strictly convex and has a unique global minimum. The optimal period j^* is

j^* = \arg\min_{j \in \{0, 1, \ldots, J\}} C(j).

When \min C(j) = 0 for 0 < j \le J, we choose j^* = 0. For the case when \min C(j) < 0, the continuous relaxation is minimized at j_0 = \sqrt{\alpha(J + \alpha)} - \alpha. Considering that j^* must be a non-negative integer, summarizing the different cases we have the optimal j^* as

j^* = \begin{cases} 0, & \text{if } \min_{j \in [0, J]} C(j) = 0, \\ \arg\min_{j \in \{\lfloor j_0 \rfloor, \lceil j_0 \rceil\}} C(j), & \text{if } \min_{j \in [0, J]} C(j) < 0, \end{cases}   (21)

where j_0 = \sqrt{\alpha(J + \alpha)} - \alpha.
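A minimal sketch of eqs. (20)–(21) follows, again reusing the running example's prior quantities; the season length `J` and contract cost `K` below are hypothetical values chosen only for illustration.

```python
import math

def change_period(J, K, k, alpha, c1, c2):
    """Optimal contract change period j* from eqs. (20)-(21)."""
    var = c2 - c1**2

    def C(j):  # net cost function, eq. (20)
        if j == 0:
            return 0.0
        return K - (J - j) * k * j * var / ((j + alpha) * (1 + alpha))

    j0 = math.sqrt(alpha * (J + alpha)) - alpha      # continuous minimizer
    candidates = [0, math.floor(j0), math.ceil(j0)]  # eq. (21) cases
    return min(candidates, key=C)

k, alpha, c1, c2 = 5, 15/6, 1.0, 11/3
print(change_period(J=12, K=3.0, k=k, alpha=alpha, c1=c1, c2=c2))
```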
5.3 Benchmarking Data-Driven Algorithms

An active area of research is proposing algorithms for decisions in repeated settings where minimal assumptions are made about the underlying data distribution. These approaches include Sample Average Approximation (SAA) [24, 23], concave adaptive value estimation (CAVE) [12], and Second Order Belief Maximum Entropy (SOBME) [35]. When benchmarking these algorithms, it is customary to pick a handful of "true" distributions against which the algorithm competes with a known optimal solution.

With the introduction of a closed-form EVSI calculation in the context of a Dirichlet prior, a more robust benchmarking scenario can be achieved. Instead of picking a "true" data distribution, we pick a "true prior" from the Dirichlet family with support matching the problem of interest. This prior can then be used to simulate as many "true" data distributions as we want, from which we can estimate the reduction in squared loss as a function of n, the number of data samples. Given this setup, a proposed algorithm can be compared against a known optimal updating procedure. After all, it is the updating procedure that we seek to validate, and the optimal updating procedure to benchmark new algorithms against is, therefore, the Bayesian one detailed in the proof of Theorem 3.1 (see appendix).
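The benchmarking recipe just described can be sketched in a few lines: draw a "true" distribution from the Dirichlet prior, simulate samples from it, and compare an algorithm's decision to the Bayesian posterior-mean decision under quadratic loss. The sketch below does this for SAA; it is our own illustrative implementation of the procedure, not the exact code behind Figure 2, and the prior mean vector `m` is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
M, alpha_total, k, n, trials = 20, 10.0, 5, 10, 2000
m = np.ones(M + 1) / (M + 1)           # illustrative prior mean vector
alphas = alpha_total * m               # Dirichlet prior parameters
d = np.arange(M + 1)

loss_bayes = loss_saa = 0.0
for _ in range(trials):
    p_true = rng.dirichlet(alphas)                 # a simulated "true" distribution
    counts = rng.multinomial(n, p_true)            # n sample observations
    a_bayes = (alphas + counts) @ d / (alpha_total + n)  # posterior predictive mean
    a_saa = counts @ d / n                         # sample average (SAA decision)
    true_mean, true_m2 = p_true @ d, p_true @ d**2
    # expected quadratic loss k*E[(D - a)^2] under the true distribution
    loss_bayes += k * (true_m2 - 2 * a_bayes * true_mean + a_bayes**2)
    loss_saa += k * (true_m2 - 2 * a_saa * true_mean + a_saa**2)

print(loss_bayes / trials, loss_saa / trials)      # BAYES should be lower for small n
```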
As a proof of concept, Figure 2 shows an example benchmarking the well-known sample average approximation (SAA) (see [24]) against the known optimal Bayesian updating procedure (BAYES) using a Dirichlet(\alpha_0, \alpha_1, \ldots, \alpha_M) prior with M = 20, \alpha = 10, and m \propto \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 13, 11, 9, 7, 5, 3, 1\} (chosen to be slightly skewed).

[Figure 2 here. Caption: Comparing the sample average approximation (SAA) updating procedure to the known Bayesian (BAYES) optimal updating procedure. The plot shows expected quadratic loss as a function of the number of sample data points for M = 20, with the optimal squared loss (i.e., distribution known) as a lower reference line.]

In this scenario, we see the value of prior information in small-data settings, as BAYES outperforms SAA. The figure also shows how, as the amount of data increases, the non-parametric SAA algorithm's performance improves and closely mimics that of the optimal Bayesian updating procedure.

6 Conclusion

The use of preposterior analysis in this paper provides a formal method for valuing data prior to its collection and, as such, should serve as a building block in many systems and models going forward. By expanding the support of the underlying data-generating process from [M] = \{0, 1\} to [M] = \{0, 1, \ldots, M\}, the beta-binomial EVSI calculations are successfully generalized to a Dirichlet-multinomial setting.
Using this new EVSI computation, three illustrative examples of valuing data prior to its collection were shown; there are potentially many other contexts where this tractable formulation might also prove useful. Researchers in two particular areas, medical decision making and active (machine) learning, are known to be interested in EVSI-type calculations (see, e.g., [2, 13, 18, 31]), and we look forward to hearing of other useful deployments of this method for valuing data prior to its collection.

A Proof of Proposition 2.1 and Theorem 3.1

A.1 Proof of Proposition 2.1

The expected value of sample information is

V_n(\pi) = E_T[R(T, a^*(\pi))] - E_T E_{X|T}[R(T, a^*(\pi_X))].   (22)

For the first term in eq. (22), we have

E_T[R(T, a^*(\pi))] = k E_T \left[ E_{D|T} \left[ (D - a^*(\pi))^2 \right] \right] = k E_T \left[ E_{D|T} \left[ (D - E[D])^2 \right] \right] = k E_D \left[ (D - E[D])^2 \right] = k \mathrm{Var}[D].   (23)

The second line is due to the optimal action under squared loss being the mean (see eq. (30)). The third line of equation (23) follows from the law of total expectation. Thus, the optimal Bayes risk without sample information under quadratic loss (3) is the marginal variance of D scaled by a factor k.
Similarly, for the second term in eq. (22) we find

E_T \left[ E_{X|T}[R(T, a^*(\pi_X))] \right] = k E_T \left[ E_{X|T} \left[ E_{D|T} \left[ (D - a^*(\pi_X))^2 \right] \right] \right] = k E_T \left[ E_{X|T} \left[ E_{D|T} \left[ \left( D - E_{D|X}[D] \right)^2 \right] \right] \right] = k E_X \left[ E_{D|X} \left[ \left( D - E_{D|X}[D] \right)^2 \right] \right] = k E_X \left[ \mathrm{Var}_{D|X}[D] \right].   (24)

The optimal Bayes risk under quadratic loss (3) if a sample of size n is to be collected is the expected variance of the posterior predictive distribution of D scaled by a factor k. Combining (22), (23), and (24), we complete the proof:

V_n(\pi) = E_T[R(T, a^*(\pi))] - E_T \left[ E_{X|T}[R(T, a^*(\pi_X))] \right] = k \mathrm{Var}[D] - k E_X \left[ \mathrm{Var}_{D|X}[D] \right] = k \left( \mathrm{Var}[D] - E_X \left[ \mathrm{Var}_{D|X}[D] \right] \right) = k \mathrm{Var}_X \left[ E_{D|X}[D] \right] \ge 0.   (25)

The last equality in equation (25) follows from the law of total variance. Since k > 0 and \mathrm{Var}_X \left[ E_{D|X}[D] \right] \ge 0 for any X, we have V_n(\pi) \ge 0 for any sample size n. □
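As a sanity check on the identity V_n(\pi) = k\,\mathrm{Var}_X[E_{D|X}[D]], the short simulation below estimates this variance of the posterior predictive mean for a Dirichlet prior and compares it to the closed form kn(c_2 - c_1^2)/((n + \alpha)(1 + \alpha)) of eq. (12); we reuse the zero-inflated prior from the running example, an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)
alphas = np.array([10, 1, 1, 1, 1, 1]) / 6    # zero-inflated prior, M = 5
k, n, trials = 5, 3, 50_000
d = np.arange(alphas.size)
alpha = alphas.sum()
c1 = d @ alphas / alpha
c2 = d**2 @ alphas / alpha

# Simulate the preposterior: true distribution, then a sample, then posterior mean
p_true = rng.dirichlet(alphas, size=trials)                   # T ~ Dirichlet prior
counts = np.array([rng.multinomial(n, p) for p in p_true])    # X | T
post_means = (alphas + counts) @ d / (alpha + n)              # E[D | X]

mc_evsi = k * post_means.var()                                    # Monte Carlo
closed_form = k * n * (c2 - c1**2) / ((n + alpha) * (1 + alpha))  # eq. (12)
print(mc_evsi, closed_form)   # both should be close to 160/77 ≈ 2.08
```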
A.2 Proof of Theorem 3.1

Consider the prior distribution for the data-generating process, \pi = \mathrm{Dirichlet}(\alpha_0, \alpha_1, \ldots, \alpha_M). Suppose our information consists of n samples of the data distribution. Let n_j, j \in [M], be the frequency with which the data equals j, so that the n_j are integers with \sum_{j=0}^{M} n_j = n. Then, because the multinomial and Dirichlet distributions are conjugate, \pi_X = \mathrm{Dirichlet}(\alpha_0 + n_0, \alpha_1 + n_1, \ldots, \alpha_M + n_M). Because \pi and \pi_X both belong to the same class of distributions, we can derive closed-form valuations for the information X. The corresponding marginal likelihoods for \pi and \pi_X are

q_\pi(d) = \frac{\alpha_d}{\alpha}, \qquad q_{\pi_X}(d) = \frac{\alpha_d + n_d}{\alpha + n},

where \alpha = \sum_{i=0}^{M} \alpha_i. If the information happens to occur in such a way that n_j \propto \alpha_j for each j, then the updated marginal likelihood is unchanged: q_\pi(d) = q_{\pi_X}(d), d \in [M]. For convenience, define the quantities

Z = \frac{1}{n} \sum_{d=0}^{M} d\,n_d, \qquad c_1 = \frac{1}{\alpha} \sum_{d=0}^{M} d\,\alpha_d, \qquad c_2 = \frac{1}{\alpha} \sum_{d=0}^{M} d^2 \alpha_d,

where Z represents the sample mean, c_1 the prior expectation of a sample value, and c_2 the prior second moment of a sample value.

Given the loss function in (3), the Bayes risk and action without sample information can be explicitly calculated:

\bar{R}(\pi, a) = E_{T \sim \pi}[R(T, a)]   (26)
= E_{T \sim \pi} \left[ \sum_{d=0}^{M} p_T(d)\,\ell(d, a) \right]   (27)
= \sum_{d=0}^{M} \ell(d, a)\,E_{T \sim \pi}[p_T(d)]   (28)
= \sum_{d=0}^{M} \ell(d, a)\,q_\pi(d),   (29)

where \{q_\pi(0), q_\pi(1), \ldots, q_\pi(M)\} is the marginal likelihood. The Bayes action minimizes eq. (29):

\frac{\partial \bar{R}(\pi, a)}{\partial a} = -2k \sum_{d=0}^{M} (d - a)\,q_\pi(d) = -2k \left[ \sum_{d=0}^{M} d\,q_\pi(d) - a \sum_{d=0}^{M} q_\pi(d) \right] = 0,
\Rightarrow a^*(\pi) = \sum_{d=0}^{M} d\,q_\pi(d) = E_{q_\pi}[D]   (30)
= c_1,   (31)

the mean data outcome under the prior marginal likelihood. The corresponding Bayes risk is

\bar{R}(\pi, a^*(\pi)) = k \sum_{d=0}^{M} (d - a^*(\pi))^2 q_\pi(d) = k \mathrm{Var}_{q_\pi}[D] = k(c_2 - c_1^2).

Similarly, with sample information we have

\frac{\partial \bar{R}(\pi_X, a)}{\partial a} = -2k \sum_{d=0}^{M} (d - a)\,q_{\pi_X}(d) = -2k \left[ \sum_{d=0}^{M} d\,q_{\pi_X}(d) - a \sum_{d=0}^{M} q_{\pi_X}(d) \right] = 0,
\Rightarrow a^*(\pi_X) = \sum_{d=0}^{M} d\,q_{\pi_X}(d) = E_{q_{\pi_X}}[D] = \frac{\alpha c_1 + nZ}{\alpha + n},   (32)

which is the mean data outcome under the posterior marginal likelihood.
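For concreteness, a small check (under assumed toy numbers) that the conjugate update in eq. (32) is a precision-weighted blend of the prior mean c_1 and the sample mean Z, identical to the posterior marginal-likelihood mean:

```python
import numpy as np

alphas = np.array([10, 1, 1, 1, 1, 1]) / 6     # prior from the running example
counts = np.array([1, 0, 0, 2, 0, 0])          # toy sample: n = 3 observations
d = np.arange(alphas.size)

alpha, n = alphas.sum(), counts.sum()
c1, Z = d @ alphas / alpha, d @ counts / n
a_post = (alpha * c1 + n * Z) / (alpha + n)    # eq. (32)

# Identical to the mean under the posterior marginal likelihood
assert np.isclose(a_post, d @ (alphas + counts) / (alpha + n))
print(a_post)
```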
Now, expressing EVSI as

V_n(\pi) = \bar{R}(\pi, a^*(\pi)) - E_T E_{X|T} \left[ R(T, a^*(\pi_X)) \right],   (33)

note that the inner expectation is taken over the data frequencies, which follow a multinomial distribution, (n_0, \ldots, n_M) \sim \mathrm{Multinomial}(p_t(0), \ldots, p_t(M)), and the outer expectation is taken over all possible distributions, p_T \sim \mathrm{Dirichlet}(\alpha_0, \ldots, \alpha_M). The first term in (33) has already been evaluated as k(c_2 - c_1^2). We now calculate the second term:

R(t, a^*(\pi_X)) = k \sum_{d=0}^{M} p_t(d)\,(d - a^*(\pi_X))^2 = k \sum_{d=0}^{M} p_t(d) \left( d - \frac{\alpha c_1 + nZ}{\alpha + n} \right)^2

\Rightarrow E_{X|T=t} \left[ R(t, a^*(\pi_X)) \right] = k \sum_{d=0}^{M} p_t(d) \left[ d^2 - \left( \frac{2nd}{\alpha + n} - \frac{2n\alpha c_1}{(\alpha + n)^2} \right) E_{X|T=t}[Z] - \frac{2d\alpha c_1}{\alpha + n} + \frac{\alpha^2 c_1^2}{(\alpha + n)^2} + \frac{n^2}{(\alpha + n)^2} E_{X|T=t}[Z^2] \right].   (34)
Since Z(n_0, \ldots, n_M) = \frac{1}{n} \sum_{d=0}^{M} d\,n_d, we have

E_{X|T=t}[Z] = \sum_{d=0}^{M} d\,p_t(d),

E_{X|T=t}[Z^2] = \mathrm{Var}_{X|T=t}[Z] + \left( E_{X|T=t}[Z] \right)^2 = \frac{1}{n} \sum_{d=0}^{M} d^2 p_t(d) + \frac{n-1}{n} \left( \sum_{d=0}^{M} d\,p_t(d) \right)^2,

where the last line follows from the fact that

\mathrm{Var}_{X|T=t}[Z] = \mathrm{Var}_{X|T=t} \left[ \frac{1}{n} \sum_{d=0}^{M} d\,n_d \right] = \frac{1}{n^2} \mathrm{Var}_{X|T=t} \left[ \sum_{d=0}^{M} d\,n_d \right] = \frac{1}{n^2} \left[ \sum_{d=0}^{M} d^2\,\mathrm{Var}_{X|T=t}[n_d] + 2 \sum_{0 \le i < j \le M} ij\,\mathrm{Cov}_{X|T=t}[n_i, n_j] \right]