diff --git "a/99E1T4oBgHgl3EQf8QUL/content/tmp_files/2301.03542v1.pdf.txt" "b/99E1T4oBgHgl3EQf8QUL/content/tmp_files/2301.03542v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/99E1T4oBgHgl3EQf8QUL/content/tmp_files/2301.03542v1.pdf.txt" @@ -0,0 +1,2939 @@ +arXiv:2301.03542v1 [math.ST] 9 Jan 2023 +A Sequential Test for Log-Concavity +Aditya Gangrade1, Alessandro Rinaldo1 and Aaditya Ramdas12 +agangra2@andrew.cmu.edu, arinaldo@cmu.edu, aramdas@cmu.edu +1Department of Statistics and Data Science, Carnegie Mellon University +2Machine Learning Department, Carnegie Mellon University +Abstract +On observing a sequence of i.i.d. data with distribution P on Rd, we ask the question of how one can +test the null hypothesis that P has a log-concave density. This paper proves one interesting negative and +positive result: the non-existence of test (super)martingales, and the consistency of universal inference. +To elaborate, the set of log-concave distributions L is a nonparametric class, which contains the set G of +all possible Gaussians with any mean and covariance. Developing further the recent geometric concept of +fork-convexity, we first prove that there do no exist any nontrivial test martingales or test supermartingales +for G (a process that is simultaneously a nonnegative supermartingale for every distribution in G), and +hence also for its superset L. Due to this negative result, we turn our attention to constructing an e- +process — a process whose expectation at any stopping time is at most one, under any distribution in L +— which yields a level-α test by simply thresholding at 1/α. We take the approach of universal inference, +which avoids intractable likelihood asymptotics by taking the ratio of a nonanticipating likelihood over +alternatives against the maximum likelihood under the null. Despite its conservatism, we show that the +resulting test is consistent (power one), and derive its power against Hellinger alternatives. To the best of +our knowledge, there is no other e-process or sequential test for L. +1 +Introduction +Log-concavity is an important and prevalent modelling assumption in the modern study of shape-constrained +nonparametrics [Sam18]. +Log-concave distributions include many common families of densities, including +normal, exponential, extreme-value, and logistic distributions, and further are frequently justified in diverse +application domains including economics, reliability theory and filtering in engineering, and survival analysis +in medicine [BB06]. At the same time, the family is technically amenable, and admits a unique maximum +likelihood estimate with a well developed minimax theory and computationally efficient estimators [CS10; +CDSS18; KDR19; Axe+19; DR11; RS19; CSS10]. +As a result, log-concave densities offer practitioners a +broadly applicable and usable structure. +Given the attractive properties of estimation within the log-concave family, tests for membership in the +same are an important and necessary line of investigation. We note that along with the applications mentioned +above, such tests also have theoretical interest; for instance, in much of computational learning theory, efficient +learning algorithms are only known when covariates are sampled according to a log-concave distribution [e.g. +KKMS08]. While the estimation of log-concave densities has seen significant advances over the past decade +or two (see, e.g., the citations above, and the survey by Samworth [Sam18]), testing for log-concavity has +been relatively poorly developed. 
Indeed, prior to 2021, there were no valid and powerful tests for the same, both theoretically and practically, outside of certain restricted one-dimensional settings. In a significant development, recent work of Dunn et al. [DGWR21] has developed such a test, based on the Universal Inference strategy of Wasserman et al. [WRB20].

Our work is concerned with testing log-concavity in a sequential setting. Concretely, we assume that we are given streaming access to a sequence {Xt} drawn independently and identically from some d-dimensional density p, and we wish to test the membership of p within the family of log-concave densities. Such a sequential test can be identified with a stopping time τ, where stoppage indicates rejection of the null hypothesis, and the test is α-valid if, under the null, the probability that τ < ∞ is bounded by α. The principal attractiveness of such sequential tests arises from their adaptivity: rather than fixing a number of samples a priori, the test may adapt to the difficulty of the underlying instance, rejecting earlier in easier settings, and allowing for a greater number of samples to detect subtle deviations from the null hypothesis.

Below, we first set up some notation, and then proceed to contextualise our study, and give a brief overview of the contributions of our paper.

1.1 Problem setup and background

We begin by describing the notation needed for our discussion, the testing problem under consideration, and the fundamental notions of test martingales and e-processes. We shall give further definitions and details in §2, as well as later in the text as the context arises.

Spaces and measures. Let {Xt} = (X1, X2, . . . ) denote a sequence of d-dimensional random vectors indexed by t, which are measurable maps from Ω := (R^d)^N to R^d, endowed with the cylindrical Borel sigma-algebra B(R^d)^N. We use typewriter-style fonts, e.g. P, to denote laws of random processes (i.e. probability measures on (Ω, B(R^d)^N)), and standard fonts, e.g. P, to denote laws on (R^d, B(R^d)). We use F = {Ft} to denote the natural filtration of the process {Xt}, where Ft := σ(X1, . . . , Xt) for each t. For a Borel probability measure P on R^d, we use P^∞ to denote the law of an i.i.d. process drawn according to P. We use D to denote the set of probability measures on R^d with Lebesgue densities, and D^∞ = {P^∞ : P ∈ D}. For P ∈ D, we use p to denote its Lebesgue density. For technical convenience we define D1 := {P ∈ D : E[max(0, log p(X))] < ∞, E[∥X∥] < ∞}. A set of laws P is said to be mutually absolutely continuous (m.a.c.) if for all P, Q ∈ P, P ≪ Q ≪ P. Finally, we frequently use X_1^t := (X1, . . . , Xt) to denote finite prefixes of {Xt}.

Log-concave measures. A function f : R^d → R is said to be log-concave if there exists a concave function g such that f = e^g. If f is further a density with respect to the Lebesgue measure, then it is said to be a log-concave density. We denote the set of measures with log-concave densities as L, and use L^∞ to denote the set of i.i.d. log-concave measures on Euclidean sequences, i.e. L^∞ := {P^∞ : P ∈ L}.

Sequential test for log-concavity. The testing problem of interest is formulated as follows: let Xt ∼ P i.i.d. for some unknown P ∈ D. We wish to test the null hypothesis H0 : P ∈ L.

A sequential test corresponds to an {Ft}-adapted stopping time, representing the (possibly infinite) time at which the test stops and rejects the null hypothesis.
We shall refer to this stopping time as the rejection time of the sequential test. A test is said to be α-valid if its rejection time τ satisfies

sup_{P∈L} P^∞(τ < ∞) ≤ α,

meaning that, under the null, the probability of ever rejecting, i.e. of incurring a Type I error, is at most α. Similarly, a test is said to be asymptotically (1 − β)-powerful against Q ⊂ D \ L if the probability of failing to reject the null under any distribution in the alternative Q (also known as a Type II error) is uniformly bounded by β:

sup_{Q∈Q} Q^∞(τ = ∞) ≤ β.

A test is said to be consistent against Q if it is asymptotically 1-powerful against the same. Note that, when consistent, these tests are typically called 'power-one tests' (following Robbins) to differentiate them from the traditional Waldian sequential testing paradigm, for which stopping does not imply rejection of the null.

Test martingales, test supermartingales and e-processes. We briefly survey key notions underlying our discussion, namely test martingales and e-processes, leaving details to §3 and §4 respectively.

Definition A process {Mt} is a nonnegative supermartingale (NSM) with respect to a filtration {Ft} and a law P if it is adapted, nonnegative, and E_P[Mt|F_{t−1}] ≤ M_{t−1} for each t. If the inequality is further an equality at each t, then {Mt} is a nonnegative martingale (NM). We shall succinctly say that such a process is a P-NSM or P-NM respectively.

Obviously, every P-NM is also a P-NSM. An important basic inequality of Ville [Vil39] controls the tail behaviour of NSMs: if {Mt} is a P-NSM such that M0 = 1, then for every α ∈ (0, 1],

P(∃t ≥ 1 : Mt ≥ 1/α) ≤ α.

The result above is a sequential (time-uniform) analogue of Markov's inequality. Equivalently, one can make claims at arbitrary stopping times: for all stopping times τ, P(Mτ ≥ 1/α) ≤ α. This can be seen by applying the optional stopping theorem for NSMs [Mey66, Ch. V, Thm. 28] and Markov's inequality.

We now extend the above notions to composite families of sequential laws. Throughout this paper we shall take the filtration to be the natural filtration of the data, and will leave it implicit in our definitions below.

Definition For a set of sequential laws P, we say that a process {Mt} is a P-NSM if {Mt} is a P-NSM for every P ∈ P. Similarly, {Mt} is a P-NM if it is a P-NM for every P ∈ P. A P-NSM such that M0 = 1 is called a test supermartingale for P, and a P-NM such that M0 = 1 is called a test martingale for P.

Observe that test supermartingales satisfy Ville's inequality for each P ∈ P, i.e., if {Mt} is a test supermartingale for P, then for every α ∈ (0, 1],

∀P ∈ P, P(∃t ≥ 1 : Mt ≥ 1/α) ≤ α. (1)

Test supermartingales are so named because they form the canonical path to sequentially testing composite hypotheses, which is encapsulated entirely by the above relation, in that valid tests can be derived by rejecting only when a test supermartingale crosses a threshold. They are particularly interesting in nonparametric settings; for example, one can use them to sequentially test the mean of a bounded random variable [WR23], to test symmetry [RRLK20], for two-sample testing [SR21], independence testing [PBKR22], and testing calibration [AHZ21], to mention only a few interesting sequential nonparametric problems. We shall discuss test supermartingales extensively in this paper.

E-processes [RGVS22; RRLK20; GHK19; HRMS20] are a recently defined class of processes that will also play a central role in this paper.
Definition A process {Et} is called an e-process with respect to a sequential law P if it is nonnegative, and for every stopping time τ, we have E_P[Eτ] ≤ 1. Similarly, {Et} is an e-process for a class of sequential laws P if it is an e-process with respect to every P ∈ P.

E-processes have a variety of equivalent definitions [RRLK20, Lem. 6]. In particular, it is sufficient for the process to satisfy E_P[Eτ] ≤ 1 for only bounded stopping times.

By the optional stopping theorem (which holds without restriction on stopping times for nonnegative supermartingales), notice that every test supermartingale for a class P is also an e-process for this class. Thus, e-processes generalise the notion of test supermartingales. We observe that a Ville-type relation also holds for e-processes, simply due to Markov's inequality: if {Et} is an e-process for P, then for every α ∈ (0, 1],

∀P ∈ P and all stopping times τ, P(Eτ ≥ 1/α) ≤ α E_P[Eτ] ≤ α. (2)

Much as Ville's inequality over the class (1) captures the relevance of test supermartingales to sequential testing, the above inequality captures the relevance of e-processes to the same. The notion of e-processes, along with the non-sequential analogue of e-values, is gaining traction in recent statistical work due to this key property, along with the fact that e-processes exist for many composite and nonparametric testing problems for which test supermartingales do not exist (see, e.g., the recent survey by Ramdas et al. [RGVS22]). We will also encounter this situation in the current paper.

It is important to note that test supermartingales or e-processes can directly be interpreted as evidence against the null hypothesis: since we expect them to be less than one under the null, the larger their realized value, the more evidence we have that the null hypothesis is wrong. Thus, there is no explicit need to threshold them at 1/α for some prespecified α; one can alternatively simply report the final value at the final stopping time of the experiment (which can itself be arbitrarily chosen). Nevertheless, we present this paper in the language of level-α tests because that is far more popular, and we refer the interested reader to the aforementioned references for further discussion on e-processes.

1.2 Inadequacy of Test (Super)Martingales, and the Power of E-Processes

One dominant (but sometimes hidden) principle behind sequential testing of composite hypotheses is the use of nonnegative martingales (NMs), or nonnegative supermartingales (NSMs). Concretely, to test a composite hypothesis P ∈ P, one attempts to construct a P-test supermartingale {Mt}, as defined earlier. By Ville's inequality (1), the chance that Mt ever exceeds 1/α under any null law is bounded by α. Thus, these test supermartingales immediately yield a valid test: reject the null when Mt ≥ 1/α. The associated rejection time, of course, is the Mt-hitting time of 1/α. Such tests have game-theoretic interpretations, through the fact that nonnegative (super)martingales represent wealth processes in betting games [RGVS22]. For example, a P-test martingale is the wealth process of a gambler who bets against the hypothesis that the sequence {Xt} is drawn according to some law in P. The game is designed so that the gambler cannot hope to reliably (in expectation) make money if the null hypothesis is true; this is imposed by the restriction that under any law in P, the expected wealth multiplier in each round should be at most unity.
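To make the betting picture concrete, the following small simulation (ours, not from the paper) checks Ville's inequality numerically: under the point null N(0, 1), the likelihood-ratio test martingale against the alternative N(1, 1) should cross 1/α on at most an α fraction of sample paths, uniformly over all times.

```python
# Empirical check of Ville's inequality (illustration only, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
alpha, n_paths, horizon = 0.05, 2000, 1000

crossings = 0
for _ in range(n_paths):
    x = rng.standard_normal(horizon)   # data drawn from the null N(0, 1)
    log_m = np.cumsum(x - 0.5)         # log M_t for the N(1,1)-vs-N(0,1) likelihood ratio
    crossings += np.any(log_m >= np.log(1 / alpha))

print(f"crossing frequency: {crossings / n_paths:.3f} (Ville bound: alpha = {alpha})")
```

Under the alternative, by contrast, log Mt drifts upward linearly and the threshold is crossed quickly; this is exactly the gambler "making money" against a false null.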
However, for sufficiently rich classes P, such a game leaves the gambler powerless; the gambler is so constrained by the aforementioned restriction that the only option is to not bet at all (or to throw away money). This phenomenon was first observed in work on testing exchangeability in discrete-time binary processes by Ramdas et al. [RRLK22], who demonstrated that any process {Mt} that is an NSM for all exchangeable binary laws is, almost surely, a non-increasing process (the wealth starts at one and can only possibly go down). As a result, any test based on thresholding such processes must be powerless against any alternative. Our first technical contribution demonstrates an analogous phenomenon in the setting of log-concave distributions. Specifically, we show that even the smaller class of i.i.d. Gaussian processes is not testable using NMs (or NSMs), since all such processes are trivial in the sense of being almost surely constant (or decreasing). The claim is summarised below, where G^∞ denotes the set of all i.i.d. Gaussian laws (of any mean and covariance).

Theorem. (Informal) There are no nontrivial G^∞-NSMs or G^∞-NMs. A fortiori, there are also no nontrivial L^∞-NSMs or L^∞-NMs.

Thus, log-concave densities represent a natural class of distributions that cannot be tested via martingales.

Testing via E-Processes. Given that one cannot test for log-concavity (or indeed, Gaussianity) using nonnegative (super)martingales, we are left in a situation where the prevalent design paradigm for sequential testing is neutralised. There are two contrasting lines of attack that can be employed instead.

The first of these involves designing a restricted filtration Gt, distinct from the natural filtration, under which there might exist nontrivial test supermartingales. Ramdas et al. [RRLK22; RGVS22] highlight the remarkable fact that shrinking a filtration can introduce new nontrivial (composite) test martingales when none existed in the original filtration. Such a strategy was notably used by Vovk et al. [VNG03; FGNV12] to develop a sequential test for exchangeability, where, as mentioned above, no nontrivial test supermartingales exist in the data filtration. There are two main disadvantages to such an approach. First, such test martingales only yield an e-process for a restricted set of stopping times (those under the restricted filtration). From an applied point of view, the use of such an e-process demands discipline from a practitioner: they cannot look at the raw data to decide when to adaptively stop (a predefined stopping rule, like the hitting time of 1/α, is okay, but it may never be reached, in which case we may still wish to present the obtained evidence at the stopping time). Second, from a design point of view, the construction of appropriate filtrations is itself a subtle task that is heavily problem-dependent, and thus designing such tests is more of an art than a science. In particular, no such construction is known or obvious for sequential log-concavity testing.

In contrast, we follow the alternative strategy of testing via an e-process. Recall that a process {Et} is an e-process for a set of sequential laws P if, for every stopping time τ and every P ∈ P, E_P[Eτ] ≤ 1. Such processes bear a deep relationship to the aforementioned test martingales. Indeed, it has been argued that (admissible) e-processes must take the form inf_{P∈P} M^P_t, where each {M^P_t} is a P-NM [RRLK20].
The same observation lends e-processes a gambling interpretation as the wealth process of a gambler playing against a 'family of games', wherein the gambler simultaneously plays a game against each P ∈ P, and their wealth is taken as the smallest wealth amongst these games. The gambler can then make money only if each of these games makes money, i.e., if M^P_t grows without bound for every P ∈ P, which would then indicate that every P ∈ P can be rejected.

E-processes offer a similar testing approach to the previously discussed test supermartingales, as elucidated by the inequality (2). Indeed, given an e-process {Et} for P, we can construct an α-valid test of membership in P by rejecting only if Et ≥ 1/α. In this case, the rejection time is

τα := inf{t ≥ 1 : Et ≥ 1/α},

and using the inequality (2), we may conclude that

∀P ∈ P, P(τα < ∞) ≤ α,

i.e. this test is valid for the composite null P. Note further that the validity extends beyond this: let σ be any other stopping time with respect to the natural filtration of the data. We further have that P(Eσ ≥ 1/α) ≤ α, and thus no extraneous stopping criterion can affect the validity of the test, as long as rejection occurs only if Eσ ≥ 1/α.

The theory and applications of e-processes have seen considerable development in the recent literature on sequential analysis (along with the more basic notion of e-variables in batched settings). The concept is attractive thanks to its flexibility and simplicity (despite generalizing nonnegative martingales), but constructing powerful e-processes is partly science and partly art [RGVS22]. In composite testing, e-processes are of central importance since they do not encounter the same pitfalls as NSMs and NMs, and there do indeed exist nontrivial e-processes even for classes on which no nontrivial NSMs exist. Indeed, in some sense, e-processes can be shown to lie at the very core of sequential composite testing [RLKR22].

1.3 Test Using Universal Likelihood Ratios: A simple E-Process

The universal inference strategy [WRB20] gives a simple and generic construction of e-processes whenever a maximum likelihood estimate can be easily computed.

To contextualise this approach, we first consider the case of a point null and alternative, P^∞ and Q^∞. In this case, classical sequential testing theory posits that the sequential likelihood ratio

L_t = ∏_{s=1}^t q(Xs)/p(Xs)

yields a valid and powerful test upon thresholding at 1/α. Indeed, under the null, {L_t} is an e-process, since it is an NM.

Against simple nulls but composite alternatives, likelihood ratios such as the above are typically adjusted to account for the variety of possible alternatives. One way to do this is to replace the numerator terms above with estimates ˆqs(Xs). Importantly, as long as each ˆqs is nonanticipating, i.e., is F_{s−1}-measurable (depending only on the first s − 1 datapoints), the martingale property continues to hold. To highlight this nonanticipation, we shall denote these estimators as ˆq_{s−1}. A second option is to mix over alternatives, perhaps using some non-informative "prior", but we will go with the first option in this paper because we are dealing with a highly nonparametric alternative (essentially the complement of all log-concave laws, or the unspecified subset of those against which one may hope to have power): it is easy to use kernel density estimates for ˆqs, but not so easy to mix over such a loosely specified nonparametric alternative.
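To illustrate the first option, here is a minimal sketch (ours; the Gaussian plug-in is an arbitrary illustrative choice, not the paper's) of a likelihood ratio with a nonanticipating numerator against a simple null. Each numerator factor is fit only on strictly past data, so the product remains a martingale under the null.

```python
# Nonanticipating plug-in likelihood ratio against a simple null (sketch, ours).
import numpy as np
from scipy.stats import norm

def plugin_lr(x, null_logpdf, warmup=5):
    """Running log L_t; the first `warmup` factors are left at 1."""
    log_l = np.zeros(len(x))
    for t in range(warmup, len(x)):
        past = x[:t]                                        # strictly precedes x[t]
        mu, sd = past.mean(), max(past.std(ddof=1), 1e-3)   # floor sd for stability
        log_l[t] = log_l[t - 1] + norm.logpdf(x[t], mu, sd) - null_logpdf(x[t])
    return log_l

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=500)      # truth is N(2, 1); the null N(0, 1) is false
log_l = plugin_lr(x, null_logpdf=lambda z: norm.logpdf(z, 0.0, 1.0))
print(f"log L_n = {log_l[-1]:.1f}")     # drifts upward since the null is false
```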
The sequential universal likelihood ratio statistic (ULR) extends the above to composite nulls when a maximum likelihood estimator (MLE) is computable. Concretely, the statistic is as follows: let ˆq_{t−1} be any predictable probability density, that is, ˆq_{t−1} may be expressed as a function of only {X1, . . . , X_{t−1}} and additional independent randomness. As before, we should think of ˆq as trying to estimate the underlying law p. Let ˆpt be the MLE over the null class L with the data X_1^t, i.e.,

ˆpt = arg max_{ˆp∈L} ∑_{s≤t} log ˆp(Xs).

Notice that, unlike ˆq_{t−1}, the MLE ˆpt makes use of Xt. The sequential ULR statistic is the process

Rt := ∏_{s≤t} ˆq_{s−1}(Xs)/ˆpt(Xs).

(Of course, if the numerator were simply ∏_{s≤t} ˆqt(Xs), where ˆqt is an MLE over a larger class calculated using {X1, . . . , Xt}, then we would get the usual generalized likelihood ratio process. However, we will handle very rich nonparametric alternatives over which computing the MLE is for all practical purposes impossible, and further, for irregular models like log-concave distributions, such generalized likelihood ratios are very ill-behaved and not well understood.)

The principal factor underlying the utility of Rt is that it is an e-process. Indeed, for any P ∈ L and any t, Rt is dominated by Ft(P) = ∏_{s≤t} ˆq_{s−1}(Xs)/p(Xs). Further, {Ft(P)} is a P^∞-martingale started at 1, due to the predictability of ˆq, and thus for any stopping time τ,

E_{P^∞}[Rτ] ≤ E_{P^∞}[Fτ(P)] ≤ 1.

Notice in the argument above that while the e-process is dominated by a P^∞-martingale, it is not itself a martingale. Indeed, this property is crucial to the existence of nontrivial e-processes even when there are no such test martingales. We note that this property of domination by a P^∞-NM for every P ∈ L (or in general by a P-NM for every P ∈ P) is equivalent to the e-process property itself, and can be taken as an alternate definition of the same [RRLK20, Lem. 6].

Due to the above observation, the ULR e-process yields a valid test upon thresholding at 1/α. The power of any such test relies on two aspects: how well ˆpt and {ˆqs}_{s≤t} estimate the underlying law p. Indeed, we argue in §4 that if p ∉ L, then ∏_{s≤t} p(Xs)/ˆpt(Xs) must grow exponentially with t. Thus, as long as the sequential estimates ˆqt approximate p well in a cumulative-regret sense, the procedure above must be consistent. Concretely, define the regret of prediction using {ˆqt} as

ρt(ˆq; P) := ∑_{s≤t} (− log ˆq_{s−1}(Xs)) − ∑_{s≤t} (− log p(Xs)),

so that better estimation results in lower regret, and define the 'well-estimable' class

Q(ˆq) := {P : ρt(ˆq; P)/t → 0 P-a.s. as t ↗ ∞}.

In Section 4 we show the following:

Theorem. (Informal) Let Rt denote the ULR e-process with the sequential estimator E = {ˆqt}. Then the test that rejects when Rt ≥ 1/α is α-valid, and consistent against Q(ˆq).

In fact, in §4, we demonstrate a more refined version of the above statement, which allows ρt to grow linearly, but at a rate bounded by the distance of p from log-concavity. In any case, we comment that the class Q(·) above is quite rich. For instance, using sieve estimators yields low-regret estimation in the above log-loss sense for nonparametric classes such as laws on compact intervals with smooth and bounded densities. The ULR e-process thus gives a powerful test for log-concavity against a rich set of alternatives, even though no test martingale can deliver such properties.
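In code, the ULR statistic is just a running sum of predictable numerator terms minus a freshly maximised null log-likelihood; the following skeleton (ours) makes the predictable/non-predictable split explicit. The two callables, predictive_logpdf and null_max_loglik, are placeholders for a user-supplied density estimator and a routine for the maximised null log-likelihood (for us, via the log-concave MLE); neither name comes from the paper.

```python
# Skeleton of the ULR e-process (sketch, ours; callables are user-supplied).
import numpy as np

def ulr_log_process(x, predictive_logpdf, null_max_loglik):
    """log R_t for t = 1..n; predictive_logpdf must tolerate an empty past."""
    numer, log_r = 0.0, []
    for t in range(1, len(x) + 1):
        numer += predictive_logpdf(x[: t - 1], x[t - 1])  # qhat_{t-1} never sees X_t
        log_r.append(numer - null_max_loglik(x[:t]))      # the MLE denominator does
    return np.array(log_r)                                # reject once >= log(1/alpha)
```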
Our work thus offers further insight into the sequential testing of rich composite nulls, and the primacy that e-processes must take in the modern study of the same.

Along with the above asymptotic consistency result, we further derive finite rejection-rate bounds by controlling the typical rejection time of the ULR e-process in terms of the Hellinger distance of the alternative law from log-concavity. In particular, we show explicit bounds on typical rejection times against Lipschitz and bounded laws on the unit box. The above theoretical exploration is augmented with simulation studies on a simple parametric family comprising a mixture of two Gaussians, in order to empirically evaluate the validity and power of the test. We find that in small dimensions d ≤ 3, the tests show excellent validity, as well as reasonable power. We further use this simulation study to highlight the role of the quality of the estimators ˆqt in the power of the test.

Summary of Contributions. To summarise, this paper is concerned with the theoretical and methodological aspects of sequential testing for log-concavity. We first show a negative result demonstrating that the approach of constructing test (super)martingales is powerless for testing this class of laws, along the way also offering simple characterisations of the fork-convex hull of i.i.d. sequential laws. In the positive direction, we propose using the Universal Inference based e-process as a way to test log-concavity in the absence of test martingales. We theoretically demonstrate both the consistency of the resulting sequential test and concrete adaptive bounds on typical rejection times under a wide class of alternatives, and illustrate the same via simulation studies.

2 Definitions, and Background on Log-Concave Distributions

We begin with basic background on log-concave distributions, and necessary notation. We refer the reader to the survey of Saumard and Wellner for further details [SW14].

Log-Concave Laws. A distribution P on (R^d, B(R^d)) is called logarithmically concave (henceforth log-concave) if for every pair of compact sets A, B and every λ ∈ (0, 1),

P(λA + (1 − λ)B) ≥ P(A)^λ P(B)^{1−λ},

where λA + (1 − λ)B is the Minkowski sum {λx + (1 − λ)y : x ∈ A, y ∈ B}. It is well known that a distribution that admits a density with respect to the Lebesgue measure is log-concave if and only if P(dx) = e^{g(x)} dx for a concave function g. Recall that L denotes the class of log-concave distributions with density on R^d, while L^∞ denotes the set of i.i.d. sequential laws P^∞ for P ∈ L.

Log-Concave M-projection. Recall that D denotes the set of laws on (R^d, B(R^d)) that admit densities with respect to the Lebesgue measure, and that

D1 := {P ∈ D : E[max(0, log p(X))] < ∞, E[∥X∥] < ∞},

where p(·) is the density of P. For every P ∈ D1, there exists a unique law

LP := arg min_{L∈L} KL(P∥L),

where KL(·∥·) is the KL divergence, called its log-concave M-projection. We shall abuse notation and use Lp to denote the Lebesgue density of LP (one is admitted as long as P ∈ D1). For a set of points {x_1^t}, t ≥ d + 1, the log-concave maximum likelihood estimator (MLE) is the log-concave M-projection of the empirical law Pt = ∑_{s≤t} δ_{xs}/t, denoted ˆPt. Most commonly, we shall refer to its Lebesgue density ˆpt, which may equivalently be defined as

ˆpt := arg max { ∑_{s≤t} log f(xs) : log f is a concave function, f ≥ 0, ∫ f = 1 }.

The log-concave MLE has extremely favourable theoretical properties when X_1^t ∼ P i.i.d. for some P ∈ D1.
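Although the paper does not prescribe an algorithm for the MLE, in one dimension it can be approximated by a small convex program. The sketch below (ours, assuming the cvxpy library and an installed exponential-cone solver such as SCS) uses the standard penalised form of the objective, max (1/n) ∑ g(Xi) − ∫ e^g, whose maximiser automatically integrates to one, with the integral discretised on a uniform grid.

```python
# Grid approximation of the 1-D log-concave MLE (rough sketch, ours).
import numpy as np
import cvxpy as cp

def logconcave_mle_1d(x, m=200):
    """Approximate log-density of the log-concave MLE on a uniform grid."""
    lo, hi = x.min() - 0.5, x.max() + 0.5
    grid = np.linspace(lo, hi, m)
    h = grid[1] - grid[0]
    idx = np.clip(np.searchsorted(grid, x), 0, m - 1)   # snap data to grid points
    w = np.bincount(idx, minlength=m) / len(x)          # empirical weights
    g = cp.Variable(m)                                  # g[j] ~ log-density at grid[j]
    obj = cp.Maximize(w @ g - h * cp.sum(cp.exp(g)))    # penalised log-likelihood
    concavity = [g[2:] - 2 * g[1:-1] + g[:-2] <= 0]     # discrete concavity of g
    cp.Problem(obj, concavity).solve()
    return grid, g.value
```

The second-difference constraint encodes concavity of g on the uniform grid; a finer grid trades accuracy for solve time.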
For instance, ˆpt converges strongly to Lp, in the sense that there exists a > 0 such that ∫ e^{a∥x∥} |ˆpt(x) − Lp(x)| dx → 0 almost surely [CS10].

Locally Absolutely Continuous Sequential Measures. Let Γ denote the standard Gaussian law on R^d, γ its density, and let Γ^∞ denote the corresponding i.i.d. sequential law. Notice that Γ^∞ is the law of a white noise. A sequential law P is said to be locally absolutely continuous (l.a.c.) with respect to Γ^∞, denoted P ≪loc. Γ^∞, if for all t, the law of the finite prefix P|t(·) := P(X_1^t ∈ ·) is absolutely continuous with respect to Γ^∞|t. Such l.a.c. laws admit a density process, denoted

Z^P_t := dP|t / dΓ^∞|t.

As an example, if P = P^∞ for some law P with Lebesgue density p, then P ≪loc. Γ^∞, and Z^P_t = ∏_{s≤t} p(Xs)/γ(Xs). Of course, we may specify sequential laws (that are ≪loc. Γ^∞) by specifying their density processes. Note that Z^P_t is a likelihood ratio process with respect to Γ^∞, and so is a Γ^∞-martingale. We shall henceforth use Γ^∞ as a reference measure for sequential laws, and almost entirely work under laws that are ≪loc. Γ^∞.

Notice that if Z^P_{t−1} > 0, then for {Xt} ∼ P, Z^P_t / Z^P_{t−1} is the conditional density (relative to Γ) of Xt given X_1^{t−1}. Further, Z^P_{t−1} = 0 implies Z^P_t = 0. As a result, we may write for any adapted process {Mt} that

Z^P_{t−1} E_P[Mt(X_1^t)|F_{t−1}] = Z^P_{t−1} E_P[Mt(X_1^t) 1{Z^P_t > 0}|F_{t−1}] = E_{Γ^∞}[Mt Z^P_t 1{Z^P_t > 0}|F_{t−1}] = E_{Γ^∞}[Mt Z^P_t|F_{t−1}].

From this, we observe that a process {Mt} is a P-NSM if and only if {Z^P_t Mt} is a Γ^∞-NSM. Indeed, if the former is true, then we conclude from the above that E_{Γ^∞}[Mt Z^P_t|F_{t−1}] ≤ Z^P_{t−1} M_{t−1}, while if the latter is true, then we can conclude that Z^P_{t−1} E_P[Mt|F_{t−1}] ≤ Z^P_{t−1} M_{t−1}, i.e. that 1{Z^P_{t−1} > 0} E_P[Mt|F_{t−1}] ≤ 1{Z^P_{t−1} > 0} M_{t−1}. Since the event {Z^P_{t−1} > 0} holds P-a.s., it follows that E_P[Mt|F_{t−1}] ≤ M_{t−1} P-a.s., and so {Mt} is a P-NSM. By maintaining equalities in the above analysis, the analogous statement also holds for NMs. These facts are quite useful in our later study of fork-convex hulls.

3 There Are No Nontrivial Test Supermartingales for Log-concavity

We begin by defining a natural notion of triviality.

Definition An NSM {Mt} is said to be trivial if Γ^∞(∃t : Mt > M_{t−1}) = 0. An NM {Mt} is said to be trivial if Γ^∞(∃t : Mt ≠ M_{t−1}) = 0.

In words, an NSM (NM) is trivial if, almost surely, it is a non-increasing (constant) process. For the remainder of this section, we will set {Ft} to be the natural filtration. We recall the notion of test supermartingales for a class of laws P, which we shall refer to as just nonnegative supermartingales.

Definition For a set of sequential laws P, we say that a process {Mt} is a P-NSM if {Mt} is a P-NSM for every P ∈ P. Similarly, {Mt} is a P-NM if it is a P-NM for every P ∈ P.

With these definitions in hand, we state the main result of this section, the proof of which is left to §3.3.

Theorem 1. There are no nontrivial G^∞-NSMs or G^∞-NMs under the natural filtration, and a fortiori, there are no nontrivial L^∞-NSMs or L^∞-NMs under the natural filtration.

As discussed by Ramdas et al. [RRLK22], the above result implies that any valid level-α sequential test for log-concavity based on thresholding L^∞-NSMs or L^∞-NMs must be powerless. Indeed, in the former case, such a test against any law that is locally absolutely continuous with respect to Γ^∞ will almost surely never exceed its starting value, and thus will almost surely never reject.

Intuition behind the proof. The result arises from a contradiction.
To illustrate this, suppose {Mt} is a G^∞-NSM, and that for some time t, given F_{t−1}, it increases on the event {Xt ∈ O} for some open ball O, i.e., conditionally on X_1^{t−1} = x_1^{t−1}, {Xt ∈ O} ⊂ {Mt > M_{t−1}}. Notice that, due to nonnegativity, at worst Mt could be zero outside the ball. Now, consider a Gaussian G_O of such a small variance that G_O(O) ≈ 1. By tuning this variance, we can ensure that Mt > M_{t−1} with probability arbitrarily close to 1 given the history, and since the drop in Mt remains bounded outside of the ball, this ensures that the conditional expectation of Mt strictly increases. Since this violates the supermartingale property against G_O^∞ ∈ G^∞, we must conclude that no such ball O exists.

Of course, the set on which {Mt} increases need not contain any ball, but may still be of nontrivial mass, not to mention that this set may vary with the history in a complex way. We address such gaps by exploiting the notion of fork-convexity [RRLK22], which serves as a sequential analogue of convexity especially germane to (super)martingale properties, and is treated in the following section. In particular, it holds that any process {Mt} that is a G^∞-NSM (or NM) is also an NSM (or NM) with respect to any sequential law in the 'fork-convex hull' of G^∞. The main argument then demonstrates that the fork-convex hull of G^∞ is incredibly rich, and contains the laws of arbitrary independent processes with density (i.e., processes of jointly independent {Xt} such that Xt ∼ pt ∈ D). This large set of laws entirely obstructs the NSM (or NM) property from holding in any nontrivial manner, essentially using a robust version of the previous intuitive example. Schematically, we take the following route to establish this result, where the forward direction of each implication exploits fork-convex combinations (and the reverse is trivial):

G^∞-NSM ⟺ G∗^∞-NSM ⟺ (∏ G∗)-NSM ⟺ (∏ cl(G∗))-NSM ⟺ (∏ D)-NSM ⟺ Trivial.

Figure 1: Schematic view of the argument. G∗ is the set of all finite mixtures of Gaussians, and cl(G∗) denotes its L1 closure. For any set P, the class ∏ P consists of independent sequential laws with marginals in P. See §3.2 for definitions.

3.1 Fork-convex Combinations

In an algorithmic sense, for two laws P, Q, an α-convex combination R = αP + (1 − α)Q is the law of the output of the following procedure: independently sample U ∼ P and V ∼ Q, and output X = U or V according to the outcome of an independent α-coin. Fork-convex combinations are the natural sequential extension of such a procedure. Concretely, we sample two trajectories {Ut} ∼ P and {Vt} ∼ Q, release Xt = Ut for t ≤ s for some time s, and then flip an h-coin (where h can depend on the history) to decide whether the subsequent tail is Xt = Ut or Xt = Vt for t > s. Notice that this is a much richer notion than a convex combination: firstly, the decision to release Ut or Vt only needs to be made for a tail of the output sequence, and secondly, the mixture proportion can depend on the history. Formally, this is defined as follows.

Definition ([RRLK22]) Let P, Q ≪loc. Γ^∞ be sequential laws. Let s ∈ N, and let h ∈ [0, 1] be an Fs-measurable random variable such that Γ^∞(h < 1, Z^Q_s = 0) = 0. Then the (s, h)-fork-convex combination of P with Q is the sequential law R with density process

Z^R_t := Z^P_t 1{t ≤ s} + ( h Z^P_t + (1 − h) Z^Q_t Z^P_s / Z^Q_s ) 1{t > s}.

We shall denote this succinctly as R = (P →_{s,h} Q).
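The generative description above translates directly into a toy sampler; the following sketch (ours, not from the paper) draws one path from the (s, h)-fork-convex combination of two i.i.d. laws, with h allowed to depend on the released prefix.

```python
# Toy sampler for the (s, h)-fork-convex combination of i.i.d. P and Q (ours).
import numpy as np

def sample_fork_convex(sample_p, sample_q, h_fn, s, horizon, rng):
    """Release the P-trajectory up to time s, then switch tails with prob 1 - h."""
    u = np.array([sample_p(rng) for _ in range(horizon)])  # {U_t} ~ P, i.i.d.
    v = np.array([sample_q(rng) for _ in range(horizon)])  # {V_t} ~ Q, independent
    x = u.copy()
    if rng.random() >= h_fn(x[:s]):                        # h may depend on the prefix
        x[s:] = v[s:]
    return x

rng = np.random.default_rng(2)
path = sample_fork_convex(lambda r: r.normal(0.0, 1.0),    # P = N(0, 1)
                          lambda r: r.normal(3.0, 1.0),    # Q = N(3, 1)
                          h_fn=lambda prefix: float(prefix.mean() > 0),
                          s=10, horizon=30, rng=rng)
```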
Notice that fork-convex combinations probabilistically allow single data-dependent change-points, or 'switches', from a law P to Q. The ratio Z^P_s/Z^Q_s accounts for the fact that the prefix up to time s was drawn according to P in the case of a switch, and the condition on h ensures that Z^Q_s ≠ 0 when we switch to Q (informally, that the initial segment of data was not impossible under Q).

The importance of the above definition lies in the fact that fork-convex combinations preserve (super)martingale properties. Recall from §2 that {Mt} is a P-NSM if and only if {Z^P_t Mt} is a Γ^∞-NSM. Now suppose {Mt} is both a P-NSM and a Q-NSM, and let R be an (s, h)-fork-convex combination of P and Q. For t ≥ s + 1, we have

E_{Γ^∞}[Z^R_t Mt|F_{t−1}] = h E_{Γ^∞}[Z^P_t Mt|F_{t−1}] + (1 − h)(Z^P_s/Z^Q_s) E_{Γ^∞}[Z^Q_t Mt|F_{t−1}] ≤ h Z^P_{t−1} M_{t−1} + (1 − h)(Z^P_s/Z^Q_s) Z^Q_{t−1} M_{t−1} = Z^R_{t−1} M_{t−1},

where we have utilized the fact that h and Z^·_s are F_{t−1}-measurable. The same calculation is trivial for t ≤ s, and follows similarly for the martingale property. This property extends considerably beyond finite combinations to closed fork-convex hulls, which generalise the standard notion of the closed convex hull of a set.

Definition ([RRLK22]) A set is said to be fork-convex if it contains all fork-convex combinations of its elements. Let P be a set of sequential laws that are locally absolutely continuous with respect to Γ^∞. The fork-convex hull of P, denoted f-conv(P), is the intersection of all fork-convex sets containing P. The closed fork-convex hull of P, denoted cl-f-conv(P), is the closure of its fork-convex hull with respect to L1(Γ^∞) convergence of the likelihood ratio processes at every fixed time t.

Explicitly, the closure in the definition includes all processes Q such that there exists a sequence Qn with density processes {Z^{Qn}_t} satisfying, for all t, Z^{Qn}_t → Z^Q_t in L1(Γ^∞). We shall refer to this as the local L1(Γ^∞) closure. This closure induces considerable flexibility into closed fork-convex hulls, making the notion a powerful concept in light of the following phenomenon, observed by Ramdas et al. [RRLK22, Thm. 13], whose argument we reproduce below.

Proposition 2. For a set of sequential laws P, a process is a P-NSM if and only if it is a cl-f-conv(P)-NSM.

Proof. The result is evident for the fork-convex hull as an extension of the previous two-point calculation. This extends to closures as follows. Let {Mt} be the process in question, and suppose Pn → P in the sense above for Pn ∈ f-conv(P). Let Z^n_t := Z^{Pn}_t and Zt := Z^P_t. We know that for each t, Z^n_t → Zt in L1(Γ^∞). We need to show that {Zt Mt} is a Γ^∞-NSM. To this end, fix a t, and, by passing to a subsequence, assume that Z^n_t → Zt and Z^n_{t−1} → Z_{t−1} pointwise a.s. Now, since {Z^n_t Mt} is a Γ^∞-NSM, using the conditional Fatou lemma yields

E_{Γ^∞}[Zt Mt|F_{t−1}] = E_{Γ^∞}[lim inf Z^n_t Mt|F_{t−1}] ≤ lim inf E_{Γ^∞}[Z^n_t Mt|F_{t−1}] ≤ lim inf Z^n_{t−1} M_{t−1} = Z_{t−1} M_{t−1}.

It is worth noting that while the NSM property is preserved under closures above, the same is not necessarily true of the martingale property, due to the use of Fatou's lemma when handling closures in the above proof. Nevertheless, the NM property (and indeed the martingale property, without appeal to nonnegativity) persists under fork-convex hulls without the closure, giving us the following characterisation.

Proposition 3. For a set of sequential laws P, a process is a P-NM if and only if it is an f-conv(P)-NM.
3.2 The Fork-Convex Hull of Independent Sequential Laws

Proposition 2 gives us a concrete line of attack for showing the triviality of G^∞-NSMs: we shall show that the fork-convex hull of this set is far too rich to allow the existence of nontrivial NSMs. The bulk of our argument develops simple structural characterisations of fork-convex hulls of independent sequential laws. This section describes this characterisation through three properties, whose proofs we leave to §A.2. We begin with a key definition that sets notation for 'independent sequential laws' over a set.

Definition Let P be a set of distributions on R^d. For a sequence of distributions {Pt}_{t∈N}, we define ∏{Pt} as the sequential distribution of a stochastic process {Xt}_{t∈N} such that all Xt are jointly independent and, for each t ∈ N, Xt ∼ Pt. We further define ∏ P := {∏{Pt} : Pt ∈ P ∀t}, i.e. the set of laws of independent stochastic processes with laws at each time lying in P.

Note that ∏ P is a much richer set than the i.i.d. sequential laws, which we denote P^∞ := {P^∞ : P ∈ P}. In light of this, the following result demonstrates the richness of fork-convex hulls. Recall that a set of laws is mutually absolutely continuous (m.a.c.) if every pair of laws contained in it is mutually absolutely continuous.

Lemma 4. Let P ⊂ D be an m.a.c. set of laws with density on R^d. Then cl-f-conv(P^∞) ⊃ ∏ P ⊃ P^∞.

To sketch the argument underlying the above, fix any P = ∏{Pt}. It suffices to demonstrate a sequence of laws {R_T}_{T∈N}, each generated by finite fork-convex combinations of P^∞-laws (and their fork-convex combinations), such that for t ≤ T, the density processes of R_T and P agree. The conclusion then follows under closure, since R_T → P in the appropriate sense. The concrete witness for the above lemma is the sequence

R_1 := P_1^∞, R_T := (R_{T−1} →_{T−1,0} P_T^∞),

where each fork-convex combination is valid since P is m.a.c. In essence, this exploits the fact that fork-convex combinations let one switch between laws after a time of our choosing. See §A.2 for details.

Next, we exploit the convex-combination properties of fork-convex combinations to demonstrate that fork-convex hulls of i.i.d. laws include i.i.d. products over mixtures as well. To this end, let us define the mixture classes as below.

Definition Let P be a set of distributions on R^d. For k ∈ N, we let Pk be the class of laws formed by k-fold mixtures of laws in P, and denote P∗ = ∪_{k∈N} Pk as the class of laws formed by finite mixtures of laws in P.

Note that P∗ is well defined since the Pk form an increasing sequence of sets. The second key result shown in §A.2 is

Lemma 5. Let P ⊂ D be an m.a.c. set of laws on R^d. Then cl-f-conv(P^∞) ⊃ P∗^∞.

The key observation underlying the above is already demonstrated in showing that cl-f-conv(P^∞) ⊃ P2^∞. To see this, fix any P, Q ∈ P and α ∈ [0, 1]. We need to demonstrate a sequence of laws R_T, constructed via repeated fork-convex combinations, that match the density process of R := (αP + (1 − α)Q)^∞ for times up to T. This is realised as follows:

R_0 := P^∞, S_T := (R_{T−1} →_{T−1,α} Q^∞), R_T := (S_T →_{T,0} P^∞).

In the above, S_T matches the density process of R up to time T by mixing between R_{T−1} (whose tail behaves as P^∞) and Q^∞ appropriately. R_T then switches the tail of S_T to behave as P^∞, to enable the recursion. This argument extends to Pk^∞ for arbitrary k by inducting over k (which is possible since a member of Pk is a mixture of a P_{k−1} law and a P law).
Since k is arbitrary, this immediately extends to P∗^∞.

Finally, we exploit the closure properties of fork-convex hulls under L1(Γ^∞) to extend fork-convex hulls from product measures over a set to product measures over closures of that set.

Lemma 6. Let P be a set of distributions on R^d that have densities. Then cl-f-conv(∏ P) ⊃ ∏ cl(P), where cl(P) is the L1(Γ)-closure of P.

The above lemma is a straightforward consequence of the closure properties, as detailed in §A.2.

3.3 Proof of the Absence of Nontrivial Test Martingales

The previous section demonstrates that taking closed fork-convex hulls can significantly expand sets of i.i.d. laws into rich families of product laws. This section exploits these properties to demonstrate the triviality of G^∞-NSMs. The key observation underlying this is the following standard fact about the richness of Gaussian mixtures. Recall that cl(P) denotes the L1(Γ)-closure of P.

Lemma 7. G∗ is L1(Γ)-dense in the set of all distributions with densities, i.e., cl(G∗) = D.

The L1(Leb)-denseness of mixtures of Gaussians in D is a classical fact; for instance, see the work of Alspach and Sorenson [AS72] or Lo [Lo72]. More recently, a considerably more robust result was presented by Bacharoglou [Bac10], who shows that Gaussian mixtures are dense in the nonnegative simple functions in both an L1 and an L∞ sense. This also suffices for our purposes, since nonnegative simple functions are themselves L1-dense in the nonnegative integrable functions. The L1(Γ)-denseness follows since Γ admits a uniformly bounded density with respect to the Lebesgue measure.

We note that our argument extends to any set with this property, i.e., to any class P whose finite mixtures are L1(Γ)-dense in D. The Gaussians serve as a convenient witness within L for which this property holds. With this in hand, we proceed as below.

Proof of Theorem 1. Let {Mt} be a G^∞-NSM. First observe, by Lemma 5 and Proposition 2, that as a consequence {Mt} is also a G∗^∞-NSM. Next, by Lemma 4 and Proposition 2, it is further a (∏ G∗)-NSM. Similarly, by Lemma 6 and Proposition 2, we conclude that {Mt} is also a (∏ cl(G∗))-NSM. Finally, by Lemma 7, we conclude that {Mt} is a (∏ D)-NSM.¹

¹We can also argue this more directly: observe that taking the closed fork-convex hull is an idempotent operation, i.e. cl-f-conv(cl-f-conv(P)) = cl-f-conv(P) (which follows from the facts that closed fork-convex hulls are fork-convex, and that closures of closed sets are invariant). Therefore, using the chain of lemmata of §3.2, cl-f-conv(G^∞) ⊃ ∏ cl(G∗) = ∏ D, and so {Mt} is a (∏ D)-NSM.

We now argue that ∏ D is too rich to admit nontrivial NSMs. The argument is by contradiction: we assume that Mt > M_{t−1} for some t with nontrivial probability, and use this to construct a law in ∏ D that violates the NSM property. The argument repeatedly exploits the topological equivalence of (R^d)^t and R^{dt} under the product and metric topologies respectively. We shall denote the Lebesgue measure in m dimensions as Leb_m; we note that the product Lebesgue measure on (R^d)^t is identical to Leb_{dt}, and use the latter to denote the former.

Let us proceed with the argument. For a natural number t, define the event At := {Mt > M_{t−1}, M_{t−1} < ∞}, i.e. the event that {Mt} increases at time t. It suffices to argue that, no matter the t, the mass of At is zero, since Γ^∞(M_{t−1} = ∞) must be zero due to the integrability of M_{t−1}. For the sake of contradiction, assume Γ^∞(At) > 0. For n ∈ N, define the approximations A^n_t := {Mt ≥ M_{t−1} + 1/n, M_{t−1} ≤ n}. The A^n_t form an increasing sequence of sets, and converge to {Mt > M_{t−1}, M_{t−1} < ∞} = At.

Now, since Γ^∞(At) > 0 and At ∈ Ft, we conclude that Leb_{dt}(At) > 0, due to the mutual absolute continuity of Gaussians and Lebesgue measures on Euclidean spaces.
Without loss of generality, we may assume Leb_{dt}(At) < ∞ (since otherwise we may pass to a subset of At of positive and finite mass, using the sigma-finiteness of the Lebesgue measure, and run the argument on this subset). Since A^n_t ↗ At, we have by continuity of measure from below that Leb_{dt}(A^n_t) → Leb_{dt}(At), and in particular there exists an n such that Leb_{dt}(A^n_t) ∈ (0, ∞). Fix such an n for the remainder of the argument.

Recall that an open rectangle in R^m is a Cartesian product of open intervals, i.e. a set of the form ×_{i=1}^m (a_i, b_i) for a_i < b_i. Similarly, we say that R is an open rectangle in (R^d)^t if there exist open R^d-rectangles S_1, . . . , S_t such that R = ×_{s=1}^t S_s. The following statement is a consequence of basic topological and measure-theoretic properties of Euclidean spaces, which we prove in §A.3.

Lemma 8. Let E ⊂ (R^d)^t be such that Leb_{dt}(E) > 0. For every natural number m ∈ N, there exists an open rectangle R in (R^d)^t such that

Leb_{dt}(R) > 0 and Leb_{dt}(R ∩ E) ≥ (m/(m + 1)) Leb_{dt}(R).

Exploiting the above result, we may construct a sequence of rectangles {R_m}_{m∈N} in (R^d)^t, each of positive mass, such that

Leb_{dt}(A^n_t ∩ R_m) / Leb_{dt}(R_m) ≥ m/(m + 1).

Now, since each R_m is a rectangle, there exists a law D_m ∈ ∏ D such that the prefix restriction D_m|t = Unif(R_m). Indeed, if R_m = ×_{s=1}^t S^m_s, then D_m = ∏{D^m_s}, where D^m_s = Unif(S^m_s) for s ≤ t, and D^m_s = Γ for s > t. We claim that for large m, D_m witnesses a violation of the NSM property for {Mt}. We demonstrate this using the process {Nt} := {min(Mt, n + 1)}.

Notice that if {Mt} is a P-NSM, then so is {Nt}, since

E[Nt|F_{t−1}] ≤ min(E[Mt|F_{t−1}], E[n + 1|F_{t−1}]) = min(M_{t−1}, n + 1) = N_{t−1},

and the nonnegativity follows since both Mt and n + 1 are nonnegative. Further, since M_{t−1} ≤ n on A^n_t, it follows that Nt ≥ N_{t−1} + 1/n on A^n_t as well, since n + 1/n ≤ n + 1.

Consequently, we have

E_{D_m}[Nt] ≥ E_{D_m}[(N_{t−1} + 1/n) 1{X_1^t ∈ A^n_t}] + 0
= E_{D_m}[N_{t−1} 1{X_1^t ∈ A^n_t}] + D_m(A^n_t)/n
≥ E_{D_m}[N_{t−1} 1{X_1^t ∈ A^n_t}] + m/(n(m + 1)),

where the final inequality exploits the fact that at least an m/(m + 1) fraction of the mass of R_m lies in A^n_t, and we have used the nonnegativity of Nt.

However, since N_{t−1} is upper bounded by n + 1, we observe that

0 ≤ E_{D_m}[N_{t−1} 1{X_1^t ∈ (A^n_t)^c}] ≤ (n + 1) D_m((A^n_t)^c) ≤ (n + 1)/(m + 1),

and so

E_{D_m}[N_{t−1} 1{X_1^t ∈ A^n_t}] = E_{D_m}[N_{t−1}] − E_{D_m}[N_{t−1} 1{X_1^t ∈ (A^n_t)^c}] ≥ E_{D_m}[N_{t−1}] − (n + 1)/(m + 1).

But now, we conclude that

E_{D_m}[Nt] ≥ E_{D_m}[N_{t−1}] + ((m/n) − (n + 1))/(m + 1).

Choosing m > 3n², and exploiting n ≥ 1, this implies that E_{D_m}[Nt] > E_{D_m}[N_{t−1}], thus contradicting the supermartingale property of {Nt} under D_m (since supermartingales must have non-increasing mean sequences). We conclude that it cannot hold that Γ^∞(At) > 0, i.e., Mt ≤ M_{t−1} Γ^∞-almost surely.

But since t is arbitrary, we immediately conclude that

Γ^∞(∃t ≥ 2 : Mt > M_{t−1}) ≤ ∑_{t≥2} Γ^∞(Mt > M_{t−1}) = 0.

The argument for NMs follows from this as well. If {Mt} is a (∏ D)-NM, then it is also an NSM, and thus almost surely does not increase.
But this means that M1 − Mt ≥ 0 is also a nonnegative supermartingale, and therefore does not increase, which implies that Mt also does not decrease almost surely.

Remark. It may be possible to develop a different argument that does not explicitly need to pass through the notion of fork-convex hulls. Perhaps one could directly work with the A^n_s above, and replace D_m by a sufficiently skinny Gaussian G_m such that G_m(A^n_s ∩ R) ≈ G_m(R) ≈ 1. However, there would still be sufficiently many technical details to iron out, so such an approach is not necessarily shorter or cleaner. More importantly, however, our chosen path of development above leads to a richer characterisation of fork-convex hulls of i.i.d. processes with densities, and further directly illustrates the utility of such a characterisation. It thus deepens our understanding of the important geometric concept of fork-convexity.

4 The Sequential Universal Likelihood Ratio E-Process

We begin by recalling the definition of e-processes from the introduction.

Definition An {Ft}-adapted process {Et} is said to be an e-process for a set of sequential laws P if

sup_{P∈P} sup_τ E_P[Eτ] ≤ 1,

where the second supremum is over all stopping times. Further, if for some n ≥ 1 it holds a.s. with respect to all P ∈ P that E1 = E2 = · · · = E_{n−1} = 1, then we say that {Et} is an e-process for P started at time n.

Next, we define the universal likelihood ratio (ULR) process [WRB20], which forms the main object of interest for this section.

Definition Let E denote a sequence of estimators {Et}_{t≥0} such that each Et : (R^d)^t → D. At any t, denote ˆqt = Et(X1, . . . , Xt). Finally, let ˆpt denote the log-concave maximum likelihood estimate over the data X1, . . . , Xt (which exists if t > d). The ULR process is the statistic

Rt(X_1^t; E) := 1{t ≤ d} + 1{t > d} ∏_{d+1≤s≤t} ˆq_{s−1}(Xs)/ˆpt(Xs).

We shall often suppress the dependence of Rt on X_1^t and E. The initial setting of Rt = 1 for t ≤ d accounts for the fact that log-concave MLEs are known to exist only if at least d + 1 samples are available.

As discussed in the introduction, {Rt} constitutes an e-process due to the predictability of the ˆq_{t−1} and the fact that they are probability densities. We formally state the validity of Rt as a proposition.

Proposition 9. For any E, the process {Rt} is an e-process for L^∞ started at time d + 1. Consequently, rejecting the null hypothesis when Rt ≥ 1/α results in an α-valid test for log-concavity.

On exact MLEs. The measures ˆpt need not exactly maximise the likelihood in the above. Indeed, if instead of the exact log-concave MLE ˆpt we use an estimate ˜pt such that

∑_{s≤t} log ˜pt(Xs) ≥ − log(1/ε) + ∑_{s≤t} log ˆpt(Xs),

then ε Rt · ∏_{d+1≤s≤t} (ˆpt(Xs)/˜pt(Xs)) is an e-process, and this can be thresholded at 1/α as before. This observation is pertinent since practical procedures for computing the log-concave MLE of a dataset are inexact, and only approximate the solution up to a (user-specified) additive gap in the log-likelihood objective, requiring computation that scales polynomially with the inverse of this additive gap.

For the remainder of this section, we shall equate laws P ∈ D1 with their densities, denoted p.

4.1 Consistency of the ULR E-Process for Testing Log-Concavity

Consistency of the ULR e-process depends strongly on the underlying estimator E. Indeed, as an extreme example, consider the case of ˆqt(Xt) = 1{Xt = X1}, for which the resulting Rt is a.s.
0 for any time t ≥ d + 1, so long as the law P is continuous; the test is thus powerless against such laws.

It thus follows that the ULR e-process can only yield power against a set of laws determined by the estimator E. Concretely, we shall argue the same against the following set of 'well-estimable' laws. Below, d_H denotes the Hellinger distance.

Definition For a sequential estimator E and a density p ∈ D1, define the prediction regret for a sequence {Xt} as

ρt(E; p) := ∑_{s≤t} (log p(Xs) − log ˆq_{s−1}(Xs)).

Further, let Lp denote the log-concave M-projection of p. We define the class of distributions that are well estimable by E with respect to log-concavity as

Q(E; c) := { p ∈ D1 : P^∞( lim sup_{t→∞} ρt(E; p) / (t d_H²(p, Lp)) ≤ c ) = 1 }.

The main result of this section is that the ULR-based test is powerful against the above well-estimable laws, as shown later in this section.

Theorem 10. There exists a constant c > 0 (any c < 1/25 suffices) such that if p ∈ Q(E; c) \ L, then P^∞(Rt → ∞) = 1. Consequently, the ULR e-process yields a consistent test against i.i.d. draws from any distribution in Q(E; c).

The well-estimability condition above essentially requires that the distribution can be estimated well in a log-loss sense. For i.i.d. distributions, one expects that for reasonable E, the estimates ˆqt converge to some ˆq, and thus the regret grows for large t as ρt ≈ t KL(p∥ˆq) (which could grow sublinearly in t if KL(p∥ˆqt) → 0, but the latter convergence is not required). The class Q thus roughly consists of distributions that can be estimated well in KL divergence. Such estimation can be a challenging task in complete generality, since the KL divergence is quite sensitive to mismatch in the tails of distributions. However, under mild restrictions such as compactness of support and smoothness, such estimability is quite forthcoming. Indeed, we give the following statement to illustrate this point; it is proved in §B.3.

Corollary 11. Let D_{Box,Lip,B} denote the set of 1-Lipschitz densities supported on the unit box [−1, 1]^d and bounded within [1/B, B]. There exists a sequence of sieve maximum likelihood estimators E such that for every c > 0, D_{Box,Lip,B} ⊂ Q(E; c); i.e., the ULR e-process yields a consistent test against i.i.d. draws from such distributions.

It is further interesting that the consistency of the test does not require that the regret satisfy ρt/t → 0, only that it be small enough relative to the squared Hellinger distance between p and its log-concave M-projection Lp. This signals that deviations from log-concavity may be detected far before the underlying law can be estimated, which is quite favourable theoretically, although its practical effect depends significantly on how large a c can be taken in Theorem 10.

Proof of Theorem 10. We begin by defining σt(p) := ∑_{s≤t} (log p(Xs) − log ˆpt(Xs)). Observe that

log Rt = σt(p) − ρt(E; p).

Further, by assumption, we have that p ∈ Q(E; c) for some c, and thus for any ζ > 0, we have that

ρt ≤ (1 + ζ) c t d_H²(p, Lp)

for all large enough t. Consequently, to show that Rt → ∞, it suffices to show that, P^∞-almost surely,

lim inf_{t→∞} σt / (t d_H²(p, Lp)) ≥ (1 + 2ζ) c. (3)

It is at this point that the following lemma is useful, the proof of which is left to §B.1.

Lemma 12. For any p ∈ D1, it holds that

P^∞( lim inf_{t→∞} σt(p) / (t d_H²(p, Lp)) ≥ 1/25 ) = 1.
The claim (3) thus follows so long as (1 + 2ζ)c ≤ 1/25, and since ζ > 0 can be taken arbitrarily small, this allows us to take any c < 1/25. We note that the constants in this argument are loose, and informal calculations suggest that it may be possible to improve c up to about 1/6.

The proof of Lemma 12 relies on strong convergence properties of the log-concave MLE ˆpt to the log-concave M-projection Lp. Recall that for a pair of functions u ≤ v, a bracket [u, v] is the set of all functions that lie between u and v everywhere. By exploiting a characterisation of the convergence properties of log-concave MLEs due to Cule and Samworth [CS10], Dunn et al. [DGWR21, Lem. 1] show that there is a small bracket, well separated from p, in which ˆpt eventually lies. Conditioning on this event, we then exploit a classical result of Wong and Shen [WS95], which shows linear growth of σt(p) with conditional probability at least 1 − exp(−Ω(t)), at which point the lemma follows by Borel–Cantelli. As mentioned before, see §B.1 for the full proof of Lemma 12.

4.2 Power of the ULR E-Process for Testing Log-Concavity

The argument underlying Theorem 10 is also amenable to deriving rates, under further restrictions on the underlying law P. As in the previous section, we argue this using the decomposition log Rt = σt(p) − ρt(E; p).

4.2.1 Challenges, and Context from the Theory of Log-Concave MLEs

With the above approach, the argument breaks into two parts. Firstly, we assume that we use a good enough estimator E so that ρt is not too large with high probability. Such an assumption is necessary for the approach we take, although in principle the test can be analysed using a different decomposition, in which case this assumption may perhaps be weakened. In any case, we observe that for concrete alternative hypotheses, such as laws with Lipschitz densities supported on the unit hypercube, ρt can indeed be appropriately controlled. It is worth noting, however, that the resulting rate bounds are strongly driven by the behaviour of ρt, and thus by the estimator being considered, which limits the power of the results to follow.

The second part of the argument requires us to show that σt is large, i.e., to argue that the log-concave MLE cannot represent the underlying law very well when it is not log-concave. While a natural statement, arguing this is challenging because it requires us to understand the behaviour of log-concave MLEs 'off-the-model', i.e., when the data is not drawn from a log-concave distribution itself. With the notable exception of Barber and Samworth [BS21], this task has not been undertaken in the literature, with most works focusing on on-the-model minimax rate bounds [KS16; KDR19; Han21; CDSS18]. Let us consider this in some detail.

Tight analysis of the on-the-model log-concave estimation problem fundamentally relies on a subtle reduction of the rates of log-concave MLEs to the problem of controlling deviations of empirical processes over convex sets, i.e., to that of controlling sup_C |P(C) − Pt(C)| under data drawn from P, where Pt is the empirical law and the supremum is over convex sets in a bounded domain [CDSS18]. Using this observation and a refined study of these deviations, Kur et al. [KDR19] recently showed tight on-the-model estimation rates of the form d_H(ˆpt, p) = O(t^{−1/(d+1)}) when p ∈ L and d ≥ 3 (Han showed similar results, along with extensions to s-concave densities [Han21]).
While significant elements of this study can be extended to analysing off-the-model behaviour, the analysis ultimately cannot be applied to our situation. The gap arises because their argument only upper bounds the quantity
$$\theta_t := \mathbb{E}_{X\sim P}[\log L_p(X)] - \mathbb{E}_{X \sim P}[\log \tilde p_t(X)],$$
where, for a small constant c, $\tilde p_t \propto \max(c, \hat p_t)$ is a slight modification of the log-concave MLE. When p = L_p, this object is a KL divergence, and so is lower bounded. However, when p ≠ L_p (that is, P is not log-concave), this quantity may well be negative. Notice that this is a problem for us precisely because E[σ_t]/t ≈ θ_t + E_{X∼p}[log p(X) − log L_p(X)]. When p ∉ L, the second term can indeed be shown to be large, but the lack of a lower bound on the first term limits the applicability of such results. We also note that other aspects of the argument, which are relatively simple in on-the-model analysis (for instance, arguing that the mass p places on sets of the form {L_p(x) < γ} is small), are also rendered inoperative in off-the-model analysis.

Of course, we can in principle exploit the results of Barber and Samworth instead. However, these results give quite poor rates. Roughly speaking, Theorem 5 of their paper [BS21] shows that off-the-model, d_H(p̂_t, L_p) ≲ t^{−1/4d}, and thus any analysis that exploits this result cannot hope to show that σ_t is large for t ≪ d_H(p, L)^{−4d}. This power of 4d arises since the analysis of [BS21] passes through a reduction to convergence of empirical laws in Wasserstein distance (which gives the relatively benign factor of d), and further suffers a 1/4th-power slowdown relative to this convergence (which is both unavoidable, and leads to a 4d exponent).

Our analysis sidesteps these issues by controlling the growth of σ_t on the basis of bracketing entropy bounds (see §B.1) for the class of bounded log-concave laws on compact supports. Our bound below holds for all d, but appears to be new for d ≥ 4. Indeed, we show the following statement.

Lemma 13. Let L_{d,B} denote the set of laws with log-concave densities that are supported on [−1, 1]^d and uniformly upper bounded by a constant B. There exists a constant C_d depending only on the dimension such that
$$H_{[]}(\mathcal{L}_{d,B}, \zeta) = C_d\, \widetilde\Theta\big( (B/\zeta)^{\max(d/2,\, d-1)} \big),$$
where the Θ̃ hides terms that scale polylogarithmically with ζ or B.

We note that the lower bounds on the bracketing entropy implicit in the statement of Lemma 13 were already shown by Kim and Samworth [KS16, Thm. 8], who further showed the corresponding upper bounds for d ≤ 3. While not the central point of the paper, we develop upper bounds for the same when d ≥ 4. There are two salient technical points regarding the bound above. Firstly, observe that for d ≥ 4, the entropic bounds lie in the non-Donsker regime, i.e., the Dudley integral $\int_0^\varepsilon \sqrt{H_{[]}(\mathcal{L}_{d,B}, \zeta)}\, d\zeta$ does not converge due to a blow-up near ζ = 0, which typically (but not always) represents a slowdown in the convergence rates that can be shown via entropy integrals. Secondly, for d ≥ 3, the bound grows as ζ^{−(d−1)} rather than as ζ^{−d/2}. The latter quantity is pertinent because it is close to the growth rate of the bracketing entropy of convex sets, which is ζ^{−(d−1)/2}; see §B.2.3. This fact underlies the power of the previously discussed reduction of the analysis of log-concave MLE rates to control on the deviations of empirical processes over convex sets, which admit a slower entropy growth.

Lemma 13 is proved in §B.2.3.
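To make the non-Donsker point concrete, the following back-of-the-envelope instantiation (our illustration, not a step of the proofs) takes d = 5, where the exponent is max(d/2, d − 1) = 4; ignoring the polylogarithmic factors,
$$H_{[]}(\mathcal{L}_{5,B}, \zeta) \asymp (B/\zeta)^4, \qquad \int_0^{\varepsilon} \sqrt{H_{[]}(\mathcal{L}_{5,B}, \zeta)}\, d\zeta \asymp \int_0^{\varepsilon} \frac{B^2}{\zeta^2}\, d\zeta = \infty,$$
so the Dudley integral blows up at the origin. This is one reason why the entropy integral appearing in the Wong–Shen bound (Lemma 17 of §B.1) is truncated away from zero.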
The growth bounds of this result ensure that for t ≳ d_H(p, L)^{−2(d−1)}, σ_t is linearly large in t, even when the underlying law is not log-concave. This 2(d−1) exponent should be compared to the aforementioned 4d-th power scaling that one expects to emerge from the Wasserstein-continuity-based approach discussed above. Of course, the dependence on d could potentially be improved even further. For instance, if the on-the-model analysis can indeed be extended to off-the-model, it is plausible to expect dependence of the form d + 1 instead of 2d − 2. However, this remains a challenging problem for future work.

It is worth noting that while the techniques for the bounds in Lemma 13 exist in the literature, the bounds themselves appear not to have been explicitly observed. We believe that this might be because it was previously observed that, due to existing lower bounds on this entropy, the growth rate bounds that emerge from entropic considerations could not be optimal for the rate analysis of the log-concave MLE, at least in on-the-model settings. It should also be noted that the bounds above are explicitly for compactly supported log-concave laws (which is a restriction, but a relatively mild one, due to the exponentially decaying tails enjoyed by all log-concave densities). Further note that the brackets we construct for this setting are 'improper', i.e., the bracketing functions are themselves not log-concave, which may limit their utility in a direct analysis of the difference between p̂_t and L_p, but is good enough when studying the behaviour of σ_t.

4.2.2 Bounding Typical Rejection Times for the ULR Test

As discussed previously, our analysis of σ_t passes through a bracketing entropy bound for bounded, compactly supported log-concave laws. For such bounds to be effective, we need to ensure that the log-concave MLE p̂_t is itself bounded. This is enabled by the quantity Δ_P, defined for a law P as
$$\Delta_P := \min_{v : \|v\| = 1} \mathbb{E}_P\big[ |\langle v, X - \mathbb{E}_P[X] \rangle| \big],$$
which was identified by Barber and Samworth [BS21] as a means to lower bound the covariance of the log-concave projection of P, which in turn can be exploited to upper bound the supremum of L_p and (indirectly) of p̂_t. Observe that Δ_P roughly corresponds to the minimum eigenvalue of the covariance matrix of P; indeed, it is best seen as a robust version of the same.

With this in hand, we are ready to state our main result, the proof of which is the subject of §B.2.2. Recall that d_H(p, L) = inf_{q∈L} d_H(p, q), and that τ_α := inf{t : R_t ≥ 1/α} is the rejection time.

Theorem 14. Suppose p is supported on [−1, 1]^d, and let π_t → 0 be a sequence such that for every t,
$$P^\infty\bigg( \frac{\rho_t(\mathcal{E}, p)}{t\, d_H^2(p, \mathcal{L})} \ge \frac{1}{25} \bigg) \le \pi_t.$$
Then there exists a constant c ∈ [1/600, 1] and a natural number T₀ such that for any t ≥ T₀ + log(1/α)/(c d_H²(p, L)),
$$P^\infty(\tau_\alpha > t) \le \pi_t + \frac{1}{c} \exp\big( -c\, t\, d_H^2(p, \mathcal{L}) \big) + \frac{1}{c} \exp\big( -c\, t\, \Delta_P^2 / d^2 \big),$$
and
$$T_0 = C_d \cdot \widetilde{O}\Big( \Delta_P^{-\max(d^2/2,\, d^2 - d)}\, d_H(p, \mathcal{L})^{-\max((d+4)/2,\, 2(d-1))} + d^2 \Delta_P^{-2} \Big),$$
where C_d depends only on d, and the Õ hides terms depending polylogarithmically on d_H(p, L) and Δ_P.

Observe that from the statement above we may conclude that the average rejection time is bounded as
$$\mathbb{E}[\tau_\alpha] = \sum_t P^\infty(\tau_\alpha > t) \le \sum_t \pi_t + O(T_0).$$
Here, the first term is driven by the predictability of p using the estimators E, while the second term is driven by our analysis of the noise scale of log-concave density estimation in off-the-model scenarios.
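For completeness, here is the short computation behind this display (ours; we treat α and c as constants). Writing T₁ := T₀ + log(1/α)/(c d_H²(p, L)) and bounding P^∞(τ_α > t) by 1 for t < T₁,
$$\mathbb{E}[\tau_\alpha] \le T_1 + \sum_{t \ge T_1} \pi_t + \frac{1}{c}\sum_{t \ge 0} e^{-c t d_H^2(p,\mathcal{L})} + \frac{1}{c}\sum_{t \ge 0} e^{-c t \Delta_P^2/d^2} \le \sum_t \pi_t + T_1 + \frac{1}{c}\Big(1 + \frac{1}{c\, d_H^2(p,\mathcal{L})}\Big) + \frac{1}{c}\Big(1 + \frac{d^2}{c\, \Delta_P^2}\Big),$$
where the geometric sums are bounded using $\sum_{t \ge 0} e^{-xt} = 1/(1 - e^{-x}) \le 1 + 1/x$. The trailing terms are absorbed into the O(T₀), since T₀ itself dominates both d_H(p, L)^{−2} and d²Δ_P^{−2} up to constants.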
In typical situations, the former of these terms will dominate the resulting bounds, since typical alternate classes will be much larger than the class of log-concave distributions. For instance, using results on the estimation of uniformly lower-bounded Lipschitz densities [e.g., WS95], we show the following result about the set D_{Box,Lip,B} introduced in Corollary 11.

Corollary 15. For any constant B > 0, there exists a sequence of sieve maximum likelihood estimators E such that if p ∈ D_{Box,Lip,B}, the ULRT rejection time τ_α is bounded in expectation as E[τ_α] = Õ(d_H(p, L)^{−2(d+3)}).

The proof is in §B.3. We remark that the above rates adapt to the extent to which the underlying law p violates log-concavity, in the sense that the time-scales of rejection are driven by d_H(p, L). Indeed, this represents an important advantage of sequential tests over batched tests: validity is retained, while detection is guaranteed at an adapted time-scale.

On tightness. We note that the exponents of Theorem 14, and in particular of Corollary 15, are likely loose for the problem of testing log-concavity. This is an artefact of the analysis; for instance, the slow rate in Corollary 15 is largely determined by the rate requirements for estimating Lipschitz laws on the unit box, which arise due to the π_t terms present in Theorem 14. It is possible that this aspect can be improved, since nothing necessitates that we use an estimator that captures the underlying density p well.

Indeed, instead of analysing the prediction regret with respect to p itself, we could decompose log R_t = σ_t(q) − ρ_t(E; q) for some other law q, perhaps lying in a smaller class of densities Q than those possible for p. As long as (i) E does as good a job at prediction under the log-loss as any law in Q, and (ii) no matter what p ∉ L is, there is a law in Q that is 'closer to' p than any law in L, a similar analysis should be possible, although this requires possibly subtle off-model control on the behaviour of E, as well as a careful choice of Q itself to control the relative values of distances such as d_H(p, Q) and d_H(p, L). One approach that appears promising for log-concave laws is to let s-concave densities play the role of Q; these are particularly attractive since they form a rich extension of the class of log-concave laws, but nevertheless enjoy identical minimax MLE convergence rates [HW16; Han21].

5 Algorithmic Proposal, and Simulation Study

We now proceed to describe the ULR e-process based test for log-concavity algorithmically, and to investigate the behaviour of a concrete implementation of the same on a simple parametric family.

5.1 Computational Aspects, Batching, and a Concrete Testing Algorithm

Given a specification of the sequential estimators E and a method for fitting the log-concave MLE, the statistic R_t is explicitly computable, and thus naturally leads to implementations. While the e-process is powerful against wide classes of alternatives, its implementation suffers from a fundamental computational issue that arises due to the recomputation of q̂_t and p̂_t in each round. This cost grows superlinearly with t, since the entire denominator ∏ p̂_t(X_s) must be evaluated on the entirety of the stream, and the cost of estimating p̂_t is itself superlinear in the number of samples t. A second issue arises upon increasing the data dimension d, since the computational cost of estimating p̂_t grows quite fast with d.
Even though polynomial-in-d algorithms exist for computing the log-concave MLE [Axe+19], the fastest available method is typically hundreds of times slower when processing ∼100 points in even the modest d = 5 than when processing a dataset of the same size in d = 1 [RS19]. We address this issue by exploiting batching to reduce the computational load, which makes computations viable for moderate d ≤ 4. The idea is to wait to accumulate I > 1 fresh samples before recomputing R_t, rather than updating it at every round.²

Let us point out that such batched updates still retain the e-process property, and thus validity, as long as the q̂_{t−1} remain nonanticipating over the entire batch. Concretely, we may set a schedule, captured by an increasing sequence of times T = {t_k}, and evaluate the statistic
$$R_t(\mathcal{T}) = R_{t-1}(\mathcal{T})\, \mathbf{1}\{t \notin \mathcal{T}\} + \mathbf{1}\{t \in \mathcal{T}\} \prod_{j \le k(t)}\ \prod_{s = t_{j-1}+1}^{t_j} \frac{\hat q_{t_{j-1}}(X_s)}{\hat p_{t_{k(t)}}(X_s)},$$
where k(t) = max{k : t_k ≤ t}. In words, the schedule divides the stream into a sequence of batches of size t_k − t_{k−1}; each time a new batch is accumulated, we evaluate a new estimate q̂ on the previous batches, and re-evaluate the log-concave MLE on the entirety of the data seen. This process continues to be dominated by a batched version of F_t(P), which retains the martingale property under P^∞, thus yielding validity. The simplest viable schedule is to set t_k = kI for a constant 'batching interval' I. This effectively boils down to testing the log-concavity of p^⊗I, which is valid since tensor products of log-concave laws remain log-concave.

Notice that such batching may result in a reduction in power. For instance, rejection can only occur at the times t_k, and further the statistic may be deflated because data points with a large signal may be 'washed out' by milder behaviour across the remainder of the batch. Nevertheless, we find in simulation studies that this drop in power is nominal, and comes at the cost of a significant improvement in runtime.

With this in hand, we provide an explicit algorithmic description of our test below.

Algorithm 1 Log-Concave Universal Likelihood Ratio Test
1: Input: Batching schedule {t_k}_{k=1}^∞ with t₁ ≥ d + 1, estimator E, level α.
2: Initialise: R_t ← 1 for t ≤ t₁, K ← 1, N₁ = 1, t ← 1.
3: while R_t < 1/α do
4:   if t = t_K then
5:     q̂ ← E(X_1^{t_{K−1}}).
6:     N_t ← N_{t−1} · ∏_{s = t_{K−1}+1}^{t_K} q̂(X_s).
7:     p̂ ← L(X_1^{t_K}), the log-concave MLE on all data seen.
8:     R_t ← N_t · ( ∏_{s=1}^{t_K} p̂(X_s) )^{−1}.
9:     K ← K + 1.
10:  else
11:    R_t ← R_{t−1}.
12:    N_t ← N_{t−1}.
13:  t ← t + 1.

5.2 Evaluating the ULR E-Process Test

We investigate the behaviour of the test of Algorithm 1 on the following simple test-bed family of laws, where e_d = 1_d/√d is the unit vector along the all-ones direction in ℝ^d:
$$p(x; \mu, d) := \frac{1}{2(2\pi)^{d/2}} \Big( \exp\big( -\|x - \tfrac{\mu}{2} e_d\|^2/2 \big) + \exp\big( -\|x + \tfrac{\mu}{2} e_d\|^2/2 \big) \Big),$$
i.e., balanced two-component Gaussian mixture laws with means ±(µ/2)e_d and identity covariance. The norm of the mean-difference is precisely µ, which we assume without loss of generality to be nonnegative. A small modification of this family of laws was also used as a test-bed for the non-sequential test proposed by Dunn et al. [DGWR21].

These laws are extremely convenient for proof-of-concept investigation of tests of log-concavity; the sketch below illustrates both the family and the batched update.
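Concretely, the following R sketch (an illustration under stated assumptions, not our simulation code) draws from p(·; µ, d) and performs one batched update of R_t under the schedule t_k = kI. The helpers `fit_q` and `fit_lc` are hypothetical stand-ins for the sequential estimator E and a log-concave MLE fitter; each is assumed to take a data matrix and return a density function that is vectorised over the rows of its argument (for k = 1, `fit_q` receives no past data, and should return a fixed initial density such as a standard Gaussian).

```r
# Draw n samples from p(.; mu, d): a balanced two-component Gaussian
# mixture with means +/- (mu/2) e_d and identity covariance,
# where e_d = 1_d / sqrt(d) is the unit all-ones direction.
rmix <- function(n, mu, d) {
  e_d <- rep(1, d) / sqrt(d)
  signs <- sample(c(-1, 1), n, replace = TRUE)
  matrix(rnorm(n * d), nrow = n) + outer(signs * mu / 2, e_d)
}

# One batched update of the ULR e-process under the schedule t_k = k * I.
# `log_num` carries the accumulated log-numerator across earlier batches.
ulr_batch_update <- function(X, k, I, log_num, fit_q, fit_lc) {
  past  <- X[seq_len((k - 1) * I), , drop = FALSE]   # data before this batch
  fresh <- X[((k - 1) * I + 1):(k * I), , drop = FALSE]
  qhat <- fit_q(past)               # nonanticipating: fit on past batches only
  log_num <- log_num + sum(log(qhat(fresh)))
  all_so_far <- X[seq_len(k * I), , drop = FALSE]
  phat <- fit_lc(all_so_far)        # log-concave MLE refit on all data seen
  log_R <- log_num - sum(log(phat(all_so_far)))
  list(log_R = log_R, log_num = log_num)
}
```

Iterating `ulr_batch_update` over k = 1, 2, . . . and stopping the first time `log_R` exceeds log(1/α) reproduces the rejection rule of Algorithm 1; in our experiments, the roles of `fit_q` and `fit_lc` are played by the kernel density and log-concave MLE packages detailed later in this section.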
Indeed, observe that up to a rotation, the d-dimensional law is a tensorisation of the one-dimensional law p(·; µ, 1) with a log-concave law (specifically, a standard Gaussian in d − 1 dimensions). Since log-concavity properties are invariant under rotations, and since the log-concave M-projection of product laws is a product of the marginal log-concave M-projections [SW14], this gives a very simple characterisation of the log-concavity properties of this law. Concretely, the distance from log-concavity is purely a function of the norm µ of the mean-difference, and p(·; µ, d) is log-concave if and only if µ ≤ 2. These laws thus give us a simple way to check both the size and the power of the test statistics, as well as to study the effect of increasing dimension on the power.

Finally, we give details of the simulations. All data reported are means over 100 runs of each experiment. All simulations are run up to 100 time steps, mainly for computational practicality. Note thus that our size estimates are systematically lower than the true size (with infinite horizon). We run the case of d = 1, which is computationally the cheapest, over the longer horizon of 500 time steps to illustrate that not much changes in this case, at least as regards the empirical validity of our test. For d = 1, 2, 3, the tests are batched at an interval of I = 20, while for computational practicality the test is batched at I = 25 for d = 4. These are significant fractions of the time horizon studied, but do not significantly lower power, at least for d = 1, as demonstrated by explicit simulation.

All code is implemented in R. The nonparametric estimator used for E is the kernel density estimate as implemented in the ks package [CD18], and the log-concave MLE used is either from the logcondens package [DR11] for d = 1 or the fmlcd package for d = 2, 3, 4 [RS19]. We note that the latter is not guaranteed to return the log-concave MLE, since it optimises a non-convex approximation to the program defining the same. However, we find that compared to alternatives like the LogConcDEAD package [CSS10], the fmlcd implementation retains similar validity and power, but runs significantly faster. We also investigate using parametric Gaussian mixture model fits to illustrate the effect of inefficiency in E on power, for which we use the EM algorithm as implemented in the mclust package [SFMR16]. All simulations were executed on an AMD Ryzen 5650U processor, a medium-range laptop CPU.

² Note, of course, that for our simulation study, the repetition of simulations required to study power and size means that we only implement our fully nonparametric test for up to d = 4. Nevertheless, even d = 6 is reasonable for single runs, wherein one run over a horizon of 100 steps takes about 20s.

5.2.1 Fully Nonparametric Tests

Figure 2 shows the behaviour of our instantiation of the algorithm with the fully nonparametric approach of using kernel density estimators as E, over p(·; µ, d) for d ∈ {1, 2, 3, 4} as µ is varied, run at the size α = 0.1, with I = 20 for d ∈ {1, 2, 3} and I = 25 for d = 4. We plot five traces, recording the fraction of runs out of 100 independent runs in which the test rejected the null hypothesis at times smaller than 20, 40, 60, 80, and 100, where 100 was the horizon over which the test was run.

There are three major observations. Firstly, we observe that the test shows excellent validity.
Indeed, the null hypothesis holds true for µ ≤ 2, and the test does not reject more than a 0.02 fraction of the time in any case in this scenario. Secondly, we observe that, at least for sufficiently large µ, all of the tests do reject within 100 steps. Finally, we notice that the power drops sharply as d increases. To discuss this concretely, let µ*(d) be defined as the smallest value of µ for which P_{X∼p(·;µ,d)}(τ_{0.1} ≤ 100) = 0.9. The plots in Figure 2 give us estimates of µ*(d), which increase sharply with d: from about 6 at d = 1 to over 1000 at d = 4.³

This reduction in power is perhaps expected, given the considerable deterioration of nonparametric estimation rates with d. Nevertheless, we may ask how much of the above decay in power is driven by the inefficiencies in fitting log-concave MLEs, and how much accrues due to the inefficiency of kernel density estimation. We investigate this effect in §5.2.2 by studying oracle tests.

Longer Run for d = 1. To show that the validity persists over longer time horizons, we implement the fully nonparametric method over 500 time steps for d = 1, using I = 50. Observe in Figure 3 that rejection under the null µ ≤ 2 is well controlled even at this increased timescale, while rejection rates steadily improve as the horizon grows, although the improvement is somewhat marginal over the horizon of 500 versus 200.

³ With pilot simulations in d = 5, we observe that µ*(5) ≈ 1500. We note that these simulations were already too costly, in terms of time, to implement completely for d = 5, due both to the increased costs of fitting MLEs in higher dimensions, and to the fact that as rejection rates decrease with dimension, more runs need to be executed over the whole horizon, which extends the total cost of the experiment. We hope to implement the method on larger computational resources for such moderate d.

Figure 2: Performance of the fully nonparametric test. Empirical rejection rates (over 100 simulations) at α = 0.1 versus the mean difference µ for fully nonparametric test implementations over four cases: d = 1, I = 20 (top left); d = 2, I = 20 (top right); d = 3, I = 20 (bottom left); d = 4, I = 25 (bottom right). The thin horizontal line plots the level α = 0.1, and the vertical line marks µ = 2, since p(·; µ, d) ∈ L if and only if µ ≤ 2. Observe the strong validity properties in all plots, as well as the deterioration of power in higher dimensions, as signalled by the sharp increases in the scales of the x-axis.

Figure 3: Performance of the fully nonparametric test over long horizons. Empirical rejection rates (over 100 simulations) in the setting of Figure 2 for d = 1, run over a horizon of length 500 with I = 50. Observe that the validity persists over this longer horizon, and that power improves for µ > 2.

Figure 4: Effect of I on the fully nonparametric test. Empirical rejection rates (over 100 simulations) in the setting of Figure 2 for d = 1 and with varying I ∈ {1, 10, 20, 50}. Observe that the rejection rates for I = 10, 20 are roughly the same as for I = 1, while I = 50 suffers large losses.

Figure 5: Performance of the partial oracle test. Empirical rejection rates (over 100 simulations) at α = 0.1 versus the mean difference µ for the partial oracle test implementations over four cases: d = 1, I = 20 (top left); d = 2, I = 20 (top right); d = 3, I = 20 (bottom left); d = 4, I = 25 (bottom right).
Observe that, in each plot, the power improves starkly relative to the fully nonparametric test (Figure 2), as indicated by a strong contraction of the scale of the x-axis, especially in higher dimensions.

Effect of Batching Interval. As seen in Figure 4, batch sizes of I = 10 and 20 have a mild effect on the rejection rates under the alternate setting (µ ≥ 2) when compared to the direct I = 1. Interestingly, note that I = 20 does somewhat better than I = 10 for moderate µ (the range 4–6), and slightly loses power for larger intermediate µ (the range 6–8). In turn, the no-batching setting, i.e., I = 1, is observed to suffer some deterioration in its size (µ < 2), although this remains at an acceptable level.

The large batch size I = 50 suffers the same validity issues as I = 1, but does even better than it for small but non-null values of µ (2–5); power considerably deteriorates for larger µ (5–10). While it is unclear how much of this is an artefact of the fact that the length of the horizon is only 100, and how much is directly due to the larger batching interval, the fact that I = 10, 20 perform well suggests that so long as the batching interval is a relatively small fraction of the horizon length, the loss in power is not too bad.

5.2.2 Oracle Tests, and the Effect of the Quality of E

Oracle Tests. To probe the effect of the lossiness of the kernel density estimate on the power of the fully nonparametric test, we run 'partial-oracle' and 'full-oracle' tests, which adjust E to exploit concrete information about the underlying laws p(·; µ, d). In the partial-oracle case, we adjust E to estimate a two-component Gaussian mixture model instead of a kernel density estimate, and in the full-oracle case, we directly set q̂_{t−1}(·) = p(·; µ, d), i.e., we evaluate the density exactly.

We expect that under data drawn from p(·; µ, d), these tests are more powerful than the fully nonparametric tests discussed above, since the regret ρ_t(E; p) is reduced in the case of the partial oracle, due to the reduced complexity of the estimation class, and, of course, reduces exactly to 0 in the case of the full oracle. In either case, this effectively serves to increase R_t. These oracle tests thus let us probe the extent of the loss in power at a fixed µ (and thus a fixed distance from log-concavity) that arises purely due to the decay in the rate of convergence of the log-concave MLE. In particular, the full oracle test captures exactly this effect, while the partial oracle test approaches it in a soft way. Figure 5 shows the performance of the partial oracle tests, and Figure 6 shows the same for the full oracle test, for d ∈ {1, 2, 3, 4}.

Figure 6: Performance of the full oracle test. Empirical rejection rates (over 100 simulations) at α = 0.1 versus the mean difference µ for the full oracle test implementations over four cases: d = 1, I = 20 (top left); d = 2, I = 20 (top right); d = 3, I = 20 (bottom left); d = 4, I = 25 (bottom right). Observe the sharp improvement in power compared to Figure 2, especially in high dimensions, as indicated by a strong contraction in the scale of the x-axis. Observe also the improvement in power compared to Figure 5, in that the curves reach high power at about half the µ that is needed for the partial oracle test.

Comparing Figures 2 and 5, we see that using the partial oracle yields a marked increase in power, at least for d > 1.
This is evident at d = 2 by observing that the purple line (the overall rejection rate within 100 time steps) rises higher and is nonzero at smaller values of µ, and by observing that the typical rejection time decreases substantially (for instance, rejection never occurred below time step 60 in the fully nonparametric case, but is quite prevalent at higher µ under the partial oracle). In d = 3, 4 the effect is much starker: notice that the scale of the plot changes completely, from the order of hundreds to tens in d = 3. This suggests that the parametric mixture-of-Gaussians estimate offers strong improvements over the nonparametric KDE estimate, due to the reduced variance scale of this estimator.

The above effect is seen even more starkly in the case of the full oracle test, where each of the rejection rate curves is further improved (Figure 6). For instance, our estimate of µ*(d) (the smallest µ such that P_{p(·;µ,d)}(τ_{0.1} ≤ 100) = 0.9) is about halved for the full oracle case when compared to the partial oracle (and improved manyfold relative to the fully nonparametric test).

The Quality of E has a Strong Effect. These observations from the oracle tests indicate that the quality of the estimate offered by E is very important in driving the overall power of the test. In these oracle examples, the quality improved by reducing the variance scale of the estimator whilst keeping the bias at 0 (since the law p(·; µ, d) is representable by each of the estimator outputs).

Of course, in practice we cannot always hope to reduce the variance scale of our estimates whilst keeping the bias zero. Nevertheless, there is an implicit tradeoff between the two here. Indeed, as we discussed briefly in §4, it is possible to use a biased E in the test, i.e., one that does not strictly estimate p, so long as the output of E does a better job of representing p than the log-concave MLE. The strong dependence of the testing power on E indicates the critical need to investigate this design freedom, and to study how the trade-off between the variance, in terms of the convergence rates of q̂_t, and the bias, i.e., the distance of lim q̂_t from p, should be balanced to optimise the testing power.

6 Discussion

Our work has shown that the sequential testing of log-concavity throws up interesting challenges, in that the prevalent paradigm of test martingales cannot be fruitfully applied to this practically relevant setting. In the process of showing this, we developed a characterisation of the closed fork-convex hulls of independent sequential laws on a continuous space, thus contributing to the theory of this new tool, which characterises the nonnegative supermartingale property. We then showed that the universal likelihood ratio e-process does instead yield powerful tests for log-concavity. In particular, we demonstrated that these tests are consistent against large classes of nonparametric alternate laws and further admit nontrivial rates, and in doing so we made contributions to the off-the-model analysis of the convergence of log-concave MLEs, as well as to the general theory of the power analysis of universal tests. These properties are validated by running the test over a simple parametric family of laws, which further demonstrates the critical role of the sequential estimator E in the power of the test.
Taking a broad view, the above can also be seen as a contribution to the emerging literature on e-processes, and in particular as additional evidence for the case that the study of sequential testing at large must exploit this powerful yet simple tool.

A number of directions, both theoretical and methodological, remain open in this interesting subject, a few of which we discuss below.

Regarding fork-convexity, our characterisation in §3 and §A of the closed fork-convex hulls of i.i.d. Gaussians can possibly be further enriched, and it would be very interesting to understand precisely which laws lie in this set. Additionally, notice that sequentially testing the Gaussianity of an i.i.d. process is itself a basic problem that again cannot be tested using martingales (at least with respect to the natural filtration of the data). The construction and analysis of such sequential Gaussianity tests is a natural and interesting direction. Of course, universal inference is again a natural approach for this class, but it may be possible to take advantage of the translation and rotation invariance of the null hypothesis (all Gaussians) using methods developed in [PLHG22].

Regarding the ULR e-process based test for log-concavity, §5 shows that the power of the fully nonparametric test can be quite limited, particularly as the data dimension increases. This observation was also made in the non-sequential setting by [DGWR21], who proposed using random one-dimensional projections as an interesting method to ameliorate this. In this test, rather than computing the full d-dimensional kernel and log-concave estimates, one projects the data onto many one-dimensional subspaces, and averages the e-values (nonnegative test statistics with expectation at most one under the null) that result from a one-dimensional test carried out on each of these projected datasets. This approach not only has computational benefits, due to the speed of one-dimensional density estimation methods, but also shows statistical benefits in the scenario of §5, in that the decay of power with dimension is considerably limited. Such projected tests are of course possible in the sequential setting as well, and are a natural next step to investigate, both methodologically and in terms of their theoretical properties.

On a broader scale, both the theoretical bounds and the simulations illustrate the critical role that the quality of the estimator E plays, both specifically in the power of the test for log-concavity and more generally in the use of the universal likelihood ratio e-process. With this in mind, and recalling the implicit 'bias-variance' tradeoff in E discussed in §4 and §5, investigating the choice of E relative to the null class is an interesting question, both in terms of practical methodological concerns and for the theoretical study of the power of e-process based tests.

Acknowledgments

The authors thank Martin Larsson for insightful discussions on fork-convexity, and Robin Dunn for an implementation of a batched universal test for log-concavity that formed the backbone of the code underlying our simulations. A. Rinaldo and A. G. were supported in part by the NSF grant DMS-EPSRC 2015489.

References

[AHZ21] Sebastian Arnold, Alexander Henzi, and Johanna F. Ziegel. "Sequentially valid tests for forecast calibration". In: arXiv preprint arXiv:2109.11761 (2021) (cit. on p. 3).

[AS72] Daniel Alspach and Harold Sorenson.
"Nonlinear Bayesian estimation using Gaussian sum approximations". In: IEEE Transactions on Automatic Control 17.4 (1972), pp. 439–448 (cit. on p. 11).

[Axe+19] Brian Axelrod, Ilias Diakonikolas, Alistair Stewart, Anastasios Sidiropoulos, and Gregory Valiant. "A polynomial time algorithm for log-concave maximum likelihood via locally exponential families". In: Advances in Neural Information Processing Systems 32 (2019) (cit. on pp. 1, 17).

[Bac10] Athanassia Bacharoglou. "Approximation of probability distributions by convex mixtures of Gaussian measures". In: Proceedings of the American Mathematical Society 138.7 (2010), pp. 2619–2628 (cit. on p. 11).

[BB06] Mark Bagnoli and Ted Bergstrom. "Log-concave probability and its applications". In: Rationality and Equilibrium. Springer, 2006, pp. 217–241 (cit. on p. 1).

[Bro76] Efim Mikhailovich Bronshtein. "ε-entropy of convex sets and functions". In: Siberian Mathematical Journal 17.3 (1976), pp. 393–398 (cit. on p. 36).

[BS21] Rina Foygel Barber and Richard J. Samworth. "Local continuity of log-concave projection, with applications to estimation under model misspecification". In: Bernoulli 27.4 (2021), pp. 2437–2472 (cit. on pp. 15, 16, 35).

[CD18] José E. Chacón and Tarn Duong. Multivariate Kernel Smoothing and its Applications. Chapman and Hall/CRC, 2018 (cit. on p. 19).

[CDSS18] Timothy Carpenter, Ilias Diakonikolas, Anastasios Sidiropoulos, and Alistair Stewart. "Near-optimal sample complexity bounds for maximum likelihood estimation of multivariate log-concave densities". In: Conference on Learning Theory. PMLR, 2018, pp. 1234–1262 (cit. on pp. 1, 15).

[CS10] Madeleine Cule and Richard Samworth. "Theoretical properties of the log-concave maximum likelihood estimator of a multidimensional density". In: Electronic Journal of Statistics 4 (2010), pp. 254–270 (cit. on pp. 1, 7, 14, 33).

[CSS10] Madeleine Cule, Richard Samworth, and Michael Stewart. "Maximum likelihood estimation of a multi-dimensional log-concave density". In: Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72.5 (2010), pp. 545–607 (cit. on pp. 1, 19).

[DGWR21] Robin Dunn, Aditya Gangrade, Larry Wasserman, and Aaditya Ramdas. "Universal inference meets random projections: a scalable test for log-concavity". In: arXiv preprint arXiv:2111.09254 (2021) (cit. on pp. 1, 14, 18, 23, 33).

[DR11] Lutz Dümbgen and Kaspar Rufibach. "logcondens: Computations related to univariate log-concave density estimation". In: Journal of Statistical Software 39 (2011), pp. 1–28 (cit. on pp. 1, 19).

[FGNV12] Valentina Fedorova, Alex Gammerman, Ilia Nouretdinov, and Vladimir Vovk. "Plug-in martingales for testing exchangeability on-line". In: arXiv preprint arXiv:1204.3251 (2012) (cit. on p. 4).

[GHK19] Peter Grünwald, Rianne de Heide, and Wouter M. Koolen. "Safe testing". In: arXiv preprint arXiv:1906.07801 (2019) (cit. on p. 3).

[GW17] Fuchang Gao and Jon A. Wellner. "Entropy of convex functions on Rd". In: Constructive Approximation 46.3 (2017), pp. 565–592 (cit. on p. 36).

[Han21] Qiyang Han. "Set structured global empirical risk minimizers are rate optimal in general dimensions". In: The Annals of Statistics 49.5 (2021), pp. 2642–2671 (cit. on pp. 15, 17).

[HRMS20] Steven R. Howard, Aaditya Ramdas, Jon McAuliffe, and Jasjeet Sekhon. "Time-uniform Chernoff bounds via nonnegative supermartingales". In: Probability Surveys 17 (2020), pp. 257–317 (cit. on p. 3).
[HW16] Qiyang Han and Jon A. Wellner. "Approximation and estimation of s-concave densities via Rényi divergences". In: The Annals of Statistics 44.3 (2016), p. 1332 (cit. on p. 17).

[KDR19] Gil Kur, Yuval Dagan, and Alexander Rakhlin. "Optimality of maximum likelihood for log-concave density estimation and bounded convex regression". In: arXiv preprint arXiv:1903.05315 (2019) (cit. on pp. 1, 15, 36).

[KKMS08] Adam Tauman Kalai, Adam R. Klivans, Yishay Mansour, and Rocco A. Servedio. "Agnostically learning halfspaces". In: SIAM Journal on Computing 37.6 (2008), pp. 1777–1805 (cit. on p. 1).

[KS16] Arlene K. H. Kim and Richard J. Samworth. "Global rates of convergence in log-concave density estimation". In: The Annals of Statistics 44.6 (2016), pp. 2756–2779 (cit. on pp. 15, 16, 35, 36).

[Lo72] J. Lo. "Finite-dimensional sensor orbits and optimal nonlinear filtering". In: IEEE Transactions on Information Theory 18.5 (1972), pp. 583–588 (cit. on p. 11).

[Mey66] P. A. Meyer. Probability and Potentials. Actualités scientifiques et industrielles. Blaisdell Publishing Company, 1966 (cit. on p. 3).

[PBKR22] Aleksandr Podkopaev, Patrick Blöbaum, Shiva Prasad Kasiviswanathan, and Aaditya Ramdas. "Sequential Kernelized Independence Testing". In: arXiv preprint arXiv:2212.07383 (2022) (cit. on p. 3).

[PLHG22] Muriel Felipe Pérez-Ortiz, Tyron Lardy, Rianne de Heide, and Peter Grünwald. "E-Statistics, Group Invariance and Anytime Valid Testing". In: arXiv preprint arXiv:2208.07610 (2022) (cit. on p. 23).

[RGVS22] Aaditya Ramdas, Peter Grünwald, Vladimir Vovk, and Glenn Shafer. "Game-theoretic statistics and safe anytime-valid inference". In: arXiv preprint arXiv:2210.01948 (2022) (cit. on pp. 3–5).

[RLKR22] Johannes Ruf, Martin Larsson, Wouter M. Koolen, and Aaditya Ramdas. "A composite generalization of Ville's martingale theorem". In: arXiv preprint arXiv:2203.04485 (2022) (cit. on p. 5).

[RRLK20] Aaditya Ramdas, Johannes Ruf, Martin Larsson, and Wouter Koolen. "Admissible anytime-valid sequential inference must rely on nonnegative martingales". In: arXiv preprint arXiv:2009.03167 (2020) (cit. on pp. 3, 4, 6).

[RRLK22] Aaditya Ramdas, Johannes Ruf, Martin Larsson, and Wouter M. Koolen. "Testing exchangeability: Fork-convexity, supermartingales and e-processes". In: International Journal of Approximate Reasoning 141 (2022), pp. 83–109 (cit. on pp. 4, 8, 9).

[RS19] Fabian Rathke and Christoph Schnörr. "Fast multivariate log-concave density estimation". In: Computational Statistics & Data Analysis 140 (2019), pp. 41–58 (cit. on pp. 1, 17, 19).

[Sam18] Richard J. Samworth. "Recent progress in log-concave density estimation". In: Statistical Science 33.4 (2018), pp. 493–509 (cit. on p. 1).

[SFMR16] Luca Scrucca, Michael Fop, T. Brendan Murphy, and Adrian E. Raftery. "mclust 5: clustering, classification and density estimation using Gaussian finite mixture models". In: The R Journal 8.1 (2016), p. 289 (cit. on p. 19).

[SR21] Shubhanshu Shekhar and Aaditya Ramdas. "Nonparametric two-sample testing by betting". In: arXiv preprint arXiv:2112.09162 (2021) (cit. on p. 3).

[SW14] Adrien Saumard and Jon A. Wellner. "Log-concavity and strong log-concavity: a review". In: Statistics Surveys 8 (2014), p. 45 (cit. on pp. 7, 19).

[Vaa94] Aad van der Vaart. "Bracketing smooth functions". In: Stochastic Processes and their Applications 52.1 (1994), pp. 93–105 (cit. on p. 38).

[Vil39] Jean Ville. "Étude critique de la notion de collectif". In: Bull. Amer. Math.
Soc. 45.11 (1939), p. 824 (cit. on p. 3).

[VNG03] Vladimir Vovk, Ilia Nouretdinov, and Alexander Gammerman. "Testing exchangeability on-line". In: Proceedings of the 20th International Conference on Machine Learning (ICML-03). 2003, pp. 768–775 (cit. on p. 4).

[VW96] Aad W. van der Vaart and Jon A. Wellner. "Weak convergence". In: Weak Convergence and Empirical Processes. Springer, 1996, pp. 16–28 (cit. on p. 33).

[WR23] Ian Waudby-Smith and Aaditya Ramdas. "Estimating means of bounded random variables by betting". In: Journal of the Royal Statistical Society (Series B), to appear with discussion (2023) (cit. on p. 3).

[WRB20] Larry Wasserman, Aaditya Ramdas, and Sivaraman Balakrishnan. "Universal inference". In: Proceedings of the National Academy of Sciences 117.29 (2020), pp. 16880–16890 (cit. on pp. 1, 5, 13).

[WS95] Wing Hung Wong and Xiaotong Shen. "Probability inequalities for likelihood ratios and convergence rates of sieve MLEs". In: The Annals of Statistics (1995), pp. 339–362 (cit. on pp. 14, 17, 33, 37).

A Proof of Triviality and Properties of Fork-Convex Hulls

This appendix is devoted to showing the structural lemmata regarding fork-convex hulls, and to discussing technical aspects of our arguments.

A.1 Details on the Local L¹(Γ) Closure

Let us begin by explicitly detailing the notion of convergence implicit in closed fork-convex combinations. Recall that the closed hull is the closure of f-conv(P) with respect to L¹(Γ)-convergence of likelihood ratio processes at every fixed time t. Let us unpack this statement in simple terms. Let Pⁿ be a sequence in f-conv(P) with density processes Zⁿ_t := Z^{Pⁿ}_t. We say that Pⁿ → P if, for every t, it holds that Zⁿ_t → Z_t in L¹(Γ). Since Z_t and Zⁿ_t are F_t-measurable objects, this convergence is simply convergence in L¹(Γ|_t). Stating that the convergence needs to happen at every fixed time t means that this convergence need not be uniform in t: it is fine for Zⁿ_{100} to converge more slowly than Zⁿ_1, for instance. This notion of convergence may be metrised by
$$\Delta(\mathsf{P}, \mathsf{Q}) := \sum_{t \in \mathbb{N}} 2^{-t} \big\| Z^{\mathsf{P}}_t - Z^{\mathsf{Q}}_t \big\|_{L^1(\Gamma)}.$$
We note that Δ is bounded, since
$$\big\| Z^{\mathsf{P}}_t - Z^{\mathsf{Q}}_t \big\|_{L^1(\Gamma)} = \int \big| \mathsf{P}|_t(dx_1^t) - \mathsf{Q}|_t(dx_1^t) \big| \le \int \mathsf{P}|_t(dx_1^t) + \int \mathsf{Q}|_t(dx_1^t) = 2.$$

With this in hand, we first show the following auxiliary claim, which is repeatedly used.

Lemma 16. Let P be a set of sequential laws, and let R be any sequential law. Suppose there exists a sequence of sequential laws {R^T} such that each R^T ∈ f-conv(P), and for all t ≤ T, Z^{R^T}_t = Z^R_t. Then R lies in the closed fork-convex hull of P.

Proof. We claim that R^T → R. Indeed, since Z^{R^T}_t = Z^R_t for all t ≤ T,
$$\Delta(\mathsf{R}^T, \mathsf{R}) \le \sum_{t > T} 2^{-t} \cdot 2 = 2^{-(T-1)}.$$
Thus lim_{T→∞} Δ(R^T, R) = 0, meaning R^T → R. Since the closed fork-convex hull of P includes such limits by definition, the claim is proved.

The above lends significant convenience to our arguments, since it allows us to only construct processes matching some claimed member of the fork-convex hull up to finite times, which is typically easy to do in the arguments below using just finite fork-convex combinations.

A.2 Proofs about the Fork-Convex Hull of Independent Sequential Laws

We may now proceed with the proofs of the lemmata omitted from §3.

Proof of Lemma 4. As detailed in the main text, by taking repeated fork-convex combinations, it follows that R^T ∈ f-conv(P^∞), where
$$\mathsf{R}^1 := P_1^\infty, \qquad \mathsf{R}^T := \big( \mathsf{R}^{T-1} \xrightarrow{T-1,\,0} P_T^\infty \big),$$
and where the validity of the mixture weight 0 exploits the mutual absolute continuity of laws in P. We conclude by Lemma 16.
Proof of Lemma 5. It suffices to show that for all finite k, P_k^∞ lies in the closed fork-convex hull of P^∞, since P_* = ⋃_k P_k and P_k ⊂ P_{k+1} for all k. For T ∈ ℕ and two laws P, Q on ℝ^d, define the sequential law R^{P,Q,T} as the law of an independent sequence {X_t} such that X_t ∼ P for t ≤ T and X_t ∼ Q for t > T, i.e., R^{P,Q,T} = (P^∞ →^{T,0} Q^∞). For T ∈ ℕ, define P_{k,T} as the set of sequential laws of the form R^{P,Q,T} with P ∈ P_k and Q ∈ P.

We first claim that P_{k,T} ⊂ f-conv(P^∞). We show this inductively in k. Fix any T, and observe that trivially P_{1,T} lies in this fork-convex hull. For k ≥ 2, we may represent each P ∈ P_k as P = αP¹ + (1 − α)P² for some α ∈ [0, 1], P¹ ∈ P_{k−1} and P² ∈ P. We need to show that for any such P, and any Q ∈ P, R^{P,Q,T} lies in the fork-convex hull of P^∞. By the induction hypothesis, R^{P¹,Q,T} ∈ f-conv(P^∞) and R^{P²,Q,T} ∈ f-conv(P^∞). Now define the laws
$$S^0 := \mathsf{R}^{P^1,Q,T}, \qquad \widetilde{S}^\tau := \big( S^{\tau-1} \xrightarrow{\tau-1,\,\alpha} \mathsf{R}^{P^2,Q,T} \big), \qquad S^\tau := \big( \widetilde{S}^\tau \xrightarrow{\tau,\,0} \mathsf{R}^{P^1,Q,T} \big).$$
We note that every fork-convex combination above has valid weights since P is m.a.c., and so no density process is ever 0. We claim that S^T = R^{P,Q,T}.

Indeed, let p¹, p², q respectively denote the densities (with respect to the standard Gaussian) of P¹, P², and Q, and let Z¹_t and Z²_t be the density processes of R^{P¹,Q,T} and R^{P²,Q,T} respectively. These can be explicitly evaluated as
$$Z^i_t = \prod_{s \le \min(t,T)} p^i(X_s) \cdot \prod_{s = \min(t,T)+1}^{t} q(X_s),$$
where i ∈ {1, 2}, and we note the convention that for u < v, ∏_{s=v}^{u} · = 1. Observe that for each i, any t₁ < T, and t > t₁, we have
$$\frac{Z^i_t}{Z^i_{t_1}} = \prod_{s = \min(t_1,T)+1}^{\min(t,T)} p^i(X_s) \cdot \prod_{s = \min(t,T)+1}^{t} q(X_s).$$
We shall inductively claim that for each τ, the density process of S^τ satisfies
$$Z^{S^\tau}_t = \prod_{s \le \min(t,\tau)} \big( \alpha p^1(X_s) + (1-\alpha)p^2(X_s) \big) \cdot \prod_{s = \min(t,\tau)+1}^{\min(t,T)} p^1(X_s) \cdot \prod_{s = \min(t,T)+1}^{t} q(X_s).$$
Indeed, the base case is trivial for τ = 0, since S^0 = R^{P¹,Q,T}. Assuming the induction hypothesis for τ, we observe that since S̃^{τ+1} is a fork-convex combination of S^τ and R^{P²,Q,T} at time τ, it shares the density process of S^τ up to time τ, while after that time the density is a mixture of the two density processes, giving
$$Z^{\widetilde{S}^{\tau+1}}_t = \prod_{s \le \min(t,\tau)} \big( \alpha p^1(X_s) + (1-\alpha)p^2(X_s) \big) \times \Bigg[ \alpha \Bigg( \prod_{s=\min(t,\tau)+1}^{\min(t,T)} p^1(X_s) \cdot \prod_{s=\min(t,T)+1}^{t} q(X_s) \Bigg) + (1-\alpha) \Bigg( \prod_{s=\min(t,\tau)+1}^{\min(t,T)} p^2(X_s) \cdot \prod_{s=\min(t,T)+1}^{t} q(X_s) \Bigg) \Bigg],$$
where we have used the behaviour of Z^i_t / Z^i_τ above for t ≥ τ + 1.

Finally, S^{τ+1} mixes the above with R^{P¹,Q,T} at time τ + 1 with a mixture weight of 0. This means that the suffix law of S^{τ+1} beyond time τ + 2 is exactly equal to that of R^{P¹,Q,T}, while the prefix up to time τ + 1 is left alone. In other words,
$$Z^{S^{\tau+1}}_t = \prod_{s \le \min(t,\tau)} \big( \alpha p^1(X_s) + (1-\alpha)p^2(X_s) \big) \cdot \prod_{s=\min(t,\tau)+1}^{\min(t,\tau+1)} \big( \alpha p^1(X_s) + (1-\alpha)p^2(X_s) \big) \cdot \prod_{s=\min(t,\tau+1)+1}^{\min(t,T)} p^1(X_s) \cdot \prod_{s=\min(t,T)+1}^{t} q(X_s).$$
The claim follows upon noticing that the first two products merge into ∏_{s ≤ min(t,τ+1)} (αp¹(X_s) + (1 − α)p²(X_s)).

With this in hand, the argument is straightforward. For any element P ∈ P_k^∞, there exists a member of P_{k,T}, say P^T, whose density process matches that of P up to time T. Applying Lemma 16 immediately yields the claim.

Proof of Lemma 6. Let P = ⊗{P_t} for an arbitrary sequence of P_t ∈ P̄. We need to show that P lies in the closed fork-convex hull of ⊗P.
But, since P_t ∈ P̄ for each t, for each t there further exist sequences {Pⁿ_t}_{n∈ℕ}, with each Pⁿ_t ∈ P, such that Pⁿ_t → P_t in L¹(Γ). Let Q := {Pⁿ_t : t, n ∈ ℕ}. We note that ⊗Q ⊂ ⊗P implies that the closed fork-convex hull of ⊗Q is contained in that of ⊗P. Let Q denote the closed fork-convex hull of ⊗Q. We shall argue that P ∈ Q.

Let P^T be the sequential law with density process
$$Z^{\mathsf{P}^T}_t = \begin{cases} \prod_{s \le t} p_s(X_s) & t \le T, \\ \prod_{s \le T} p_s(X_s) \cdot \prod_{T < s \le t} p^1_s(X_s) & t > T. \end{cases}$$
If we can show that P^T ∈ Q for each T, then the claim will follow, since P^T → ⊗{P_t} as in the argument of Lemma 16, and since Q is closed under the relevant notion of convergence.

We show this inductively. Let P^{1,n} be the sequential law with density process Z^{1,n}_t := pⁿ_1(X_1) · ∏_{1 < s ≤ t} p¹_s(X_s). Notice that P^{1,n} ∈ ⊗Q ⊂ Q for every n. Further,
$$\Delta(\mathsf{P}^{1,n}, \mathsf{P}^1) \le \big\| Z^{1,n}_1 - Z^{\mathsf{P}^1}_1 \big\|_{L^1(\Gamma)} = \|P^n_1 - P_1\|_{L^1(\Gamma)} \to 0.$$
Thus P¹ ∈ Q.

Now suppose that P^{T−1} ∈ Q for some T ≥ 2. For n ∈ ℕ, define Qⁿ as the independent sequential law whose t-th marginal is P¹_t for t ≠ T and Pⁿ_T for t = T, i.e., with density process
$$Z^{\mathsf{Q}^n}_t := \prod_{\substack{s \le t \\ s \ne T}} p^1_s(X_s) \cdot \big( p^n_T(X_T) \big)^{\mathbf{1}\{t \ge T\}}.$$
It trivially follows that Qⁿ ∈ ⊗Q ⊂ Q for all n. Now define
$$\mathsf{P}^{T,n} = \big( \mathsf{P}^{T-1} \xrightarrow{T-1,\,0} \mathsf{Q}^n \big),$$
which is valid since each Pⁿ_t and P_t are mutually absolutely continuous. But Z^{P^{T,n}}_t = Z^{P^{T−1}}_t = Z^{P^T}_t for t ≤ T − 1, and for t ≥ T,
$$Z^{\mathsf{P}^{T,n}}_t - Z^{\mathsf{P}^T}_t = Z^{\mathsf{P}^T}_{T-1} \cdot \big( p^n_T(X_T) - p_T(X_T) \big) \cdot \prod_{s = T+1}^{t} p^1_s(X_s).$$
It follows that
$$\big\| Z^{\mathsf{P}^{T,n}}_t - Z^{\mathsf{P}^T}_t \big\|_{L^1(\Gamma)} = \begin{cases} 0 & t < T, \\ \|P^n_T - P_T\|_{L^1(\Gamma)} & t \ge T, \end{cases}$$
and therefore Δ(P^{T,n}, P^T) ≤ ‖Pⁿ_T − P_T‖_{L¹(Γ)} → 0. By closedness of Q, we conclude that P^T ∈ Q.

A.3 Proof of Lemma 8

Proof of Lemma 8. Fix an m ∈ ℕ. Since E has positive mass and is measurable, there exists an open set O ⊂ (ℝ^d)^t such that O ⊃ E and Leb_{dt}(O) ≤ (1 + 1/m) Leb_{dt}(E). Observe here that 'most' of the mass of O lies within E.

Since O is open, there exists a sequence of disjoint open rectangles R_i in (ℝ^d)^t such that ⋃ R_i ⊂ O lies within the closure of ⋃ R_i, and
$$\mathrm{Leb}_{dt}\Big( \bigcup R_i \Big) = \sum_i \mathrm{Leb}_{dt}(R_i) = \mathrm{Leb}_{dt}(O).$$
Further, since most of the mass of O lies in E, we conclude that there exists at least one i such that
$$\mathrm{Leb}_{dt}(R_i) > 0 \quad\text{and}\quad \mathrm{Leb}_{dt}(E \cap R_i) \ge \frac{m}{m+1}\, \mathrm{Leb}_{dt}(R_i).$$
Indeed, otherwise we would have
$$\mathrm{Leb}_{dt}(E) = \mathrm{Leb}_{dt}(E \cap O) = \sum_i \mathrm{Leb}_{dt}(E \cap R_i) < \frac{m}{m+1} \sum_i \mathrm{Leb}_{dt}(R_i) \le \frac{m}{m+1} \cdot \frac{m+1}{m}\, \mathrm{Leb}_{dt}(E),$$
which is impossible.

A.4 Technical Aspects of Fork-Convex Hulls and Our Triviality Argument

We comment on some technical aspects of the argument underlying the non-existence of nontrivial NSMs. Specifically, we discuss the necessity of our definition of nontriviality and of the m.a.c. condition repeatedly used in the argument, how the argument can be extended to consider log-concave laws over bounded sets, and finally the issues that arise when one tries to relax the definition of fork-convex combinations to handle support mismatch.

Going beyond almost sure triviality. The main text defines trivial NSMs (and NMs) as those that are Γ-almost surely non-increasing (respectively, constant). Could one instead show that there are no nontrivial L^∞-NSMs in the stronger sense that such processes must be non-increasing (as opposed to only almost surely non-increasing)? This turns out to be impossible, as witnessed by the following process (with the convention 1/0 = +∞):
$$M_t := \frac{1}{1 - \mathbf{1}\{\exists\, t_1 \ne t_2,\ t_3 \ne t_4 \in [1:t] : X_{t_1} = X_{t_2},\ X_{t_3} = X_{t_4},\ X_{t_1} \ne X_{t_3}\}}.$$
Since log-concave measures can have at most one atom (due to unimodality), it follows that {M_t} is an L^∞-martingale (indeed, it is almost surely just the constant 1, as stated by the theorem). However, M_t does diverge to ∞, and this occurs almost surely against any i.i.d.
sequential law which has at least two atoms, for instance, a coin-flip process. This means that while it may not be possible to reject processes with a Lebesgue density using test martingales, it is possible to reject atomic processes. Structurally, this example has to do with the fact that one cannot approach point masses in an L¹ sense using measures with density. Therefore, although L^∞-NSMs must also be NSMs for independent processes with densities, this does not extend to sequences drawn from distributions with atoms. In another sense, this issue is the same as the problem discussed below regarding the loss of the NSM property under extensions of fork-convex combinations of laws with support mismatch, in that two laws with distinct single atoms each have parts that are mutually singular.

The role of the mutual absolute continuity condition on P. The definition of fork-convex combinations of two laws P and Q at time s involves the ratio of density processes Z^P_s / Z^Q_s. This ratio must indeed appear, as can be seen from the algorithmic viewpoint of §3, to account for the fact that if R is the fork-convex combination, then the prefix law R|_s = P|_s. However, if Z^Q_s = 0, i.e., if for {X_t} ∼ R the prefix X_1^s lies in a set that is almost surely impossible under Q, then the above ratio is meaningless. This observation underlies the condition that if Z^Q_s = 0, then the mixture weight h must be exactly 1.

Our argument ultimately asserts that any law of the form ⊗{P_t} lies in the closed fork-convex hull of P^∞. However, our constructions demonstrating this fact rely on setting h = 0 in order to generate switches between different laws in P. Our assumption of mutual absolute continuity enables precisely this flexibility without running into the issue discussed in the previous paragraph.

The role of Gaussians in our argument. Since we used the density of the Gaussians in order to show that L^∞-NSMs must also be ⊗D-NSMs, it behooves us to ask how essential L ⊃ G is to the main point of the result.⁴ In the argument, Gaussians play two roles: firstly, since all Gaussians are supported on the entirety of the domain, this class is m.a.c., and we can flexibly take fork-convex combinations. Secondly, the triviality of Gaussian NSMs follows since mixtures of Gaussians are L¹-dense in the set of densities on the reals. Any subset of L that satisfies these two properties will suffice for our purposes.

⁴ Notwithstanding that the result is interesting in its own right for Gaussians, which tells us that there is a simple, and very natural, parametric family that cannot be tested via nonnegative supermartingales.

Extending the argument to log-concave laws on subsets of ℝ^d. We finally observe that our argument extends in a straightforward manner to log-concave laws on restricted subsets: for a bounded convex set K, define L_K to be the log-concave densities supported on K. Then all L_K^∞-NSMs are trivial, in the sense that they are almost surely nonincreasing with respect to the reference measure (Unif(K))^∞. This follows because truncated Gaussians are again dense, and are supported on the entirety of the domain K.

To see this, first observe that if γ := ∑ α_i φ_i is a mixture of Gaussians, then for any K of nonzero Lebesgue mass, the truncation γ|_K is also a mixture of truncated Gaussians. Indeed, define θ_i = ∫_K φ_i. Then
$$\gamma|_K(x) = \sum_i \frac{\alpha_i}{\sum_j \alpha_j \theta_j}\, \varphi_i(x)\, \mathbf{1}\{x \in K\} = \sum_i \frac{\alpha_i \theta_i}{\sum_j \alpha_j \theta_j}\, \varphi_i|_K(x).$$
Now, let p be any density supported on K, and let γⁿ → p be a sequence of mixtures of Gaussians converging so that d_n := ∫ |p − γⁿ| → 0. Then, defining π_n = ∫_{K^c} γⁿ, we have
$$\int \big| p - \gamma^n|_K \big| = \int_K \frac{|p(1 - \pi_n) - \gamma^n|}{1 - \pi_n} \le \int_K \frac{|p - \gamma^n|}{1 - \pi_n} + \int_K \frac{\pi_n\, p}{1 - \pi_n} \le \frac{\pi_n + \int |p - \gamma^n|}{1 - \pi_n}.$$
Further, since p is supported on K, we have π_n = ∫_{K^c} γⁿ = ∫_{K^c} |p − γⁿ| ≤ d_n. Therefore,
$$\mathrm{TV}(p, \gamma^n|_K) \le \frac{2 d_n}{1 - d_n} \to 0.$$
But this means that we can run the entire argument of §3 with Gaussians truncated to K, and draw the same conclusion.

Can we extend nontrivial fork-convex combinations to all laws? As we discussed above, due to the "Z^Q_T = 0 ⟹ h = 1" condition in the definition of fork-convex combinations, it is not possible to take arbitrary fork-convex combinations between sequential laws. In the extreme case of P = P^∞ and Q = Q^∞ for P, Q with disjoint supports, the only possible fork-convex combinations are mixtures of the form αP^∞ + (1 − α)Q^∞. While this technicality did not pose a serious issue for the current paper, this situation is quite unsatisfying in general. After all, the algorithmic view of fork-convex combinations is very natural, and extends to such disjoint-support situations easily.

One can formalise this algorithmic picture by exploiting conditional densities. For a sequence of (appropriately measurable) maps k^P_t : (ℝ^d)^{t−1} × ℝ^d → ℝ_{≥0}, denoted k^P_t(x_t | x_1^{t−1}), we say that {k^P_t} is the conditional density process of P if for each x_1^{t−1}, k_t(· | x_1^{t−1}) is a density with respect to Γ, and for any t and A ∈ F_t,
$$\mathsf{P}(X_1^t \in A) = \int_A \prod_{s \le t} k_s(x_s | x_1^{s-1})\, \Gamma(dx_1^t).$$
More generally, we can define a similar notion via Markov kernels. We observe that, by definition, if P has a conditional density process, then for any t and Γ-almost every x_1^t,
$$Z^{\mathsf{P}}_t(x_1^t) = \prod_{s \le t} k_s(x_s | x_1^{s-1}).$$

Using the above characterisation, we can give the following natural extended definition of fork-convex combinations: for two sequential laws P, Q with conditional density processes {k^P_t}, {k^Q_t} respectively, a law R is a fork-convex combination of P and Q at time T with F_T-measurable weight h if
$$Z^{\mathsf{R}}_t = \begin{cases} \prod_{s \le t} k^{\mathsf{P}}_s(x_s | x_1^{s-1}) & t \le T, \\ \prod_{s \le T} k^{\mathsf{P}}_s(x_s | x_1^{s-1}) \cdot \Big( h \prod_{s=T+1}^{t} k^{\mathsf{P}}_s(x_s | x_1^{s-1}) + (1-h) \prod_{s=T+1}^{t} k^{\mathsf{Q}}_s(x_s | x_1^{s-1}) \Big) & t > T, \end{cases} \qquad (4)$$
the difference being that we now do not impose the restriction that h = 1 if Z^Q_T = 0. Simplistically, this is possible since we are never dividing by the potentially null Z^Q_T; more formally, we are considering the formal ratio Z^Q_t / Z^Q_T, interpreted in the natural way as ∏ k^Q_s(x_s | x_1^{s−1}). The above extended definition generalises our previous definition of fork-convex combinations, and we can extend the same to the fork-convex hull and its closure.

While the density process above is a perfectly sound mathematical object, such an extension is not fruitful, because it fails in general to preserve the NSM property under these extended fork-convex combinations. To illustrate why the extended definition fails to maintain the NSM property (unlike the restricted one used in the paper), consider the following example.

Example 1. Let P = (Unif(0, 1))^∞ and Q = (Unif(1, 2))^∞, and consider the process
$$M_t := \begin{cases} 2 & \exists\, s_1, s_2 \le t : X_{s_1} \in (0,1),\ X_{s_2} \in (1,2), \\ 1 & \text{otherwise.} \end{cases}$$
This process is an NSM (indeed, a martingale) under both P and Q.
However, under any nontrivial fork-convex combination of these two laws, this process must start at 1 and, with positive probability, grow to 2 but never fall, and thus cannot be a supermartingale.

Under the hood, the issue in the example above arises from the fact that under the extended definition, for t ≥ T + 1, {Z^R_t > 0} = {Z^P_t > 0} ∪ {Z^P_T > 0, ∏_{s=T+1}^{t} k^Q_s(X_s|X_1^{s−1}) > 0}, but the NSM property of {M_t} under P or Q only controls the conditional expectations of M_t Z^P_t 1{Z^P_t > 0} and M_t Z^Q_t 1{Z^Q_t > 0} under Γ, which leaves the conditional behaviour of M_t Z^R_t uncontrolled when R places mass on events that are null under one of these laws.

It should be noted that in the above example there is a version of the process {M_t}, i.e., a process {M̃_t} such that P(∀t, M_t = M̃_t) = Q(∀t, M_t = M̃_t) = 1, such that {M̃_t} is a martingale even under extended fork-convex combinations; concretely, this process is just M̃_t ≡ 1. One may thus wonder if this phenomenon holds in greater generality: is it the case that if {M_t} is an NSM under P and Q, then there is a version {M̃_t} of it (under P and Q) that is an NSM against any extended fork-convex combination of P and Q, without the restriction "Z^Q_T = 0 ⟹ h = 1"? This too turns out to be impossible in general, as demonstrated by the following example.

Example 2. Let P = Unif(0, 1)^∞ and Q = Unif(0, 1/2)^∞. Define ρ_t = 1{X_t ∈ (0, 1/2)} for t ≥ 1, and ρ₀ = 0. Let {N_t} be the adapted process
$$N_t = \begin{cases} 1 & \rho_{t-1} = 1, \\ 3/2 & \rho_{t-1} = 0,\ \rho_t = 1, \\ 1/2 & \rho_{t-1} = 0,\ \rho_t = 0, \end{cases}$$
and finally define M_t = ∏_{s≤t} N_s. It is easy to check that {M_t} is an NM under both P and Q.

Now suppose R is an extended fork-convex combination of P and Q at time T ≥ 1 with mixture weight h < 1. This means that, with probability 1 − h, it holds that X_t ∈ (0, 1/2) with certainty for all t ≥ T + 1. As a result, we can explicitly compute
$$\mathbb{E}[N_{T+1} \,|\, \mathcal{F}_T] = \rho_T + (1 - \rho_T)\Big( (1-h)\cdot \tfrac{3}{2} + h\big( \tfrac{1}{2}\cdot\tfrac{3}{2} + \tfrac{1}{2}\cdot\tfrac{1}{2} \big) \Big) = \begin{cases} 1 & \rho_T = 1, \\ 1 + (1-h)/2 & \rho_T = 0, \end{cases}$$
and so, as long as h < 1, E[N_{T+1} | F_T] > 1 if ρ_T = 0, and therefore {M_t} violates the NSM property under R at time T + 1. Note here that it is hard to construct any nontrivially different version of the above process, since the law P dominates that of Q.

In light of the above discussion, generalised definitions of fork-convex combinations are at loggerheads with maintaining the NSM property under these combinations. Of course, since our purpose in using fork-convexity is to assert the triviality of NSMs over large classes of sequential laws, this latter property is essential to maintain for such statistical applications. At the same time, while the restricted original definition does maintain the NSM property, the included restriction is unsatisfying, and in conflict with the algorithmic intuition underlying the idea of these combinations. Finding an appropriate generalised definition of fork-convex combinations that abstains from imposing these support conditions, but nevertheless retains the NSM property under such combinations, is an interesting, and challenging, question left for future work.

B Proofs of Consistency and Power Analysis

Recall the notation σ_t(P) := ∑_{s≤t} log p(X_s) − log p̂_t(X_s). The main arguments of this section control the behaviour of σ_t(P), in particular arguing that if the Hellinger distance of P from log-concavity is large, then
In light of the above discussion, generalised definitions of fork-convex combinations are at loggerheads with the preservation of the NSM property under these combinations. Of course, since our purpose in using fork-convexity is to assert the triviality of NSMs over large classes of sequential laws, this latter property is essential for such statistical applications. At the same time, while the restricted original definition does preserve the NSM property, the restriction it includes is unsatisfying, and in conflict with the algorithmic intuition underlying these combinations. Finding an appropriately general definition of fork-convex combinations that abstains from imposing such support conditions, but nevertheless retains closure with respect to the NSM property, is an interesting, and challenging, question left for future work.

B Proofs of Consistency and Power Analysis

Recall the notation $\sigma_t(P) := \sum_{s \le t} \log p(X_s) - \log \hat p_t(X_s)$. The main arguments of this section control the behaviour of $\sigma_t(P)$, in particular arguing that if the Hellinger distance of $P$ from log-concavity is large, then $\sigma_t(P)$ must eventually grow linearly. We show this in the asymptotic and nonasymptotic regimes in §B.1 and §B.2 respectively.

Corollary 11 and Corollary 15 each rely on further control of the behaviour of $\rho_t(\mathcal{E}; p) = \sum_{s \le t} \log p(X_s) - \log \hat q_{s-1}(X_s)$ when $p$ is a bounded Lipschitz law on the unit box. This argument is deferred to §B.3.

B.1 Proof of Consistency

Our arguments rely on the following bracketing tail estimate, developed by Wong and Shen to analyse the behaviour of sieve-based maximum likelihood estimates [WS95, Thm. 1]. The estimate involves the bracketing entropy of a class of laws $\mathcal{Q}$ under the Hellinger metric. We refer the reader to the text of Van der Vaart and Wellner [VW96] for a thorough introduction, and give a brief account here.

A bracket $[u, v]$ is defined by two functions $u(x), v(x)$ such that $u(x) \le v(x)$ for all $x$, and consists of the set of all functions $f$ such that $u(x) \le f(x) \le v(x)$ for all $x$. Since we shall only be interested in functions that are densities, we may restrict attention to nonnegative functions. The Hellinger size of such a bracket $[u, v]$ is defined as $|[u,v]| = \|\sqrt u - \sqrt v\|_2 / 2$. We say that a class of distributions $\mathcal{Q}$ is bracketed by $\{[u_i, v_i]\}_{i=1}^N$ if for every $Q \in \mathcal{Q}$ there exists an $i$ such that $q \in [u_i, v_i]$, where we recall that for a distribution $Q$ we denote its density by $q$. Note that such bracketings are typically "improper", i.e., $u_i, v_i$ will generally not lie in $\mathcal{Q}$ (because $q$ integrates to one, so its lower bracket $u$ will integrate to less than one, and its upper bracket $v$ to more than one). The Hellinger bracketing entropy of $\mathcal{Q}$ at scale $\zeta$ is the logarithm of the size of the most parsimonious bracketing of $\mathcal{Q}$ by brackets of size at most $\zeta$, i.e.,
\[
H_{[]}(\mathcal{Q}, \zeta) := \inf\{\log N : \mathcal{Q} \text{ has an } N\text{-sized Hellinger bracketing at scale } \zeta\}.
\]
Note, of course, that bracketing entropies are nonincreasing in $\zeta$.

Lemma 17. (Simplification of [WS95, Thm. 1]) For a class of distributions $\mathcal{Q}$ and a natural number $t$, define $\varepsilon_t$ as the smallest number $\varepsilon$ such that
\[
\int_{\varepsilon^2/2^8}^{\sqrt 2\, \varepsilon} \sqrt{H_{[]}(\mathcal{Q}, \zeta/10)}\, d\zeta \le 2^{-11}\sqrt t\, \varepsilon^2.
\]
For every $t$ and $\varepsilon \ge \varepsilon_t$, it holds for any law $P$ such that $d_H(P, \mathcal{Q}) \ge \varepsilon$ that
\[
P^{\otimes t}\Bigg( \inf_{q \in \mathcal{Q}} \sum_{s \le t} \log p(X_s) - \log q(X_s) \le t\varepsilon^2/24 \Bigg) \le 4\exp\big(-Ct\varepsilon^2\big),
\]
where $C > 2^{-14}$ is a constant.

Informally, if $P$ is far enough from $\mathcal{Q}$ in the Hellinger metric (where "far enough" is determined by the bracketing entropy of $\mathcal{Q}$), then it is exponentially unlikely (in the sample size) that the maximum log-likelihood under $\mathcal{Q}$ is linearly close to the log-likelihood under $P$. Exploiting this observation in our context requires us to argue that, eventually, the log-concave MLE $\hat p_t$ must lie in a set with small entropy. To this end, we appeal to the following result due to Dunn et al., which extends the convergence analysis of Cule and Samworth [CS10].

Lemma 18. [DGWR21, Lem. 1] Consider any distribution $P \in \mathcal{D}_1$, not necessarily log-concave. For any $\eta > 0$, there exists a bracket $[u_\eta, v_\eta]$ of size at most $\eta$ that contains the log-concave projection $\mathcal{L}P$, and eventually also contains the log-concave MLE $\hat p_t$: $P^\infty(\exists t_0 : \forall t \ge t_0,\ \hat p_t \in [u_\eta, v_\eta]) = 1$.

In words, the lemma states that for all large enough $t$, the log-concave MLE $\hat p_t$ is certain to lie in a very small bracket around the log-concave projection $\mathcal{L}P$ of the true distribution $P$.
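To make the bracketing notions above concrete, the following small numerical sketch (ours, illustrative) computes the Hellinger size of a simple multiplicative bracket around a standard Gaussian density, and checks pointwise containment of a slightly shifted density on a truncated grid.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

def gauss(x, mu=0.0):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

eta = 0.05
phi = gauss(x)
u, v = (1 - eta) * phi, (1 + eta) * phi     # a simple (improper) bracket [u, v]

# Hellinger size |[u, v]| = ||sqrt(u) - sqrt(v)||_2 / 2; for this bracket it is
# (sqrt(1 + eta) - sqrt(1 - eta)) / 2, approximately eta / 2 = 0.025.
size = 0.5 * np.sqrt(np.sum((np.sqrt(u) - np.sqrt(v)) ** 2) * dx)
print("bracket size:", size)

# A Gaussian with a tiny mean shift lies inside [u, v] on this truncated grid;
# on the whole real line the ratio f / phi = exp(0.001 x - 5e-7) eventually
# exceeds 1 + eta, so bracketing constructions must handle tails separately.
f = gauss(x, mu=0.001)
print("contained on grid:", bool(np.all((u <= f) & (f <= v))))
```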
With this in hand, we are in a position to prove Lemma 12, the main statement underlying the proof of Theorem 10.

Proof of Lemma 12. Let $\varepsilon := d_H(P, \mathcal{L}P) > 0$, and define $\eta_\varepsilon = \varepsilon^2/2^{11}$. Using Lemma 18, we know that there exists a bracket $[u^*, v^*]$ such that $|[u^*, v^*]| \le \eta_\varepsilon$ and, almost surely, $\hat p_t \in [u^*, v^*]$ for all large enough $t$. But observe that $H_{[]}([u^*, v^*], \varepsilon^2/2^{11}) = 0$, since the size of $[u^*, v^*]$ is already at most $\eta_\varepsilon$. Further, since $\mathcal{L}P \in [u^*, v^*]$, by the triangle inequality,
\[
d_H(P, [u^*, v^*]) = \inf_{Q \in [u^*, v^*]} d_H(P, Q) \ge d_H(P, \mathcal{L}P) - |[u^*, v^*]| \ge \varepsilon(1 - 2^{-11}) \ge \varepsilon \sqrt{24/25}.
\]
Let us define
\[
\widetilde\sigma_t(P) := \inf_{q \in [u^*, v^*]} \sum_{s \le t} \log p(X_s) - \log q(X_s).
\]
Exploiting the above observations, Lemma 17 (applied at scale $\varepsilon\sqrt{24/25}$) yields that for every $t$,
\[
P^\infty\big(\widetilde\sigma_t(P) \le t\varepsilon^2/25\big) \le 4\exp\big(-Ct\varepsilon^2\big).
\]
Note further that if $\hat p_t \in [u^*, v^*]$, then, since $\hat p_t$ is a maximum likelihood estimate, it must hold that $\sigma_t(P) \ge \widetilde\sigma_t(P)$. Let $E_s := \{\forall t \ge s,\ \hat p_t \in [u^*, v^*]\}$ be the event that $\hat p_t$ lies in the small bracket from time $s$ onwards, and let $A_t := \{\sigma_t(P)/t\varepsilon^2 \ge 1/25\}$ be the event that $\sigma_t(P)$ is at least $t\varepsilon^2/25$.

By the above display, for every fixed time $s$ and every $t \ge s$,
\[
P^\infty(A^c_t \cap E_s) \le 4\exp\big(-Ct\varepsilon^2\big),
\]
and since this upper bound is summable in $t$, the Borel–Cantelli lemma gives
\[
0 = P^\infty\Big(\limsup_t\, (A^c_t \cap E_s)\Big) = P^\infty\Big(\big(\limsup_t A^c_t\big) \cap E_s\Big),
\]
and so, for any time $s$,
\[
P^\infty\big(\limsup_t A^c_t\big) \le P^\infty\big((\limsup_t A^c_t) \cap E_s\big) + P^\infty(E^c_s) = P^\infty(E^c_s).
\]
By Lemma 18, $\hat p_t$ must eventually lie in $[u^*, v^*]$ almost surely, so $\lim_{s\to\infty} P^\infty(E^c_s) = 0$. Further, notice that
\[
\limsup_t A^c_t = \{\sigma_t(P)/t\varepsilon^2 < 1/25 \text{ infinitely often}\} = \{\liminf_t \sigma_t(P)/t\varepsilon^2 < 1/25\}.
\]
Putting these observations together, we conclude upon sending $s \to \infty$ that
\[
P^\infty\big(\liminf_t \sigma_t(P)/t\varepsilon^2 < 1/25\big) \le \lim_{s\to\infty} P^\infty(E^c_s) = 0.
\]

B.2 Proofs Underlying the Power Analysis

We begin by stating the key lemmata underlying our argument, which combine our bracketing entropy control from Lemma 13 with results from the literature bounding the maximum value attained by a log-concave density, in order to make that control effective. We then prove the main result, and conclude by proving Lemma 13.

B.2.1 Controlling the Maximum Value Attained by the Log-Concave MLE

The rate analysis quantitatively exploits Lemma 17. To do so, we first need bracketing entropy bounds for log-concave laws, which is precisely the subject of Lemma 13. We recall that this lemma controls the bracketing entropy of the class $\mathcal{L}_{d,B}$ of log-concave laws with densities supported on $[-1,1]^d$ and bounded above by $B$, showing that
\[
H_{[]}(\mathcal{L}_{d,B}, \zeta) = \widetilde O\big((B/\zeta)^{\max(d/2,\, d-1)}\big).
\]
The role of $B$ above is quantitatively unimportant so long as this constant does not scale with the relevant parameters. This is assured for log-concave laws with near-identity covariance. Intuitively, since the covariance is lower bounded in all directions, such laws cannot concentrate too much, and thus the value of the density at the mode cannot be too large. This observation is encapsulated in the following result, which follows directly from the work of Kim and Samworth.

Lemma 19. [KS16, Cor. 6] Let $\mathcal{L}^\gamma_d$ denote the set of log-concave laws supported on $[-1,1]^d$ with covariances lower bounded in the positive semidefinite order by $\gamma I$. Then there exists a dimension-dependent constant $C_d$ such that for any $f \in \mathcal{L}^\gamma_d$,
\[
\max_{x \in [-1,1]^d} f(x) \le \gamma^{-d/2} C_d.
\]
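As a quick illustration of the scaling in Lemma 19 (ours, for $d = 1$, with untruncated densities standing in for laws on $[-1,1]$): for common log-concave families on the line, the mode height grows as $\gamma^{-1/2}$ as the variance $\gamma$ shrinks.

```python
import numpy as np

# Mode heights of one-dimensional log-concave densities with variance gamma:
# in each family, (peak height) * sqrt(gamma) is constant in gamma, i.e.
# max f = C * gamma^{-1/2}, matching the d = 1 case of Lemma 19.
for gamma in [1.0, 0.25, 0.04]:
    gauss_peak = 1.0 / np.sqrt(2 * np.pi * gamma)    # N(0, gamma)
    laplace_peak = 1.0 / np.sqrt(2 * gamma)          # Laplace with variance gamma
    unif_peak = 1.0 / np.sqrt(12 * gamma)            # Uniform with variance gamma
    print(gamma, gauss_peak * np.sqrt(gamma),
          laplace_peak * np.sqrt(gamma), unif_peak * np.sqrt(gamma))
```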
Of course, our bounds in Theorem 14 depend on $\Delta_P$, which, roughly speaking, controls only the covariance of the underlying law $P$. The relevance of this quantity arises from the following observation, due to Barber and Samworth.

Lemma 20. [BS21, Cor. 8] Let $P \in \mathcal{D}_1$ be a law supported on $[-1,1]^d$ such that
\[
\Delta_P := \min_{v : \|v\| = 1} E_p\big[|\langle v, X - E_p[X]\rangle|\big] > 0.
\]
Then there exists a dimension-dependent constant $c_d$ such that $\mathrm{Cov}(\mathcal{L}P) \succeq c_d \Delta_P^2 I$. Further, there exists a dimension-independent constant $C$ such that for any $t \ge 2Cd^3/\Delta_P^2$, it holds with probability at least $1 - 2\exp\big(-Ct\Delta_P^2/d^2\big)$ that $\mathrm{Cov}(\hat p_t) \succeq \frac{c_d \Delta_P^2}{4} I$ for the log-concave MLE $\hat p_t$.

Proof of Lemma 20. The first claim is a direct restatement of Lemma 7 of Barber and Samworth. The second statement follows from the fact that, over $v : \|v\| = 1$, the map $v \mapsto \langle v, X - E_P[X]\rangle$ is bounded by $2\sqrt d$ and is clearly continuous in $v$. Thus, exploiting standard subGaussian concentration results over the unit ball, it follows that with probability at least $1 - 2\exp\big(-Ct\Delta_P^2/d^2\big)$, the empirical law $p_t = \frac1t \sum_{s\le t}\delta_{X_s}$ satisfies
\[
\min_{v:\|v\|=1} E_{p_t}\big[|\langle v, X - E_{p_t}[X]\rangle|\big] \ge \Delta_P/2.
\]
But notice that $\hat p_t = \mathcal{L}p_t$, from which the claim follows by the first part.

Merging Lemmas 19 and 20 immediately yields the following observation, which serves as a concrete bound for the scale of $B$ that we need to employ in Lemma 13.

Lemma 21. There exists a constant $C_d$ depending only on $d$ such that for any $t \ge 2C_d d^3/\Delta_P^2$, it holds with probability at least $1 - 2\exp\big(-C_d t\Delta_P^2/d^2\big)$ that
\[
\max_{x \in [-1,1]^d} \hat p_t(x) \le C_d \Delta_P^{-d}.
\]

Proof. Employing Lemma 19, we observe that $\{\mathrm{Cov}(\hat p_t) \succeq (c_d\Delta_P^2/4)\, I\} \subseteq \{\max \hat p_t \le C_d (c_d\Delta_P^2/4)^{-d/2}\}$, and the former event has probability at least $1 - 2\exp\big(-tC\Delta_P^2/d^2\big)$ for $t \ge 2Cd^3/\Delta_P^2$ by Lemma 20. The claim follows upon adjusting constants, say taking the new $C_d$ to be $\max(C, C_d(c_d/4)^{-d/2})$.

B.2.2 Proof of Bounds on Rejection Times

With the above in hand, we may proceed to the main argument.

Proof of Theorem 14. Recall the definition $\sigma_t := \sum_{s \le t} \log p(X_s) - \log \hat p_t(X_s)$. We shall first lower bound $\sigma_t$ with high probability.

Let $B$ be a quantity that we will choose later. Let $\varepsilon_t$ denote the solution of the fixed-point equation from Lemma 17, instantiated with the bracketing entropy of $\mathcal{L}_{d,B}$, and define the event
\[
E_t := \{\hat p_t \in \mathcal{L}_{d,B}\}.
\]
For any $t$ such that $\varepsilon_t \le d_H(p, \mathcal{L})$, on the event $E_t$ the estimate $\hat p_t$ belongs to the class over which the infimum below is taken, whence Lemma 17 yields that
\[
\sigma_t \ge \inf_{q \in \mathcal{L}_{d,B} :\, d_H(p,q) \ge d_H(p,\mathcal{L})} \sum_{s \le t} \log \frac{p(X_s)}{q(X_s)} \ge \frac{t\, d_H^2(p,\mathcal{L})}{24} \tag{5}
\]
with probability at least $1 - 4\exp\big(-Ct\, d_H^2(p,\mathcal{L})\big) - P^\infty(E^c_t)$.

In the rest of the proof, we determine the range of $t$ that ensures the condition $\varepsilon_t \le d_H(p,\mathcal{L})$ is met and, at the same time, control $P^\infty(E^c_t)$. To this end, we deploy Lemma 13. First, observe that for $d \ge 3$ and any positive constants $c$ and $C$,
\[
\int_{c\varepsilon^2}^{C\varepsilon} \widetilde O\big((B/\zeta)^{(d-1)/2}\big)\, d\zeta = \widetilde O\big(B^{(d-1)/2}\varepsilon^{-(d-3)}\big),
\]
where the integrand is the square root of the entropy bound. Note that polylogarithmic terms do not affect the leading growth of the integral; this can be seen by iterating the relation
\[
\int x^n \log^m x\, dx = \frac{x^{n+1}\log^m x}{n+1} - \frac{m}{n+1}\int x^n \log^{m-1} x\, dx.
\]
Therefore, solving the fixed-point equation
\[
\widetilde O\big(B^{(d-1)/2}\varepsilon^{-(d-3)}\big) = \varepsilon^2 t^{1/2},
\]
we obtain, for $d \ge 3$,
\[
\varepsilon_t(B) = \widetilde O\big(B^{1/2} t^{-1/(2(d-1))}\big),
\]
where we highlight the dependence on the as-yet-undetermined quantity $B$. A similar argument using the entropy bound $(B/\zeta)^{d/2}$ yields $\varepsilon_t(B) = \widetilde O\big(B^{d/(d+4)} t^{-2/(d+4)}\big)$ for $d \in \{1, 2\}$.

Now define
\[
T_1(B) = \inf\{t : \varepsilon_t(B) \le d_H(p,\mathcal{L})\},
\]
and observe that
\[
T_1(B) = \begin{cases} \widetilde O\big(B^{d-1}\, d_H(p,\mathcal{L})^{-2(d-1)}\big) & d \ge 3, \\ \widetilde O\big(B^{d/2}\, d_H(p,\mathcal{L})^{-(4+d)/2}\big) & d \in \{1,2\}. \end{cases}
\]
Finally, by Lemma 21, for $B \ge C_d\Delta_P^{-d}$ and $t \ge T_2 := 2C_d d^3/\Delta_P^2$, the probability of the event $E_t$ is at least $1 - 2\exp\big(-tC\Delta_P^2/d^2\big)$. Let us set $B_* = C_d\Delta_P^{-d}$ and $T_0 := \max(T_1(B_*), T_2)$. We obtain that the lower bound
\[
\sigma_t \ge \frac{t\, d_H^2(p,\mathcal{L})}{24}
\]
holds with probability at least $1 - C\exp\big(-tc\, d_H^2(p,\mathcal{L})\big) - C\exp\big(-tc\Delta_P^2/d^2\big)$ for all $t \ge T_0$.
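Before continuing the proof, we note that the fixed-point scaling just derived is easy to check numerically. The sketch below (ours; all unspecified constants are set to one) solves the fixed-point equation by bisection for $d = 5$ and compares the root against the predicted rate $B^{1/2} t^{-1/(2(d-1))}$.

```python
import numpy as np

def entropy_integral(eps, B, d):
    """Trapezoid-rule integral of (B / zeta)^{(d-1)/2} over [eps^2, eps]
    (the constants c, C of the text are set to 1)."""
    z = np.linspace(eps ** 2, eps, 200_000)
    vals = (B / z) ** ((d - 1) / 2)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(z))

def eps_t(t, B, d, lo=1e-6, hi=1.0, iters=80):
    """Bisect for the crossing of the entropy integral and sqrt(t) * eps^2;
    the integral dominates for small eps and loses for large eps."""
    f = lambda e: entropy_integral(e, B, d) - np.sqrt(t) * e ** 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

d, B = 5, 2.0
for t in [10 ** 4, 10 ** 6, 10 ** 8]:
    # Predicted rate, up to polylogarithmic factors: B^{1/2} * t^{-1/(2(d-1))}
    print(t, eps_t(t, B, d), np.sqrt(B) * t ** (-1.0 / (2 * (d - 1))))
```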
Now, observe that at any time $t \ge \max\big(T_0,\ \frac{600\log(1/\alpha)}{d_H^2(p,\mathcal{L})}\big)$, it holds with probability at least $1 - \pi_t - C\exp\big(-tc\, d_H^2(p,\mathcal{L})\big) - C\exp\big(-tc\Delta_P^2/d^2\big)$ that
\[
\log R_t = \sigma_t - \rho_t \ge \frac{t\, d_H^2(p,\mathcal{L})}{600} \ge \log(1/\alpha),
\]
since $\sigma_t \ge t\, d_H^2(p,\mathcal{L})/24$ by the above, while $\rho_t \le t\, d_H^2(p,\mathcal{L})/25$ with probability at least $1-\pi_t$ by the control of §B.3, and $1/24 - 1/25 = 1/600$. Thus the probability that the rejection time $\tau_\alpha := \inf\{t : R_t \ge 1/\alpha\}$ exceeds the above bound is at most $\pi_t + C\exp\big(-tc\, d_H^2(p,\mathcal{L})\big) + C\exp\big(-tc\Delta_P^2/d^2\big)$.

B.2.3 Proof of Bracketing Entropy Bound on Log-Concave Laws

We proceed to prove Lemma 13. We note that the upper bound for $d \le 3$ was shown by Kim and Samworth [KS16]; below we focus on $d \ge 4$. We shall exploit two existing results from the literature on convex sets and functions. The first is essentially due to Bronshtein (see also [KDR19, Lem. 3]).

Lemma 22. [Bro76] Let $\mathcal{K}_d$ denote the collection of convex sets in $[-1,1]^d$. For any $\zeta > 0$, there exists a collection of pairs of convex sets $\mathcal{K}_{d,\zeta} \subseteq \mathcal{K}_d \times \mathcal{K}_d$ with $\log|\mathcal{K}_{d,\zeta}| = O(\zeta^{-(d-1)/2})$ such that
• every $(\underline K, \overline K) \in \mathcal{K}_{d,\zeta}$ satisfies $\mathrm{Leb}_d(\overline K \setminus \underline K) \le \zeta$;
• for every $K \in \mathcal{K}_d$, there exists $(\underline K, \overline K) \in \mathcal{K}_{d,\zeta}$ satisfying $\underline K \subseteq K \subseteq \overline K$.

In other words, the bracketing entropy of convex sets under the set-difference metric is controlled at the rate $\zeta^{-(d-1)/2}$. Importantly, the bracketing demonstrated above is proper. This result may be extended to the following bracketing entropy bound on convex functions, due to Gao and Wellner.

Lemma 23. [GW17, Thm. 1.5] Let $K$ be a convex set in $[-1,1]^d$, and let $\mathcal{C}_{K,B}$ be the set of convex functions bounded above by $B$ over $K$. Then the $L_2(K)$ bracketing entropy of $\mathcal{C}_{K,B}$ at scale $\zeta$ is bounded as $O((B/\zeta)^{d-1})$.

Above, the $L_2(K)$ metric is the usual distance $\|f - g\|_{L_2(K)} = \big(\int_K (f-g)^2\, dx\big)^{1/2}$, and the $L_2(K)$ bracketing entropy is the bracketing entropy when the size of a bracket $[u,v]$ is $|[u,v]| = \|u - v\|_{L_2(K)}$.

With the above in hand, we may proceed with the proof.

Proof of Lemma 13. For any log-concave density $f$, let $S := \{x \in \mathbb{R}^d : f(x) \ge \zeta^3\} = \{x \in \mathbb{R}^d : \log f(x) \ge 3\log\zeta\}$. Since $f$ is log-concave, the set $S$ is convex. As a result, by Lemma 22, there exists a convex set $\tilde S$ (the lower bracket of $S$ in $\mathcal{K}_{d,\zeta^2/B}$) such that $\mathrm{Leb}(S \setminus \tilde S) \le \zeta^2/B$ and $\tilde S \subseteq S$. Let $\tilde{\mathcal{C}}_{\tilde S,\, \zeta/B,\, \log(B/\zeta^3)}$ denote a $(\zeta/B)$-bracketing, in $L_2(\tilde S)$, of the convex functions bounded above by $\log(B/\zeta^3)$ on $\tilde S$. Since, on $\tilde S$, the function $-\log f$ is convex and bounded above by $3\log(1/\zeta) \le \log(B/\zeta^3)$, by Lemma 23 there exists a bracket $[-u, -l] \in \tilde{\mathcal{C}}_{\tilde S,\zeta/B,\log(B/\zeta^3)}$ such that, on $\tilde S$, $l \le \log f \le u$ and $\int_{\tilde S}(u(x)-l(x))^2\, dx \le \zeta^2/B^2$. Note that, on $\tilde S$, $\log f$ is bounded below by $3\log\zeta$, and that, without loss of generality, we may assume $\sup_{x\in\tilde S} u(x) \le \log B$, since $\log B$ is already a pointwise upper bound on $\log f$.

Next, we construct the functions
\[
U(x) := \begin{cases} e^{u(x)} & x \in \tilde S, \\ B & x \in S\setminus\tilde S, \\ \zeta^3 & x \in [-1,1]^d\setminus S, \end{cases}
\qquad
L(x) := \begin{cases} e^{l(x)} & x \in \tilde S, \\ \zeta^3 & x \in S\setminus\tilde S, \\ 0 & x \in [-1,1]^d\setminus S. \end{cases}
\]
Observe that $U \ge f \ge L$ on $[-1,1]^d$. Furthermore, for $\zeta \le 2^{-d}$ and $B \ge 1$,
\[
\begin{aligned}
\int\big(\sqrt U - \sqrt L\big)^2 dx
&= \int_{\tilde S}\big(e^{u(x)/2} - e^{l(x)/2}\big)^2 dx + \int_{S\setminus\tilde S}\big(\sqrt B - \zeta^{3/2}\big)^2 dx + \int_{[-1,1]^d\setminus S}\zeta^3\, dx \\
&\le \int_{\tilde S} \frac{B}{4}\,(u(x)-l(x))^2\, dx + B\cdot\frac{\zeta^2}{B} + 2^d\zeta^3 \\
&\le \frac{B}{4}\cdot\frac{\zeta^2}{B^2} + \zeta^2 + \zeta^2 \le 3\zeta^2,
\end{aligned}
\]
where we have exploited the fact that $z \mapsto e^{z/2}$ is Lipschitz on $(-\infty, \log B]$ with derivative bounded by $e^{(\log B)/2}/2 = \sqrt B/2$, to argue that $e^{u(x)/2} - e^{l(x)/2} \le \frac{\sqrt B}{2}\,(u(x)-l(x))$.

Since this construction can be carried out for any $f$, we conclude that we may construct a bracketing cover of $\mathcal{L}_{d,B}$ at scale $O(\zeta)$ as the union of the bracketing covers of convex functions over each of the sets in $\mathcal{K}_{d,\zeta^2/B}$. By Lemmas 22 and 23, the size of this cover is at most
\[
\exp\big(O((B/\zeta)^{d-1})\big)\cdot\exp\big(O\big((B\log(B/\zeta^3)/\zeta)^{d-1}\big)\big) = \exp\big(\widetilde O((B/\zeta)^{d-1})\big),
\]
and the claim follows. Let us again observe that the resulting cover is improper, in that the maps $U(x)$ and $L(x)$ are not themselves log-concave.
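The construction above is straightforward to sanity-check numerically in one dimension. In the sketch below (ours, illustrative), the convex-function bracket of Lemma 23 is replaced by constant shifts of $\log f$, and the set bracket is taken to be exact ($\tilde S = S$); the resulting envelopes satisfy $L \le f \le U$ pointwise, with squared Hellinger-type discrepancy well within $3\zeta^2$.

```python
import numpy as np

zeta, B = 0.05, 3.0
x = np.linspace(-1.0, 1.0, 200_001)
dx = x[1] - x[0]

# A log-concave density on [-1, 1] with peak below B: a truncated N(0, 0.15^2).
f = np.exp(-0.5 * (x / 0.15) ** 2)
f /= np.sum(f) * dx

S = f >= zeta ** 3                    # superlevel set {f >= zeta^3}: an interval
# Stand-in for the convex-function bracket of -log f on S (taking S-tilde = S):
# constant shifts at a scale ensuring int (u - l)^2 <= zeta^2 / B^2.
delta = zeta / (2 * np.sqrt(2) * B)
logf = np.log(f)
U = np.where(S, np.exp(logf + delta), zeta ** 3)   # upper envelope
L = np.where(S, np.exp(logf - delta), 0.0)         # lower envelope

assert np.all(L <= f + 1e-12) and np.all(f <= U + 1e-12)
hell2 = np.sum((np.sqrt(U) - np.sqrt(L)) ** 2) * dx
print(hell2, "<=", 3 * zeta ** 2)     # squared size is well within 3 * zeta^2
```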
B.3 Regret Control for Bounded Lipschitz Laws on the Unit Box

As this subsection demonstrates, both Corollaries 15 and 11 rely on arguing that laws in $\mathcal{D}_{\mathrm{Box,Lip},B}$ can be estimated online in a low-regret manner. We argue this by exploiting the following result, which is a simplification of the results of Wong and Shen on sieve estimators.

Lemma 24. (Adaptation of [WS95, Cor. 1 & Thm. 6]) For every $P \in \mathcal{D}_{\mathrm{Box,Lip},B}$ and $t \ge 1$, there exists a sieve MLE $\hat q(\cdot) = \hat q(\cdot\,; X_1^t)$ and a constant $A > 1$ depending only on $B$ such that for every $\zeta \ge \zeta_t$,
\[
P^\infty\Big(\mathrm{KL}(p\|\hat q) > \tfrac{1}{A}\zeta^2\log(1/\zeta)\Big) \le A\exp\Big(-t\,\frac{\zeta^2}{A\log(1/\zeta)}\Big),
\]
where $\zeta_t = \widetilde O\big(t^{-1/(2(d+2))}\big)$.

Proof. The cited results of Wong and Shen apply because the densities of laws in $\mathcal{D}_{\mathrm{Box,Lip},B}$ are uniformly bounded above. This directly yields the entire statement, barring the scale bound on $\zeta_t$. This scale is determined by the same entropy-integral fixed-point equation that appears in Lemma 17; for this instance, the bound can be derived using the standard fact that the Hellinger bracketing entropy of Lipschitz functions on a box at scale $\eta$ is controlled as $O(\eta^{-(d+1)})$ [Vaa94].

The sieve estimators in this result can be chosen with a fair amount of latitude. In particular, one explicit choice is to construct, for each $\zeta > 0$, a bracketing of the class $\mathcal{D}_{\mathrm{Box,Lip},B}$ at scale $\zeta$, and to choose a representative density within each bracket. The sieve MLE then involves choosing a $\zeta$ at each time, and estimating the law as the likelihood maximiser amongst the aforementioned representative densities. Importantly for us, the lower brackets in these bracketings can be taken to be uniformly larger than $1/B$, and the upper brackets smaller than $B$, since $p \in [1/B, B]$; as a result, the sieve estimates are uniformly bounded between $1/B$ and $B$.
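For intuition about such online, low-regret estimation, here is a minimal sketch (ours; it is not the sieve construction above, whose resolution would additionally be refined with $t$): a Laplace-smoothed histogram predictor on $[0,1]$, whose per-round log-loss regret against a bounded Lipschitz density decays toward the binning bias.

```python
import numpy as np

rng = np.random.default_rng(2)

class HistogramPredictor:
    """Online predictive density on [0, 1]: a Laplace-smoothed histogram,
    a crude stand-in for a sieve estimator at a fixed resolution."""
    def __init__(self, n_bins=32):
        self.n_bins = n_bins
        self.counts = np.ones(n_bins)           # Laplace smoothing: start at 1

    def density(self, x):
        b = min(int(x * self.n_bins), self.n_bins - 1)
        return self.counts[b] / self.counts.sum() * self.n_bins

    def update(self, x):
        b = min(int(x * self.n_bins), self.n_bins - 1)
        self.counts[b] += 1

# Target p: a bounded Lipschitz density on [0,1], p(x) = 1 + 0.5 sin(2 pi x) >= 1/2.
p = lambda x: 1 + 0.5 * np.sin(2 * np.pi * x)

def sample_p(n):
    out = []                                    # rejection sampling, envelope 3/2
    while len(out) < n:
        x, u = rng.uniform(), rng.uniform(0, 1.5)
        if u < p(x):
            out.append(x)
    return np.array(out)

est, regret = HistogramPredictor(), 0.0
for t, x in enumerate(sample_p(20_000), start=1):
    regret += np.log(p(x)) - np.log(est.density(x))   # one term of rho_t(E; p)
    est.update(x)
    if t % 5000 == 0:
        print(t, regret / t)   # per-round regret decays toward the binning bias
```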
Below, we first prove Corollary 15 using the above results, and then show that Corollary 11 follows as a simple consequence of the same argument.

Proof of Corollary 15. As argued in the main text, the expected rejection time is bounded as $E[\tau] \le \sum \pi_t + O(T_0)$, where $T_0 = o\big(d_H(p,\mathcal{L})^{-2(d+3)}\big)$. We thus only need to exhibit a sequence $\pi_t$ such that $\sum \pi_t$ is appropriately small and, for every $t$,
\[
P^\infty\Big(\frac{\rho_t(\mathcal{E}; p)}{t\, d_H^2(p,\mathcal{L})} \ge \frac{1}{25}\Big) \le \pi_t,
\]
where $p \in \mathcal{D}_{\mathrm{Box,Lip},B}$ and $\mathcal{E}$ is a sequence of sieve estimators. We proceed to do so below.

For succinctness, write $\varepsilon = d_H(p,\mathcal{L})$. Let $A$ be the constant from Lemma 24, and set
\[
T_1 := \min\{t : \zeta_t^2\log(1/\zeta_t)/A < \varepsilon^2/200,\ \zeta_t < 1/\sqrt e\}.
\]
Further, let
\[
\zeta(\varepsilon) := \max\{\zeta \in [0, 1/\sqrt e] : \zeta^2\log(1/\zeta) \le A\varepsilon^2/200\}.
\]
In the subsequent proof we shall use Lemma 24 with $\zeta = \zeta(\varepsilon) \ge \zeta_{T_1}$. To this end, note that if $\zeta(\varepsilon) < 1/\sqrt e$, or equivalently $A\varepsilon^2/200 < 1/2e$, then the equality $\zeta(\varepsilon)^2\log(1/\zeta(\varepsilon)) = A\varepsilon^2/200$ holds. From this we may derive that $\zeta(\varepsilon)^2 > a\varepsilon^2/\log(1/\varepsilon)$ for some small enough constant $a$. (Indeed, the equality is of the form $x\log(1/x) = y$ for $x = \zeta(\varepsilon)^2$ and $y = A\varepsilon^2/100$, in the range $0 < x < 1/e$; since $x \mapsto x\log(1/x)$ is monotonically increasing on $[0, 1/e]$, it suffices to verify that for $y \in [0, 1/2e]$, $\frac{y}{2\log(1/y)}\log\big(\frac{2\log(1/y)}{y}\big) < y$. This inequality is equivalent to $\log(2\log(1/y)) < \log(1/y) \iff y\log(1/y) < 1/2$, which holds since the maximum value of $y \mapsto y\log(1/y)$ is $1/e < 1/2$.) Consequently, the exponent in the upper bound of Lemma 24 satisfies
\[
\frac{\zeta(\varepsilon)^2}{A\log(1/\zeta(\varepsilon))} = \frac{200\,\zeta(\varepsilon)^4}{A^2\varepsilon^2} \ge \frac{\varepsilon^2}{A'\log(1/\varepsilon)}
\]
for some large enough constant $A'$. We shall also assume that $A' \ge \max(1, A)$.

Let $\mathcal{E}$ be a choice of sieve estimators such that for every $t$ and $x \in [-1,1]^d$, $\hat q_{t-1}(x) \in [1/B, B]$, which can be ensured by the discussion above. Notice, by the independence of the data $\{X_t\}$, that for any $t$,
\[
E[\log p(X_t)/\hat q_{t-1}(X_t) \mid \mathcal{F}_{t-1}] = \mathrm{KL}(p\|\hat q_{t-1}).
\]
Let $\theta \in (0,1)$ and $M \ge 0$ be two parameters of the argument that we shall set later, and consider times of the form $t = T_1 + \tau$ for some $\tau \ge MT_1$.

Since $\zeta_{T_1+\theta\tau} \le \zeta_{T_1} \le \zeta(\varepsilon)$ for each $\tau > 0$, the bound of Lemma 24 is in force at each time $s \in [T_1+\theta\tau : T_1+\tau]$ with $\zeta = \zeta(\varepsilon)$. Applying Lemma 24 to each $s$ in this range, and exploiting the behaviour of $\zeta(\varepsilon)^2$ established above,
\[
P^\infty\big(\mathrm{KL}(p\|\hat q_{s-1}) > \varepsilon^2/200\big) \le A'\exp\Big(-s\,\frac{\varepsilon^2}{A'\log(1/\varepsilon)}\Big).
\]
Next, applying the union bound over $s \in [T_1+\theta\tau : T_1+\tau]$, we conclude that
\[
\begin{aligned}
P^\infty\Big(\exists\, s \in [T_1+\theta\tau : T_1+\tau] :\ \mathrm{KL}(p\|\hat q_{s-1}) > \frac{\varepsilon^2}{200}\Big)
&\le \sum_{s=T_1+\theta\tau}^{T_1+\tau} A'\exp\Big(-s\,\frac{\varepsilon^2}{A'\log(1/\varepsilon)}\Big) \\
&= A'\exp\Big(-(T_1+\theta\tau)\frac{\varepsilon^2}{A'\log(1/\varepsilon)}\Big)\cdot\frac{1}{1-\exp\big(-\varepsilon^2/(A'\log(1/\varepsilon))\big)} \\
&\le \frac{2A'^2\log(1/\varepsilon)}{\varepsilon^2}\exp\Big(-\theta\tau\,\frac{\varepsilon^2}{A'\log(1/\varepsilon)}\Big),
\end{aligned}
\]
where the equality sums the geometric series, and the final inequality uses that $T_1 \ge 0$ and that $1/(1-e^{-u}) \le 2/u$ for $u < 1$.

Next, observe that since $\frac{1}{B^2} \le \frac{p(x)}{\hat q_{t-1}(x)} \le B^2$ for any $x \in [-1,1]^d$, we have the bound $|\log(p(X_t)/\hat q_{t-1}(X_t))| \le 2\log B$. Therefore the Azuma–Hoeffding inequality is applicable, and yields that for every $\tau \ge 1$ and $\delta > 0$,
\[
P^\infty\Bigg(\sum_{s=T_1+\theta\tau}^{T_1+\tau}\log\frac{p(X_s)}{\hat q_{s-1}(X_s)} > \sum_{s=T_1+\theta\tau}^{T_1+\tau}\mathrm{KL}(p\|\hat q_{s-1}) + (\tau-\theta\tau)\delta\Bigg) \le \exp\big(-(\tau-\theta\tau)\delta^2/8\log^2 B\big).
\]
We proceed by setting $\delta = \varepsilon^2/200$ above, and applying the union bound, to conclude that there exists a constant $C$ such that
\[
P^\infty\Bigg(\sum_{s=T_1+\theta\tau}^{T_1+\tau}\log\frac{p(X_s)}{\hat q_{s-1}(X_s)} > \frac{(1-\theta)\tau\varepsilon^2}{100}\Bigg) \le \exp\Big(-\frac{(1-\theta)\tau\varepsilon^4}{C\log^2 B}\Big) + \frac{C\log(1/\varepsilon)}{\varepsilon^2}\exp\Big(-\theta\tau\,\frac{\varepsilon^2}{C\log(1/\varepsilon)}\Big). \tag{6}
\]
Let us call the right-hand side of (6) $\pi(\tau,\theta)$. By the definition of $\rho_t$, and the boundedness of $\log\frac{p(x)}{\hat q_{s-1}(x)}$ for every $s$, it follows that, with probability at least $1-\pi(\tau,\theta)$,
\[
\rho_{T_1+\tau}(\mathcal{E}; p) = \sum_{s=1}^{T_1+\tau}\log\frac{p(X_s)}{\hat q_{s-1}(X_s)} \le 2(T_1+\theta\tau)\log B + \frac{(1-\theta)\tau\varepsilon^2}{100}.
\]
So long as we can choose $\theta, M$ such that this upper bound is smaller than $(T_1+\tau)\varepsilon^2/25$, the inequality (6) will limit the probability that $\rho_{T_1+\tau} > \varepsilon^2(T_1+\tau)/25$, which is precisely our goal. But observe that this indeed occurs if $(\theta + 1/M) \le 3\varepsilon^2/(200\log B)$, since in that case, using $\tau \ge MT_1$,
\[
2(T_1+\theta\tau)\log B + \frac{(1-\theta)\tau\varepsilon^2}{100} \le \tau\Big(2(1/M+\theta)\log B + \frac{\varepsilon^2}{100}\Big) \le \tau\Big(\frac{3\varepsilon^2}{200\log B}\cdot 2\log B + \frac{\varepsilon^2}{100}\Big) = \frac{\tau\varepsilon^2}{25} \le \frac{(T_1+\tau)\varepsilon^2}{25}.
\]
So, we may set $\theta = \min\big(1/2,\ \varepsilon^2/(100\log B)\big)$ and $M = \max\big(1,\ (200\log B)/\varepsilon^2\big)$, and conclude that for any $\tau \ge MT_1$,
\[
P^\infty\big(\rho_{T_1+\tau}(\mathcal{E}; p)/\big((T_1+\tau)\varepsilon^2\big) > 1/25\big) \le \pi(\tau),
\]
where, for a constant $C'$,
\[
\pi(\tau) = \exp\Big(-\frac{\tau\varepsilon^4}{C'\log^2 B}\Big) + \frac{C'\log(1/\varepsilon)}{\varepsilon^2}\exp\Big(-\frac{\tau\varepsilon^2}{C'\log(1/\varepsilon)\log B}\Big).
\]
Note that, in the terminology of Theorem 14, $\pi_t = \pi(t-T_1)$ for $t \ge (M+1)T_1$; for $t < (M+1)T_1$ we may of course use the trivial bound $\pi_t \le 1$.
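Two elementary facts used above, namely that $1/(1-e^{-u}) \le 2/u$ for $u \in (0,1)$, and the lower bound on the root of $x\log(1/x) = y$ underlying the estimate $\zeta(\varepsilon)^2 \gtrsim \varepsilon^2/\log(1/\varepsilon)$, can be confirmed numerically; a quick check (ours) follows.

```python
import numpy as np
from scipy.optimize import brentq

# Check 1: 1 / (1 - exp(-u)) <= 2 / u for u in (0, 1].
u = np.linspace(1e-6, 1.0, 1_000_000)
assert np.all(1.0 / (1.0 - np.exp(-u)) <= 2.0 / u)

# Check 2: the root x of x log(1/x) = y in (0, 1/e) satisfies
# x >= y / (2 log(1/y)) for y in (0, 1/(2e)); with x = zeta(eps)^2 and
# y = A eps^2 / 100 this yields zeta(eps)^2 >= a eps^2 / log(1/eps).
for y in [0.9 / (2 * np.e), 1e-3, 1e-6]:
    x = brentq(lambda z: z * np.log(1.0 / z) - y, 1e-300, 1.0 / np.e)
    assert x >= y / (2 * np.log(1.0 / y))
print("both inequalities hold on the tested ranges")
```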
It remains to compute the resulting bound on the expected rejection time. To this end, observe, by summing the appropriate geometric series, that
\[
\begin{aligned}
\sum_{t\ge1}\pi_t &\le (M+1)T_1 + \sum_{\tau \ge MT_1}\pi(\tau) \\
&\le (M+1)T_1 + \frac{1}{1-\exp\big(-\varepsilon^4/(C'\log^2 B)\big)} + \frac{C'\log(1/\varepsilon)}{\varepsilon^2\big(1-\exp\big(-\varepsilon^2/(C'\log(1/\varepsilon)\log B)\big)\big)} \\
&\le O\Big(\frac{1}{\varepsilon^2}\Big)\, T_1 + \widetilde O\Big(\frac{1}{\varepsilon^4}\Big),
\end{aligned}
\]
where the $O$ bounds are taken as $\varepsilon \to 0$, and we have suppressed the dependence on $B$ and $\log(1/\varepsilon)$. But, since in Lemma 24 we have $\zeta_t = \widetilde O\big(t^{-1/(2(d+2))}\big)$, and since $T_1$ is the first time that $\zeta_t^2\log(1/\zeta_t) \le A\varepsilon^2/200$, we may conclude that $T_1 = \widetilde O\big(\varepsilon^{-2(d+2)}\big)$. The claim follows upon noticing that $O(\varepsilon^{-2})\cdot T_1 = \widetilde O\big(\varepsilon^{-2(d+3)}\big)$, and recalling that $\varepsilon = d_H(p,\mathcal{L})$.

We conclude with a brief proof of Corollary 11, which exploits the bounds developed in the argument above.

Proof of Corollary 11. It suffices to argue that, using the estimators from the proof of Corollary 15, for any $P \in \mathcal{D}_{\mathrm{Box,Lip},B}$,
\[
P^\infty\Big(\limsup_t \frac{\rho_t(\mathcal{E}; p)}{t\, d_H^2(p,\mathcal{L})} \le \frac{1}{25}\Big) = 1.
\]
This follows since, for each $t$,
\[
P^\infty\Big(\frac{\rho_t(\mathcal{E}; p)}{t\, d_H^2(p,\mathcal{L})} > \frac{1}{25}\Big) \le \pi_t,
\]
and $\sum \pi_t < \infty$, which yields precisely the above relation by the Borel–Cantelli lemma.